
Borg Backup

The Borg

A very useful utility is the Borg Backup system, or just Borg. It’s a deduplicating backup system, meaning that it scans the files and, when it finds data that is already in the backup, the data in the second and all subsequent files is replaced with a reference to the first instance of that data.

The idea is that the same data is only stored once. All the backups you take after the initial one store only the differences and the new data that has accumulated since the last backup. This means that backing up after the initial backup is very fast and efficient, and saves bandwidth and storage space.

Traditional backups usually consist of a full backup every month with daily increments in between. If you need to restore a file you have to take the latest full backup and then apply each increment taken after it. With Borg that is not necessary: you can view the file system exactly as it looked at each and every backup point taken.

In fact you can mount the whole backup as a file system and traverse it from there. It’s very effective. So let’s get started, because face it: you don’t back up as much as you should!

Borg can be used on multiple platforms, but my commands here will be for Linux.

The first step is to create a repository. This may sit on a different machine, a NAS, an attached USB drive or even on the same machine. Of course you really want multiple backups, so you can take the Borg backup locally and then rsync it to as many locations as you feel are necessary.

Taking the backup

The first step is to create the directory for the backup repo, and then we need to initialize it for use with Borg. This is quite simply done as:

$ sudo mkdir /bup
$ sudo borg init /bup

When the repository has been created it is time for the first, initial backup. The format should be clear in a bit; it’s not complicated and can look like this:

$ sudo borg create --progress --info --stats /bup::lenovo-170202_163423 /home /root /boot /etc /var

The command above should be a single line. The first thing we give to borg is the command, in this case create, to create a new backup set for us. Then we have some flags: --progress shows a progress indicator while borg is working, detailing the number of bytes being read, backed up, compressed and deduplicated; --info sets the information level borg presents to us; and --stats lets borg summarize the operation with some statistics.

The next part of the command, /bup::lenovo-170202_163423, specifies the backup location and backup name. The name is given after the double colon :: mark. In this case it’s composed of the date (yymmdd) and time (hhmmss) when the backup was started; doing that makes it easy to find the right set of data later when a restore is needed.

Why did I prefix it with lenovo? Well, my main Linux laptop is a Lenovo and I also have other computers, like an ASUS laptop. The beauty of deduplicating backups is that I can back up multiple machines to the same repo. By doing that it will deduplicate across the machines, and if I have the same files and data in multiple places they will just be replaced by references to the data that is already in the backup.

The final part of the command is just all the paths I want to include in this backup. They can vary from time to time: I might back up /home daily but /root only once a week if I want. No problem at all with borg.

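If you put the backup in a script or cron job, the timestamped name can be generated for you instead of typed by hand. A minimal sketch, assuming GNU date and the same repo path as above:

$ sudo borg create --progress --info --stats /bup::lenovo-$(date +%y%m%d_%H%M%S) /home /root /boot /etc /var
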
Restoring a backup

No backup system is actually deployed until you have attempted to retrieve data from it and succeeded, so that you know what to do in an emergency, whether that is extracting old data that was mistakenly erased or restoring a full system after a hard drive crash.

Restoring a borg backup works a little differently from what you may be used to. First of all you can of course extract the data fully, or just single files if you know their paths, just like with any other backup system. The restore command is called extract in borg.

$ sudo borg extract /bup::lenovo-170202_163423

This will extract the entire archive and then you can move the files into their respective locations. You can also extract, for example, only the etc folder from the archive:

$ sudo borg extract /bup::lenovo-170202_163423 etc

Extraction always writes to the current working directory. Therefore you should first extract, then move the files into their correct locations in your file system. If all the backups are taken from the root of the file system (/) you can cd there before extracting, but I recommend extracting on a different volume first and then restoring from there. The reason is that there is usually a lot of stuff in a backup that you may not always want to restore.

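For example, to restore something from etc via a staging directory (the /tmp/restore path and the fstab file are just arbitrary choices for this sketch; copy back only what you actually need):

$ mkdir /tmp/restore && cd /tmp/restore
$ sudo borg extract /bup::lenovo-170202_163423 etc
$ sudo cp -a etc/fstab /etc/fstab
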
Mounting the backup as a file system

Borg actually offers another way as well. You can mount the backup as a volume, or you can mount the whole repo and see all the backup points made, select the one you want and then just copy the files from there to the live system.

$ sudo borg mount /bup::lenovo-170202_163423 /mnt

This will mount the backup lenovo-170202_163423 in the file system at /mnt. You can then cd to /mnt and use cp and friends to copy the files to their right places.

When done you can dismount it (otherwise other processes can’t back up; the repo is locked while it is mounted):

$ sudo umount /mnt

Borg uses FUSE to mount local directories.

You may also mount the whole repository:

$ sudo borg mount /bup /mnt

Now when you go into the /mnt folder you will see all your backup names as directories:

$ ls
161204_040001 170101_203409 170113_040001 170117_040001 170121_040001 170125_010344 170128_030332
161206_040001 170108_040001 170114_040001 170118_040001 170122_214910 170125_040001 170128_040001
161218_174848 170111_040001 170115_040002 170119_040001 170123_040001 170126_040001 170129_040001
161225_040001 170112_040001 170116_040001 170120_040001 170124_040002 170127_040001 170201_082851

As you can see, I generally name my backups YYMMDD_HHMMSS, just so it’s easy for me to find a specific date.

I can then cd into one of them:

$ cd 170112_040001
$ ls
boot etc home root var vmlinuz vmlinuz.old

When done, don’t forget to unmount the archive as no new backups can be taken while it is mounted.

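If you just want to see which archives exist without mounting anything at all, borg also has a list command (not shown above, but standard borg):

$ sudo borg list /bup
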
There you go. Start using it!


More effective CIDR-blocking

Previously we have talked about how to block certain addresses using the firewall in Linux (iptables), but if you have a large number of CIDR blocks, say whole countries like China (about 7000 blocks), this will not be kind to the CPU in the server.

Especially the script that inserts them by repeatedly calling iptables: the first few hundred calls will be quick, but then it slows down as the kernel won’t process that many insertions into the iptables lists.

There is another way that is just as effective, called blackholing the IP ranges you wish to block from your server. This is done by adding routes for those packets that lead nowhere.

# ip route add blackhole <ip address>

This works quite beautifully even with tens of thousands of addresses. As before, we read the CIDR files we want in order to create the null routes that are needed.

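You can try it by hand on a single range first before scripting it (203.0.113.0/24 below is just the documentation example network):

# ip route add blackhole 203.0.113.0/24
# ip route show | grep blackhole
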
Here is a script that will read a directory of CIDR files and null route all of them.

# Walk every CIDR list file and add a blackhole route for each entry
for f in /etc/iptables/hosts-banned/*
do
    LINES=$(wc -l "$f" | awk '{print $1}')
    echo -n "Processing k-line file $f with $LINES blocks... "
    date +"%H:%M:%S"
    # read -r keeps backslashes intact; quoting guards against odd names
    while read -r p
    do
        ip route add blackhole "$p"
    done < "$f"
done

The CIDR files in this case reside in /etc/iptables/hosts-banned/ and they can be obtained online, or you may add any address ranges you want, perhaps based on automatic firewalling.

To remove a certain blackholed range or IP you can do the same thing again, changing the ip route add to an ip route del command instead.

ip route del <ip address>

You can produce a script that removes them by doing the following:

ip route | grep blackhole | awk '{ print "ip route del " $2 }' > unblock
chmod 700 unblock
./unblock

That’s it, they are all now cleared.


Configure zsh in Byobu

Most Linux distributions these days run Bash as their default shell. While Bash is OK it’s actually not my favourite. I’ve always been partial to zsh, which for example has outstanding completion abilities that Bash totally lacks.

If you run byobu, which is an add-on for tmux or screen with lots of nifty features, then you may want to configure it to use zsh as its standard shell.

This works if you are using tmux rather than screen as your terminal multiplexer.

It’s easy if you know what to do. So open an editor and edit the file:

~/.byobu/.tmux.conf

Then enter the following:

set -g default-shell /bin/zsh
set -g default-command /bin/zsh

Save the file, restart your byobu and everything should be daddy-o.

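To check that it took effect you can restart byobu and look at which shell you land in; a quick sketch, assuming zsh lives at /bin/zsh as configured above:

$ byobu kill-server
$ byobu
$ echo $0
zsh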

Ubuntu persistent network interface names

In Ubuntu 16.x systemd is used more than in previous versions. This also means it is now responsible for setting up your network cards. Many people have been somewhat surprised that their eth0 has changed to something like enp0s25. This is actually an improvement: before, there was no real telling in which order NICs would be assigned names, so a hardware change could potentially offset eth0 and eth1 and so on.

The new way is actually not too bad, but if you, like me, do a lot of manual configuration of the network interfaces on the fly, their names can be tedious to type and to remember. Of course there is a rather simple mechanism to change this so you can select your own names for the interfaces, such as lan0 and dmz1, or why not plain and simple wifi if there will never be more than one wifi card in the computer.

This is a step-by-step guide that was tested under Ubuntu 16.10 and worked for me. Please leave a comment if you have problems, improvements or anything else to add.

Getting the names

First of all we need to find out what the NICs in the system are actually called. Here is a dump from my laptop, using the ifconfig command to list all interfaces:

root@kraken:~# ifconfig -a
enp0s25: flags=4098<BROADCAST,MULTICAST> mtu 1500
 ether f0:de:f1:8d:89:fe txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 device interrupt 20 memory 0xf2a00000-f2a20000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1 (Local Loopback)
 RX packets 3143 bytes 204307 (204.3 KB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 3143 bytes 204307 (204.3 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 192.168.1.3 netmask 255.255.255.0 broadcast 192.168.1.255
 inet6 fe80::846f:cc3d:2984:d240 prefixlen 64 scopeid 0x20<link>
 ether 00:24:d7:f0:a3:a4 txqueuelen 1000 (Ethernet)
 RX packets 4600 bytes 5069857 (5.0 MB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 3348 bytes 592050 (592.0 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wwp0s29u1u4i6: flags=4098<BROADCAST,MULTICAST> mtu 1500
 ether 02:80:37:ec:02:00 txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

We are looking for two things in the above: the MAC address and the name of the network interface card we want to rename. The NICs here are named after the type of card, the bus it is attached to and so on. What used to be called eth0 is now referred to as enp0s25, wlan0 is wlp3s0, and there is also a WAN card in the machine called wwp0s29u1u4i6, which is definitely a mouthful.

Okay, so we would like to rename these to more sensible names. First we pick the names, such as eth0, wlan0, wan0 etc. Then we note down the MAC address of each card; you find it in the dump above next to the keyword ”ether”. Once we have that we can tell systemd to rename the cards the way we want. By tying the name to the MAC address it should also be persistent and not affected by inserting a new card into the computer system.

In the directory /etc/systemd/network we will create the following files:

root@kraken:/etc/systemd/network# ll
 total 20
 drwxr-xr-x 2 root root 4096 Dec 11 04:28 ./
 drwxr-xr-x 5 root root 4096 Nov 24 15:03 ../
 -rw-r--r-- 1 root root 55 Dec 6 23:44 01-eth0.link
 -rw-r--r-- 1 root root 56 Dec 6 23:39 02-wifi.link
 -rw-r--r-- 1 root root 55 Dec 6 23:40 03-wan.link

These link files can be used to match a device and then change its parameters, so they consist of a matching section and a link section. The first one, called 01-eth0.link, contains the following lines:

[Match]
  MACAddress=f0:de:f1:8d:89:fe

[Link]
  Name=eth0

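The other two files follow the same pattern, using the MAC addresses from the ifconfig dump earlier. A sketch of 02-wifi.link and 03-wan.link with the names we want:

[Match]
  MACAddress=00:24:d7:f0:a3:a4

[Link]
  Name=wlan0

and

[Match]
  MACAddress=02:80:37:ec:02:00

[Link]
  Name=wan0
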
With all three files in place we need to do two things. First we update the initial RAM file system in /boot, because some of these interfaces may already be up during boot time (such as eth0). This is done with the following command:

root@kraken:/etc/systemd/network# update-initramfs -u
 update-initramfs: Generating /boot/initrd.img-4.8.0-30-generic
 W: Possible missing firmware /lib/firmware/i915/kbl_guc_ver9_14.bin for module i915
 W: Possible missing firmware /lib/firmware/i915/bxt_guc_ver8_7.bin for module i915
 W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.

Once we have done this we can reboot our computer.

When the machine is up again we can check the network names:

anders@kraken:~$ ifconfig -a
eth0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
 ether f0:de:f1:8d:89:fe txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 device interrupt 20 memory 0xf2a00000-f2a20000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1 (Local Loopback)
 RX packets 1732 bytes 110296 (110.2 KB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 1732 bytes 110296 (110.2 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wan0: flags=4098<BROADCAST,MULTICAST> mtu 1500
 ether 02:80:37:ec:02:00 txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 192.168.1.3 netmask 255.255.255.0 broadcast 192.168.1.255
 inet6 fe80::1ed7:d5ac:433d:70c5 prefixlen 64 scopeid 0x20<link>
 ether 00:24:d7:f0:a3:a4 txqueuelen 1000 (Ethernet)
 RX packets 93 bytes 71048 (71.0 KB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 137 bytes 18113 (18.1 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

As you can see we now have eth0, wlan0 and wan0 instead of the default names. So if, like me, you work mainly from the command line, you will be happy that ifconfig eth0 now works just like it did before systemd entered the scene. And if you have firewall scripts you can of course rename your interfaces to something that is useful to you, such as lan, wan and dmz, or whatever makes sense.

Rsync non-standard SSH port

Using rsync is a very nice way to synchronize backups or other files between two machines. One thing that causes people a bit of a headache, however, is how to do that when not using the standard port 22 for ssh.

One reason to move ssh to a non-standard port is the internet-wide rise in botnets knocking on this port, trying several default usernames and passwords. A really easy way of fending that off is to move ssh to a different port number. Any port will do, but then there are a number of things that may break.

rsync is one of them. The rsync manual suggests that the port number can be inserted in the URL specification, such as:

rsync -a /source user@host.name:77/mnt/disk/destination

But this does not seem to work. The best way is instead to pass the port parameter directly to ssh using the rsync -e option, like this:

rsync -a -e "ssh -p77" /source user@host.name:/mnt/disk/destination

This works like a charm for most things. One more note: the rsync option --port=PORT is a listen port and won’t work as a destination port in this case.

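An alternative that avoids the -e flag entirely (ordinary ssh client configuration, not an rsync feature) is to pin the port in ~/.ssh/config, here using the same host.name as in the examples above:

Host host.name
    Port 77

With that in place, rsync -a /source user@host.name:/mnt/disk/destination uses port 77 automatically, and so does plain ssh.
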
Ubuntu 14.04 virtual host user selection broken (mpm_itk)

I recently moved some virtual hosts from an older Ubuntu 12.10 to the newer 14.04. The problem I had was that I could not get the automatic user selection in Apache to work with the mpm_itk module. The module can be installed but it does not appear to be configured properly.

To fix it I had to add the following line to /etc/apache2/apache2.conf:

LoadModule mpm_itk_module /usr/lib/apache2/modules/mpm_itk.so

After doing that, the virtual host definitions including the directive

AssignUserId <user> <group>

started working as expected again. I thought it was weird that it did not work out of the box; people have reported various issues with mpm_itk under Ubuntu 14.04 but I never found a solution to my particular problem, so I hope this helps someone else sort it out.

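For context, a minimal virtual host using that directive could look like the following sketch (the domain, path, user and group are hypothetical placeholders):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
    # mpm_itk: run requests for this vhost as the given user and group
    AssignUserId exampleuser examplegroup
</VirtualHost>
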
A Better Login for the Web

Logins on the web today generally consist of either a username/password pair or an email address and a password. This has to change. The reason is simple: people don’t select good passwords, and even if they do they re-use them on multiple sites, meaning that a sysadmin of some site may know your password to other sites, or may get hacked, or a multitude of other things can happen to it.

Emails as usernames are inherently bad. First of all, someone wanting to break in does not even have to guess your username, only your password. Also, people tend to use things like webmail for their email today, which means that anyone running that system can use the credentials to log on as someone else.

Most services will reset a password if you know someone’s email. It’s so easy to sneak onto a colleague’s computer, request a password reset, fetch the new password, delete the mail, then go to another computer, log in and change the email address used for resets. The first user stands little chance of ever getting the account back.

We are starting to see logins based on your Facebook or Google account, or even Yahoo! (although their services are getting scarcer every day). That is at least a step in the right direction: if you have a two-factor authentication method turned on (Google supports this) it can be reasonably safe. Nothing is a hundred percent, and it may not need to be; reasonably safe is good enough here.

But that relies on a third-party service, which makes you vulnerable if it should be offline. Or if your account gets cancelled for whatever reason, even just by mistake, you’re screwed and may not even be able to log in to your email to send a complaint.

There has to be a better way!

We have already seen some attempts. There are web login systems out there that display a challenge on your phone, which then sends a response using a local encryption key. Clever, but if your phone does not have network coverage it won’t work very well. There are other similar things built around a sound or light show, or QR codes or whatnot that you scan with your smartphone.

But all this relies on advanced hardware that may be out of battery or have no connection to the network, or if you are roaming you may not WANT to connect it to the operator network for data traffic because of the immense roaming charges.

There has to be a better way!

In fact, the solution is pretty easy. We can use standard everyday public key encryption methods to make this work pretty well, similar to what we do with SSL but without the fuss, and in a way that lets a registered user identify himself by signing a login certificate with his private key.

The certificate is issued by the web site when you register. It is encrypted with your public key so only you can decipher it. When you log in you encrypt the same certificate with the web site’s public key, and then only they can verify your authenticity. You also sign it with your private key so the web site can verify that your key is still valid.

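The signing half of such a handshake is everyday tooling already. A rough sketch with GPG, where challenge.txt is a made-up name for whatever the site issues you:

# user side: sign the server-issued challenge with the private key
$ gpg --armor --detach-sign challenge.txt
# server side: verify against the user's registered public key
$ gpg --verify challenge.txt.asc challenge.txt
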
All this needs is a pen drive and some open source software.

It’s time to build a better login method for the web. It’s time to make it easy. Keys can be stored on a thumb drive, in your phone or even on your own computers. No need for a third-party service. Losing a key means a revocation certificate is sent, your old key is no longer valid, and a new one is provided.

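Generating that revocation certificate is likewise a standard operation; with GPG it would be something like this (the key id is a placeholder):

$ gpg --output revoke.asc --gen-revoke user@example.com
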
When do we build it?

New Emacs on old Ubuntu

If you want to install Emacs24 on an older Ubuntu such as 12.04 you can do it manually by adding the following repository and then installing.

First remove the emacsen you have, which is probably emacs23:

$ sudo apt-get remove emacs emacs23

Then add the new repository and update:

$ sudo add-apt-repository ppa:cassou/emacs
$ sudo apt-get update

Then install Emacs24:

$ sudo apt-get install emacs24 emacs24-el emacs24-common-non-dfsg