More effective CIDR-blocking

Previously we talked about how to block certain addresses using the Linux firewall (iptables), but if you have a large number of CIDR blocks, say a whole country like China (about 7,000 blocks), this is not kind to the CPU in the server.

Especially the script that inserts them by calling iptables repeatedly: the first few hundred calls are quick, but then it slows down as the kernel won’t process that many insertions into the iptables lists.

There is another way that is just as effective, called blackholing the IP ranges you wish to block from your server. This is done by adding routes for those packets that lead nowhere.

# ip route add blackhole <ip address>

This works quite beautifully even with tens of thousands of addresses. As before, we read the CIDR files we want in order to create the null routes that are needed.

Here is a script that will read a directory of CIDR files and null route all of them.

for f in /etc/iptables/hosts-banned/*
do
    LINES=$(wc -l < "$f")
    echo -n "Processing k-line file $f with $LINES blocks... "
    date +"%H:%M:%S"
    while read -r p
    do
        ip route add blackhole "$p"
    done < "$f"
done
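A variant of the loop above, sketched here with an illustrative /tmp directory standing in for the real CIDR directory, is to write all the route commands to one batch file and hand it to a single ip(8) process with its -batch mode, which avoids forking ip once per route:

```shell
# Demo input; in real use point awk at /etc/iptables/hosts-banned/* instead.
mkdir -p /tmp/cidr-demo
printf '198.51.100.0/24\n203.0.113.0/24\n' > /tmp/cidr-demo/example.txt

# Turn every CIDR line into an 'ip route' batch command.
awk '{ print "route add blackhole " $1 }' /tmp/cidr-demo/* > /tmp/blackhole.batch

# Apply every route in a single invocation (requires root):
# ip -batch /tmp/blackhole.batch
```
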

The CIDR files in this case reside in /etc/iptables/hosts-banned/ and they can be fetched online, or you may add any address ranges you want, perhaps based on automatic firewalling.

To remove a certain blackholed range or IP you do the same thing again, changing the ip route add to an ip route del command instead.

ip route del <ip address>

You can produce a script that removes them by doing the following:

ip route | grep blackhole | awk '{ print "ip route del " $2 }' >unblock
chmod 700 unblock
./unblock

That’s it, they are all now cleared.
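If you want to see what the generated unblock script looks like without touching the live routing table, the same pipeline can be tried on canned ip route output:

```shell
# Simulated 'ip route' output; the real command reads the live table.
printf 'blackhole 198.51.100.0/24\ndefault via 192.168.1.1 dev eth0\n' |
  grep blackhole | awk '{ print "ip route del " $2 }'
# prints: ip route del 198.51.100.0/24
```
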

 

Configure zsh in Byobu

Most Linux systems these days run Bash as their default shell. While Bash is OK, it’s not my favourite actually. I’ve always been partial to zsh, which for example has outstanding completion qualities that Bash totally misses.

If you run byobu, which is an add-on for tmux or screen with lots of nifty features, then you may want to configure it to use zsh as its standard shell.

This works if you are using tmux rather than screen as your terminal multiplexer.

It’s easy if you know what to do. So open an editor and edit the file:

~/.byobu/.tmux.conf

Then enter the following:

set -g default-shell /bin/zsh
set -g default-command /bin/zsh

Save the file, restart your byobu and everything should be daddy-o.
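To double-check which shell a new byobu window actually started, you can ask ps about the current process (this just prints the running shell's name, whatever it is):

```shell
# Print the command name of the shell you are typing into.
ps -p $$ -o comm=
```
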

 

Ubuntu persistent network interface names

In Ubuntu 16.x, systemd is used more than in previous versions. This also means it is now responsible for setting up your network cards. Many people have been somewhat surprised that their eth0 has changed to something like enp0s25. This is of course an improvement: before, there was no real telling in which order NICs would be assigned names, so a hardware change could potentially offset eth0, eth1 and so on.

The new way is actually not too bad, but if you, like me, do a lot of manual configuration of the network interfaces on the fly, their names can be tedious to type and to remember. Of course there is a rather simple mechanism to change this so you can select your own names for the interfaces, such as lan0 and dmz1, or why not plain wifi if there will never be more than one wifi card in the computer.

This is a step-by-step guide that was tested under Ubuntu 16.10 and worked for me. Please leave a comment if you have problems, improvements or anything else to add.

Getting the names

First of all we need to find out what the NICs in the system are actually called. Here is a dump from my laptop using the ifconfig command to list all interfaces:

root@kraken:~# ifconfig -a
enp0s25: flags=4098<BROADCAST,MULTICAST> mtu 1500
 ether f0:de:f1:8d:89:fe txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 device interrupt 20 memory 0xf2a00000-f2a20000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1 (Local Loopback)
 RX packets 3143 bytes 204307 (204.3 KB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 3143 bytes 204307 (204.3 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 192.168.1.3 netmask 255.255.255.0 broadcast 192.168.1.255
 inet6 fe80::846f:cc3d:2984:d240 prefixlen 64 scopeid 0x20<link>
 ether 00:24:d7:f0:a3:a4 txqueuelen 1000 (Ethernet)
 RX packets 4600 bytes 5069857 (5.0 MB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 3348 bytes 592050 (592.0 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wwp0s29u1u4i6: flags=4098<BROADCAST,MULTICAST> mtu 1500
 ether 02:80:37:ec:02:00 txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

We are looking for two things in the above: the MAC address and the name of the network interface card we want to rename. The NICs here are named after the type of card, the bus they are attached to and so on. What used to be called eth0 is now referred to as enp0s25, wlan0 is wlp3s0, and there is also a WWAN card in the machine called wwp0s29u1u4i6, which is definitely a mouthful.

Okay, so we would like to rename these to more sensible names. First we pick the names, such as eth0, wlan0, wan0 etc. Then we note down the MAC address of each card; you find it next to the keyword ”ether” in the dump above. Once we have that, we can tell systemd to rename the cards the way we want. By tying the name to the MAC address it should also be persistent and not affected by inserting a new card into the computer system.

In directory /etc/systemd/network we will create the following files:

root@kraken:/etc/systemd/network# ll
 total 20
 drwxr-xr-x 2 root root 4096 Dec 11 04:28 ./
 drwxr-xr-x 5 root root 4096 Nov 24 15:03 ../
 -rw-r--r-- 1 root root 55 Dec 6 23:44 01-eth0.link
 -rw-r--r-- 1 root root 56 Dec 6 23:39 02-wifi.link
 -rw-r--r-- 1 root root 55 Dec 6 23:40 03-wan.link

These link files can be used to match a device and then change its parameters, so they consist of a matching section and then a link section. The first one, called 01-eth0.link, contains the following lines:

[Match]
  MACAddress=f0:de:f1:8d:89:fe

[Link]
  Name=eth0
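For example, the wifi card’s file, 02-wifi.link, follows the same pattern, matching the MAC address shown for wlp3s0 in the dump above:

```
[Match]
  MACAddress=00:24:d7:f0:a3:a4

[Link]
  Name=wlan0
```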

We can then create the other ones in the same way. When we are done with that, we need to do two things. First we need to update the initial RAM file system in /boot, because some of these interfaces (such as eth0) may already be up during boot. This is done with the following command:

root@kraken:/etc/systemd/network# update-initramfs -u
 update-initramfs: Generating /boot/initrd.img-4.8.0-30-generic
 W: Possible missing firmware /lib/firmware/i915/kbl_guc_ver9_14.bin for module i915
 W: Possible missing firmware /lib/firmware/i915/bxt_guc_ver8_7.bin for module i915
 W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.

Once we have done this we can reboot our computer.

When the machine is up again we can check the network names:

anders@kraken:~$ ifconfig -a
eth0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
 ether f0:de:f1:8d:89:fe txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 device interrupt 20 memory 0xf2a00000-f2a20000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
 inet 127.0.0.1 netmask 255.0.0.0
 inet6 ::1 prefixlen 128 scopeid 0x10<host>
 loop txqueuelen 1 (Local Loopback)
 RX packets 1732 bytes 110296 (110.2 KB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 1732 bytes 110296 (110.2 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wan0: flags=4098<BROADCAST,MULTICAST> mtu 1500
 ether 02:80:37:ec:02:00 txqueuelen 1000 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 0 bytes 0 (0.0 B)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 192.168.1.3 netmask 255.255.255.0 broadcast 192.168.1.255
 inet6 fe80::1ed7:d5ac:433d:70c5 prefixlen 64 scopeid 0x20<link>
 ether 00:24:d7:f0:a3:a4 txqueuelen 1000 (Ethernet)
 RX packets 93 bytes 71048 (71.0 KB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 137 bytes 18113 (18.1 KB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

As you can see we now have eth0, wlan0 and wan0 instead of the default names. So if you, like me, work mainly from the command line, you will be happy that ifconfig eth0 works just like it did before systemd entered the scene, and if you have firewall scripts you can of course rename your interfaces to something useful such as lan, wan and dmz, or whatever makes sense.

Rsync non-standard SSH port

Using rsync is a very nice method to synchronize backups or other files between two machines. One thing that causes people a bit of a headache, however, is how to do that when not using the standard port 22 for SSH.

One reason to change SSH to a non-standard port is that we are currently experiencing an internet-wide rise in botnets knocking on this port, attempting several default usernames and passwords. A really easy way of fending that off is to move SSH to a different port number. Any port will do, but then there are a number of things that may break.

Rsync is one of them. The rsync manual stipulates that the port number can be inserted in the URL specification, such as:

rsync -a /source user@host.name:77/mnt/disk/destination

But this does not seem to work. The best way is instead to pass the port parameter directly to ssh using rsync’s -e option, like this:

rsync -a -e "ssh -p77" /source user@host.name:/mnt/disk/destination

This works like a charm for most things. One more note: the rsync option --port=PORT is a daemon listen port and won’t work as a destination port in this case.
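An alternative, assuming an OpenSSH client, is to set the port per host in ~/.ssh/config; then plain rsync, scp and ssh all pick it up without extra flags (the host name here is just the example from above):

```
Host host.name
    Port 77
```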

Ubuntu 14.04 virtual host user selection broken (mpm_itk)

I recently moved some virtual hosts from an older Ubuntu 12.10 to the newer 14.04. The problem I had was that I could not get the automatic user selection in Apache to work with the mpm-itk module. The module can be installed but it does not appear to configure itself properly.

To fix it I had to add the following line to /etc/apache2/apache2.conf:

LoadModule mpm_itk_module /usr/lib/apache2/modules/mpm_itk.so

After doing that the virtual host definitions including the directive

AssignUserId <user> <group>

started working as expected again. I thought it was weird that it did not work out of the box; people have reported various issues with mpm_itk under Ubuntu 14.04, but I never found a solution to my particular problem, so I hope this helps someone else sort it out.
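For reference, a minimal virtual host using the directive might look like this (the host, path, user and group names are made up):

```
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
    AssignUserId exampleuser examplegroup
</VirtualHost>
```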

A Better Login for the Web

Logins on the web today generally consist of either a username/password pair or an email address and password. This has to change. The reason is simple: people don’t select good passwords, and even if they do they re-use them on multiple web sites, meaning that a sysadmin of some site may know your password to other sites, or get hacked by accident, or a multitude of other things can happen to it.

Email addresses as usernames are inherently bad. First of all, someone wanting to break in would not even have to guess your login name, only your password. Also, people tend to use things like webmail for their email today, which means that anyone running that system can use the credentials to log on as someone else.

Most services will reset a password for anyone who knows the account’s email address. It’s so easy to sneak onto a colleague’s computer and request a password reset, fetch the new password, delete the mail, then go to another computer, log in and change the email address used for resets. The original user stands little chance of ever getting the account back.

We are starting to see logins based on your Facebook or Google accounts, or even Yahoo!, although their services are getting scarcer every day. That is at least a step in the right direction: if you have a two-factor authentication method turned on (Google supports this) it can be reasonably safe. Nothing is a hundred percent, and it may not need to be; reasonably safe is good enough here.

But that relies on a third-party service, which makes you vulnerable if it should be offline. Or if your account gets cancelled for whatever reason, even just by mistake, you’re screwed and may not even be able to log in to your email to send a complaint.

There has to be a better way!

We have already seen some attempts. There are web login systems out there that display a challenge on your phone, which then sends a response using a local encryption key. Clever, but if your phone has no network it won’t work very well. There are other similar schemes built around sounds, light shows, QR codes or whatnot that you scan with your smartphone.

But all this relies on advanced hardware that may be out of battery or without a network connection, and if you are roaming you may not WANT to connect it to the operator network because of the immense roaming charges on data.

There has to be a better way!

In fact, the solution is pretty easy. We can use standard, everyday public-key encryption methods to make this work pretty well: similar to what we do with SSL, but without the fuss, and in a way that lets a registered user identify himself by signing a login certificate with his private key.

The certificate is issued by the web site when you register. It is encrypted with your public key so only you can decipher it. When you log in, you encrypt the same certificate with the web site’s public key so that only they can verify your authenticity. You also sign it with your private key so the web site can verify that your key is still valid.

All this needs is a pen drive and some open source software.
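The signing half of the idea can be sketched with plain openssl. This is only an illustration of a challenge-response login built on ordinary public-key signatures, not the full certificate scheme described above, and all file names are made up:

```shell
cd "$(mktemp -d)"

# User side, once: generate a key pair. The public key is registered
# with the site; the private key stays on your pen drive.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out user.key 2>/dev/null
openssl rsa -in user.key -pubout -out user.pub 2>/dev/null

# Site side, at login time: issue a random challenge.
openssl rand -hex 32 > challenge.txt

# User side: sign the challenge with the private key.
openssl dgst -sha256 -sign user.key -out challenge.sig challenge.txt

# Site side: verify the signature against the registered public key.
openssl dgst -sha256 -verify user.pub -signature challenge.sig challenge.txt
# prints: Verified OK
```
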

It’s time to build a better login method for the web. It’s time to make it easy. Keys can be stored on a thumb drive, in your phone or even on your own computers. No need for a third-party service. Losing a key means a revocation certificate is sent so that the old key is no longer valid, and a new one is provided.

When do we build it?

New Emacs on old Ubuntu

If you want to install Emacs 24 on an older Ubuntu such as 12.04, you can do it manually by adding the following repository and then installing.

First remove the Emacs you have, which is probably emacs23:

$ sudo apt-get remove emacs emacs23

Then add the new repository and update:

$ sudo add-apt-repository ppa:cassou/emacs
$ sudo apt-get update

Then install Emacs24:

$ sudo apt-get install emacs24 emacs24-el emacs24-common-non-dfsg