Tag archive: windows

Borg Backup

The Borg

A very useful utility is the Borg Backup system, or just Borg. It's a deduplicating backup system, meaning that it scans your files and, whenever it finds data that is already in the backup, replaces that data in the second and all subsequent files with a reference to the first instance of it.

The idea is that the same data is only stored once. All the backups you take after the initial one store only the differences and the new data accumulated since the last backup. This means that backing up after the initial run is very fast and efficient, and saves both bandwidth and storage space.

Traditional backup schemes usually take a full backup every month and then daily increments or so. If you need to restore a file you have to take the latest full backup and then apply each increment taken after it. With borg that is not necessary, as you can view the file system exactly as it looked at each and every backup point taken.

In fact you can mount the whole backup as a file system and traverse it from there. It's very effective. So let's get started, because face it, you don't back up as much as you should!

Borg can be used on multiple platforms, but my commands here are for Linux.

The first step is to create a repository. This may sit on a different machine, a NAS, an attached USB drive or even on the same machine. Of course you really want multiple copies, so you can take the borg backup locally and then rsync it to as many locations as you feel are necessary.

Take the backup

The first step is to create the directory for the backup repo and then initialize it for use with borg. This is quite simply done as:

$ sudo mkdir /bup
$ sudo borg init /bup
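Depending on your borg version, init may also ask whether the repository should be encrypted. Encryption is a good idea for repos that end up on machines you don't control; for this local example an unencrypted repo can be created explicitly with the -e flag:

$ sudo borg init -e none /bup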

When the repository has been created it is time for the initial backup. The format should be clear in a bit; it's not complicated and can look like this:

$ sudo borg create --progress --info --stats /bup::lenovo-170202_163423 /home /root /boot /etc /var

The command above should be a single line. The first thing we give to borg is the command, in this case create, to create a new backup set for us. Then we have some flags: --progress shows a progress indicator while borg is working, detailing the number of bytes read, backed up, compressed and deduplicated; --info sets the information level borg presents to us; and --stats lets borg summarize the operation with some statistics.

The next part of the command, /bup::lenovo-170202_163423, specifies the backup location and backup name. The name is given after the double colon :: mark. In this case it's composed of the date yymmdd and time hhmmss of when the backup is started; doing that makes it easy to find the right set of data later when a restore is needed.

Why did I prefix it with lenovo? Well, my main Linux laptop is a Lenovo and I also have other computers, like an ASUS laptop. The beauty of deduplicating backups is that I can back up multiple machines to the same repo. By doing that it will deduplicate across the machines, and if I have the same files and data in multiple places they will just be replaced by references to the data that is already in the backup.

The final part of the command is just all the paths I want to include in this backup. They can vary from time to time. I might back up /home daily but /root only once a week if I want. No problem at all with borg.
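Since the archive name is just text after the ::, one way to generate the hostname and timestamp automatically is to let the shell do it. A minimal sketch, where the repo path and the directory list are simply the examples used above:

$ sudo borg create --progress --info --stats /bup::"$(hostname)-$(date +%y%m%d_%H%M%S)" /home /root /boot /etc /var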

Restoring a backup

No backup system is actually deployed until you have attempted, and succeeded in, retrieving data from it, so that you know what to do in an emergency, whether that is extracting old data that was mistakenly erased or restoring a full system after a hard drive crash.

Restoring a borg backup works a little differently from what you may be used to. First of all you can of course extract the data fully, or just single files if you know their paths, just like with any other backup system. The restore command is called extract in borg.

$ sudo borg extract /bup::lenovo-170202_163423

This will extract the entire archive and then you can move the files into their respective locations. You can also extract for example only the etc folder from the archive:

$ sudo borg extract /bup::lenovo-170202_163423 etc

Extraction always writes into the current working directory. Therefore you should extract first and then move the files to their correct locations in your file system. If all the backups are taken from the root of the file system, /, you can cd there before extracting, but I recommend extracting on a different volume first and restoring from there. The reason is that there is usually a lot of stuff in a backup that you may not always want to restore.
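As an illustration of that workflow, here is a minimal sketch that extracts only etc into a scratch directory and then copies a single file back; the fstab file is purely an example:

$ mkdir /tmp/restore && cd /tmp/restore
$ sudo borg extract /bup::lenovo-170202_163423 etc
$ sudo cp -a etc/fstab /etc/fstab   # copy back only what you actually need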

Mounting the backup as a file system

Borg actually offers another way too. You can mount a single backup as a volume, or you can mount the whole repo, see all the backup points made, select the one you want and then just copy the files from there to the live system.

$ sudo borg mount /bup::lenovo-170202_163423 /mnt

This will mount the backup lenovo-170202_163423 in the file system at /mnt. You can then cd to /mnt and then use cp etc to copy the files to their right places.
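One handy use of a mounted archive is comparing a file on the live system with the backed-up copy before overwriting anything; fstab here is again just an illustration:

$ sudo diff /mnt/etc/fstab /etc/fstab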

When done you can dismount it (otherwise other processes can't back up; the repo is locked while mounted):

$ sudo umount /mnt

Borg uses FUSE to mount archives as local directories.

You may also mount the whole repository:

$ sudo borg mount /bup /mnt

Now when you go into the /mnt folder you will see all your backup names as directories:

$ ls
161204_040001 170101_203409 170113_040001 170117_040001 170121_040001 170125_010344 170128_030332
161206_040001 170108_040001 170114_040001 170118_040001 170122_214910 170125_040001 170128_040001
161218_174848 170111_040001 170115_040002 170119_040001 170123_040001 170126_040001 170129_040001
161225_040001 170112_040001 170116_040001 170120_040001 170124_040002 170127_040001 170201_082851

As you can see I generally name my backups with YYMMDD_HHMMSS just so it’s easy for me to find a specific date.

I can then cd into one of them:

$ cd 170112_040001
$ ls
boot etc home root var vmlinuz vmlinuz.old

When done, don’t forget to unmount the archive as no new backups can be taken while it is mounted.
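If you only want to see which backup points exist, without mounting anything, borg can also list the archives in a repository:

$ sudo borg list /bup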

There you go. Start using it.

 

Using tar for backing up your data

Tar (tape archiver) is an old Unix command that has been largely forgotten by people who are not in daily touch with the Unix world. In several forums I hear people asking how to back up files in Linux in a simple and efficient way, and what to use. Most seem to prefer a graphical solution, but some are happy with a command line version as well.

Personally I generally distrust graphical backup software. It puts an unnecessary layer between you and what is actually going on, and those programs that are not just graphical shells on top of tools such as tar are usually proprietary, so you can't rely on there being something that can read the archives in even five years' time.

Tar is a little different; it has proven over time to be one of the most efficient and well functioning backup solutions. However, people today have generally forgotten how to perform full archiving, incremental backups and differential backups using tar properly.

And, as always, any backup solution that fails at restoring your data, now or in the future, is doomed from the start. Tar builds on a format that many archive handlers can read, not to mention that the source code is open source and freely distributable and not likely to disappear any time soon.

Types of backups

There are generally three types of backups that we will be discussing here.

Full archive

The first and by far simplest one is a full archive. This means that everything is archived. This is generally a very time and space consuming task and not something you would want to do every day. The full archive is however the simplest to restore and does not need any special considerations, except that you might have to split it over several volumes depending on your media, be that tapes, CD-R/DVD-R or hard disk volumes.
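If a full archive does have to be split to fit your media, one way (assuming GNU tar and coreutils) is to pipe tar through the split utility; the 4G chunk size is only an example:

tar -cvf - /home/ | split -b 4G - archive.tar.part.
cat archive.tar.part.* | tar -xvf -   # joining the parts again at restore time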

Personally I prefer hard disk volumes as my backup media. A full archive for me is somewhere close to 700 GB, so using CD-R is not really feasible (about 945 volumes) and DVD-R is not much better (about 170 volumes). Tapes are probably scarce today and their capacity is usually even lower than the optical media, so hard disks are what I use. I use a Western Digital MyBook storage disk (USB2 connection) and just disconnect it between backups. This way the data should not be erasable unless you physically plug the disk in.

Incremental backups

Then we have incremental backups. Incrementals work like this: first you dump a full archive with everything in it, then periodically you back up everything that has changed since the last time. This is a very efficient backup method if you want to make backups often to minimize data loss in case of an accident. The downside is that you will quickly come to have lots of files to keep track of and, even more important, a restore operation means that you must restore all the files in the order they were created. This is very time consuming, and there is a greater risk that something goes wrong.

However, incremental backups are also incredibly popular; they are usually fast, and the more often you back up, the faster each backup goes, at least in theory.

Differential backups

The last type we will be talking about is differential backups. They start out just like incrementals, with a full archive copy of everything. Then every time you back up, you back up everything that has changed since the full archive was made. The difference compared to incrementals is that you only have two active files at any time: the last full archive and the differential archive. A restore operation is therefore very efficient, a two-step operation only.

The downside with differentials is that over time the diff file will grow, since more and more files have changed since the last full archive, so the efficiency over time is not great. When the differential has grown to something like 50% of the size of the full archive it may be better to make a new full archive and start over with the differential.

Using tar

Using tar to perform a full backup is done like this:

tar -c -v -f archive.tar /home/

Using tar in Windows under the Cygwin package you would have to change the /home/ path to /cygdrive/c/Documents\ and\ Settings/ or something similar, because that is where your personal data is located on the computer (unless your "My Documents" has been moved to a different location for some reason).

Using tar to perform incremental backups requires a two-step process. First you create a full archive, but with a separate snapshot metadata file:

tar -g incremental.snar -c -v -f archive.0.tar /home/

The -g option is the same as the --listed-incremental=incremental.snar option and lets tar store additional metadata outside the archive that can be used to perform increments later.

tar can also do this without the external file, but since that puts non-standard metadata into the tar archive itself it is not recommended, as it might break compatibility with non-GNU tools.

The next level or the first increment is thus performed such as:

tar -g incremental.snar -c -v -f archive.1.tar /home/

Since the incremental.snar file already exists, only files newer than those referred to in the metadata file will be dumped. The metadata file incremental.snar will be updated and you will have your first increment.

Keep going like this for each increment. When you want to start over with a new full backup, use a new incremental.snar file or delete the old one. The metadata file is not necessary in order to restore the file system.

Restore is done with

tar -g /dev/null -x -v -f archive.0.tar 

Repeat this for each increment you have made, i.e. archive.1.tar, archive.2.tar and so on. Remember that when using tar incrementally it will try to recreate the exact file system, i.e. it will delete files that did not exist when the archive was dumped. Therefore you will see the file system change until you have applied the last increment, at which point it will be fully restored.
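A minimal sketch of that restore loop, assuming the increments are named archive.0.tar, archive.1.tar and so on (with fewer than ten of them, so the shell glob already sorts them in the right order):

for archive in archive.*.tar; do
    tar -g /dev/null -x -v -f "$archive"
done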

Differential backups are most simply done by dumping the files that have changed on or after the date of the full archive. In order to do this, create the full archive first. Then note the time stamp of the archive (I put it in the file name of the archive), thus:

tar -cvf full-archive-2010-05-01.tar /home/

Then, to create a differential with all files that have changed since the 1st of May 2010, you can run the following:

tar -N 2010-05-01 -cvf diff-archive-2010-05-05.tar /home/

The new archive will contain all the files that have changed on the date you give to the -N option or later.

The next differential is created in the same way but at a later date. After that you may remove the old differential, since it will be superseded by the new one.
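For example (the dates are purely illustrative), a later differential is still taken relative to the date of the full archive; only the output file name changes:

tar -N 2010-05-01 -cvf diff-archive-2010-05-10.tar /home/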

To restore simply untar the full-archive and then the latest differential. When those two operations have finished your file system is up to date again.

This version of the command will, however, NOT delete any files from the file system the way the incremental version does.

tar -xvf full-archive-2010-05-01.tar
tar -xvf diff-archive-2010-05-05.tar

That’s it for this time. Have fun with tar.

 

Make your XP installation SSD-flavoured

Many people are considering an SSD (Solid State Drive) for their laptops. There are of course many reasons for this: the SSD is silent, generates less heat, in many cases consumes less power and, above all, is not susceptible to shock or sudden movements of the laptop.

Field engineers love SSDs: they have extended battery time, made laptops that are quickly closed and shoved in a bag much less prone to overheating, and they can be used in harsh electromagnetic environments, such as in the vicinity of radio transmitters, without risking that the hard disk loses data.

The problem with XP and SSDs is that most drives made with this technology require an erase operation on an entire block before it can be written back to the drive. This means that things like disk caching work differently than on standard disk drives and need to be tweaked in order to get maximum performance out of them.

Most SSD manufacturers also guarantee only about 10 000 writes per cell, and although most SSDs use techniques such as wear leveling, where writes are spread over the whole disk to even out the wear on the cells, eventually they will start to fail. An SSD is still a rather expensive item, so people would like to maximise the life span of their drive. Hence the following tweaks.

Disable the Windows XP prefetcher

Change the following registry keys:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters]
"EnablePrefetcher"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction]
"Enable"="N"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\OptimalLayout]
"EnableAutoLayout"=dword:00000000
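If you would rather not edit each value by hand, you can save the keys above in a .reg file (starting with the usual "Windows Registry Editor Version 5.00" header line) and import it silently from a command prompt; the file name here is just an example:

regedit /s ssd-tweaks.reg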

You must reboot after the changes have been made.

Restore the original settings

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters]
"EnablePrefetcher"=dword:00000003

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Schedule]
"Start"=dword:00000002

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction]
"Enable"="Y"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\OptimalLayout]
"EnableAutoLayout"=dword:00000001

You must reboot after the changes have been applied.

Change the disk cache behaviour

Start Device Manager

  • Select the drive for which you wish to administer the caching policy (your SSD)
  • Select Properties
  • Click on the Policies tab
  • Look for the option ”Enable write caching on the disk” and make sure it is selected
  • Look for a second option ”Enable advanced performance” and select it.

This option favors throughput and speed at the potential risk of data corruption. The safeguard mainly exists to protect removable drives from data corruption if they are unplugged while a write operation is in progress, so you may safely change this option on your internal SSD.

This trick can also be used to increase performance roughly 10-fold on USB-attached disks, but then you should be very careful when removing them from your system; use the device manager to disconnect them before you physically remove them.

Other tweaks

Hibernation

Turning off hibernation can mean better performance and a longer life for the SSD, because Windows then no longer keeps a hibernation file (hiberfil.sys) on the drive or writes the entire memory contents to it every time the machine hibernates.

  • Go to the control panel
  • Open the Power Options
  • Select the Hibernate tab
  • Uncheck Enable Hibernation box to disable
  • Click Ok

Reboot your system and the hibernation option is gone (but you can still use sleep mode of course, which is brilliant in combination with an SSD).
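If you prefer the command line, the same setting can be toggled with the powercfg utility that ships with XP (reboot afterwards, just as with the graphical method):

powercfg /hibernate off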

 

Auto logon in Firefox

When you browse the company intranet with Internet Explorer, it automatically sends the credentials you used to authenticate to the Windows domain. Other browsers do not do this by default, and therefore you get a sign-on box now and then asking you to fill in your username and password again in order to browse the site.

There is a remedy for this.

  1. Navigate in Firefox to the page about:config. If you get a warning message, that is okay.
  2. Locate the following keys:
    network.automatic-ntlm-auth.trusted-uris
    network.negotiate-auth.delegation-uris
    network.negotiate-auth.trusted-uris
  3. Add to these keys the server root path that you wish automatic credentials to be sent to. For example, if your intranet page is reached at the URI http://intranet/ then you should add "intranet" to these three keys (see the sketch after this list).
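The same three preferences can also be set in a user.js file in your Firefox profile directory instead of through about:config; a small sketch, assuming your intranet host really is just called intranet:

// user.js in the Firefox profile folder; "intranet" is an example host
user_pref("network.automatic-ntlm-auth.trusted-uris", "intranet");
user_pref("network.negotiate-auth.delegation-uris", "intranet");
user_pref("network.negotiate-auth.trusted-uris", "intranet");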

Now you should be automatically logged in with your Windows credentials the next time you navigate to these pages.

NOTE! Only add domains you fully trust!