Building An Effective And Cheap Redundant Structure For My Servers

People tell me I am paranoid about backups, but hey, I admit it – I am! No traumas with lost data for me. Being an independent sort of person, I have always run my own servers, going back many years – initially with a Linux server at home, though in recent years I have moved everything to VPS servers. However, my independent streak still makes me distrustful of these ‘cloud’ suppliers, and one of them let me down with a 20-hour outage when their edge router died and they had no redundancy – and this was a supplier that claimed fanatical support. Ha!

So I set about building an effective and cheap redundant structure for my mail and other servers – yes, I am one of those weird people who run their own cloud data servers with ownCloud – no Dropbox or whatever for me. Crazy? Maybe. Secure and redundant – for sure. ownCloud has good clients for Linux, Mac OS X and Windows. It works unobtrusively like Dropbox, and all my working documents on my main Linux desktop are automatically replicated to my servers and my laptop.

Setting up the redundant structure

Not satisfied with a single supplier anymore, and certainly not with a single country, I set about choosing different suppliers and countries. At present I have two suppliers and three countries. Linode is great – fast support, reliable infrastructure, direct access to server instances, reliable backups and imaging, and an easy-to-use DNS manager. Not the cheapest, but at $20/month for a basic Linux server, not bad. My second supplier is Digital Ocean – the new kid on the block – very cheap ($5/month for a basic Linux instance) and so far quite reliable, although the management tools and backups are not as good as Linode's. Network performance has been a bit patchy compared to Linode, although they have done an upgrade in the last few days which should improve things. However, for five bucks who can complain? I have tried several others, but various factors such as cost, poor administration interfaces, lesser-quality support and poor reliability have crossed them off my list.

So now I have a node in London, another in Tokyo and a third in New York. I don’t care if London burns – I still have servers across the world. My servers provide everything we need, from mail (both web-based and IMAP) and documents through to web servers, WordPress, ownCloud and a few other things. Currently I keep two different Linux server distros going, just in case there are problems with one – my paranoia shows up again! This makes syncing a little trickier, but not much. Besides, I like to keep my hand in with different ways of doing things. At present I use Ubuntu 10.04 LTS and CentOS 6. Both are very stable and reliable.

Syncing the servers

Rsync is your friend in this endeavour – all my configs, web pages and suchlike are synced regularly via a cron job. To minimize network traffic I use Lsyncd – http://code.google.com/p/lsyncd/. This is a daemon that monitors nominated directory structures for changes and replicates them with rsync. It is quite straightforward and has minimal impact on server and network loads. I use the master-slave configuration for my MySQL databases (WordPress and ownCloud). It is somewhat tricky to set up initially, but well documented here: http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html
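To give a flavour of it, here is a rough sketch of the sort of thing I mean – a plain rsync cron job for the slow-moving stuff and an Lsyncd sync block for the directories that change often. The hostnames and paths are placeholders, and the exact Lsyncd config syntax varies a little between versions, so treat this as a starting point rather than a recipe:

# nightly rsync of configs to the secondary node (placeholder host and paths)
echo '15 3 * * * root rsync -az --delete /etc/ backup.example.com:/srv/etc-replica/' \
    > /etc/cron.d/sync-etc

# an Lsyncd sync block that replicates web content as soon as it changes,
# using rsync over ssh (file path and syntax may differ by Lsyncd version)
cat > /etc/lsyncd/lsyncd.conf.lua <<'EOF'
sync {
    default.rsyncssh,
    source    = "/var/www/",           -- directory to watch
    host      = "backup.example.com",  -- remote node (placeholder)
    targetdir = "/var/www/",           -- path on the remote node
}
EOF

The cron line handles things that change rarely, while Lsyncd picks up frequent changes the moment they happen, which is what keeps the network traffic down.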

Mail is handled differently, as it changes more often, is very important and is complicated slightly by running different Linux distros. I could avoid that by using matching distros, but instead there is the simple and elegant solution of imapsync – http://imapsync.lamiral.info. Again it runs via cron and synchronizes IMAP mail servers, moving only the changes. I have found it to be very reliable.
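As a hedged sketch of what such a job can look like – the hosts, user names and password files below are placeholders, and imapsync has plenty of further options worth reading about – a small wrapper script called from cron is enough:

#!/bin/sh
# /usr/local/sbin/sync-mail.sh – mirror mailboxes from the primary
# to the secondary IMAP server (all names here are placeholders)
/usr/local/bin/imapsync \
    --host1 mail1.example.com --user1 me --passfile1 /etc/imapsync/pass1 \
    --host2 mail2.example.com --user2 me --passfile2 /etc/imapsync/pass2 \
    >> /var/log/imapsync.log 2>&1

A single cron entry such as 0 4 * * * root /usr/local/sbin/sync-mail.sh then runs it nightly, and keeping the passwords in root-only files rather than on the command line keeps them out of the process list.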

Software setup for all this is quite straightforward. Each component is well documented on the web. In addition to my normal Linux server setup I only need to install Rsync, Lsyncd and imapsync, plus set up some cron jobs. Rsync and Lsyncd are straight apt-get installs. imapsync is provided in ready-to-run Perl sources. It is a little trickier to get going, as you need to install some specific Perl modules that are not on all systems by default. However, it comes with clear documentation and I had the relevant modules installed easily via CPAN.
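On the Debian/Ubuntu side that boils down to something like the following; the Perl modules listed are the ones imapsync most commonly asks for, so treat the list as indicative rather than complete:

# rsync and lsyncd straight from the repositories
apt-get install rsync lsyncd

# the Perl modules imapsync typically needs, installed via CPAN
# (an indicative list – imapsync will tell you if anything is missing)
cpan Mail::IMAPClient IO::Socket::SSL Digest::HMAC_MD5 Term::ReadKey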

You may think that all this is a little over the top, but actually it was fairly simple to set up, takes minimal administration and avoids the more complex redundancy approaches like GlusterFS, WordPress replication, NFS and suchlike. Also, with my unusual spread across vendors and distros, it is a solution that works reliably.

In the end I have multiple domains and services fully redundant across suppliers and countries and I have spent very little money.

Recently I have acquired a TonidoPlug 2 and am in the process of configuring it for home backups; I will build it into my redundant web as well (more on this in a later post). Because I live in a different country from all my current servers, I can remove one of them and still have three countries with backups. Of course you don’t need as many servers as I have, and could get away more cheaply with one VPS and a cheap home server. For me though, experimenting with different aspects of Linux is all part of the game. I am learning all the time, whilst at the same time providing us with great server infrastructure. Besides, it is an interesting intellectual exercise.