
Testing provisioning scripts with throw-away VMs

Using expendable virtual machines for testing purposes is hardly a new concept.
What would be really nice, pretty-please-with-sugar-on-top, is automating the whole create-provision-test-destroy cycle.

Enter Vagrant. Vagrant is a tool for building and distributing virtualized development environments. It allows automated creation, provisioning (with Puppet, Chef or shell scripts) and teardown.

Installing Vagrant on Ubuntu 10.04 LTS (which I'm still running) requires the following:

  1. a 4.1.x VirtualBox distribution (the 3.1 bundled with Ubuntu will not work)
  2. a recent version of ruby (you guessed it: the 1.8.x bundled with Ubuntu will not work)
To install VirtualBox, add the following repo to /etc/apt/sources.list:

deb http://download.virtualbox.org/virtualbox/debian lucid contrib non-free
then run the usual apt-get update and install, removing the stock packages first:
sudo apt-get update
sudo apt-get remove virtualbox-ose virtualbox-ose-dkms
sudo apt-get install virtualbox-4.1
Ruby is a more complicated and lengthier affair because, if your system still ships with 1.8, it requires a full build of Ruby 1.9. If you already have Ruby 1.9 you can skip this section.
To install Ruby 1.9 alongside any other existing Ruby we'll use rvm. Rvm installs a separate version of Ruby without replacing or breaking the one that came with the system. Unfortunately this requires a full build, so you might want to launch it before you go out (the following instructions were taken from here):
curl -s https://rvm.beginrescueend.com/install/rvm -o rvm-installer
chmod +x rvm-installer
./rvm-installer --version latest
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # This loads RVM into a shell session
rvm install ruby-1.9.2
rvm use 1.9.2
rvm --default use 1.9.2
gem install vagrant
Now we're ready to use Vagrant. First we need to download a virtual machine image that Vagrant will use as a template to jumpstart our throw-away VMs:
vagrant box add base http://files.vagrantup.com/lucid32.box
This will take some time depending on your connection speed. When it's done we need to tell vagrant to generate a template configuration file with the command:
vagrant init

The configuration file is called a Vagrantfile and it provides Vagrant with all the information needed to assemble the VM. The parts of the Vagrantfile we'll want to change are (a sketch putting them together follows the list):
  • networking: bridge one VM interface to our LAN so that the VM can access resources beyond the local system. Set the network option as follows: config.vm.network :bridged
  • a shared (read-write) folder from the host for caching and file distribution: config.vm.share_folder "v-data", "/vagrant_data", "install-data"
  • a provisioning method: Chef, Puppet or a shell script
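Putting those three changes together, a minimal Vagrantfile might look like the sketch below. It assumes the Vagrant 1.0-era Ruby syntax used in the snippets above; the install-data folder and the provision.sh script name are just examples, substitute whatever fits your setup.

# Vagrantfile - example sketch, Vagrant 1.0-era syntax
Vagrant::Config.run do |config|
  # the box we added earlier with "vagrant box add base ..."
  config.vm.box = "base"
  # bridge an interface to the LAN so the VM can reach resources beyond the local system
  config.vm.network :bridged
  # read-write shared folder: logical name, path inside the VM, path on the host
  config.vm.share_folder "v-data", "/vagrant_data", "install-data"
  # provision with a plain shell script (provision.sh is a placeholder name)
  config.vm.provision :shell, :path => "provision.sh"
end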
After that the VM can be created with:
vagrant up
The up command will also run the configured provisioning method after the VM has completely booted up. In my case I chose to run just a shell script.
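For very small provisioning jobs the shell provisioner can also take its commands inline instead of from a file; a quick sketch, where the package being installed is just an example:

Vagrant::Config.run do |config|
  # inline shell provisioning, handy for quick throw-away experiments
  config.vm.provision :shell, :inline => "apt-get update && apt-get install -y ntp"
end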
We can now log into the VM (as the vagrant user; use sudo to issue commands as root) to check that everything's all right:
vagrant ssh
To destroy the vm:
vagrant destroy
Now whenever we need or want to recreate the VM we just run vagrant up again and Vagrant will take care of booting the VM, setting up networking and then running the configured provisioning method. And it's quick too: on my laptop the whole process, including a complete run of the provisioning script, takes roughly 2 minutes.
