
NGINX stream module with dynamic upstreams

NGINX has had support for dynamic upstreams for a while in the community distribution, and examples abound. I think this is probably one of the clearest I could find.

Finding a similar config for stream proxies turned out to be surprisingly hard, so here I'm sharing my solution in the hope that it can be useful to somebody. Or someone more experienced can point out a better alternative.
In my case the upstream is an ELB, which can and will change IP address often, so resolving a static DNS name once at startup was not an option.
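To give an idea of the shape of the solution, here is a minimal sketch of one common approach (not necessarily the exact config from the post; the resolver IP and the ELB hostname are placeholders). Using a variable in proxy_pass forces NGINX to re-resolve the name through the configured resolver instead of pinning the IPs at startup:

```nginx
stream {
    # placeholder: your VPC DNS resolver; re-resolve at most every 10s
    resolver 10.0.0.2 valid=10s;

    # a map is one way to get the hostname into a variable in the
    # stream context of community NGINX
    map $remote_addr $backend {
        default my-elb.example.com;  # placeholder ELB hostname
    }

    server {
        listen 5432;
        # variable in proxy_pass => dynamic resolution via the resolver
        proxy_pass $backend:5432;
    }
}
```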

Recent posts

Standing desk review: Actiforce

I too am jumping on the standing desk bandwagon. I first gave it a try using a stand placed on top of the table, and once I felt the benefit (I felt more creative and focused) I decided to make the move. After all, you can still use a standing desk as a normal desk.

I first tried buying one from the usual US-based suspects, but it can be hard to figure out the cost of shipping, and then I'd also have to handle the import process, which has never been much fun the rare times I've had to (fyi: you must send a bloody fax. In 2017!).

So I looked around and found some good-looking desks from Italian manufacturers, but the prices, omg!

So I searched Amazon and, of course, there it was. It is from a German vendor who is in fact reselling products from Actiforce. I know people using Actiforce desks and they are happy with them, so I bought it.

Testing logstash filters

There are many posts on techniques for testing your logstash config, but I found most of them to lack the exact details needed to get things working, and others are just obsolete, so here are my dumbed-down notes:
1. Download, unpack and cd into the logstash version you are using or planning to use.
2. Install the development tools: ./bin/logstash-plugin install --development
3. Check whether the bin directory contains an rspec file. If not, create it and make it executable using this source.
4. Now cd into the project holding your logstash configs. I'll assume your logstash config lives in a conf.d directory: create a spec directory at the same level, or run ${LOGSTASH_HOME}/bin/rspec --init and let rspec create its directory structure. You should now have conf.d and spec at the same level.
5. In spec, drop a test specification, like the one below.
6. Test your specs with the following command: ${LOGSTASH_HOME}/bin/rspec
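The test specification mentioned above could look something like this, a minimal sketch in the logstash-devutils style (the inline filter config, field names and sample log line are all made up for illustration; adapt them to your own conf.d filters):

```ruby
# spec/filter_spec.rb -- hypothetical example
require "logstash/devutils/rspec/spec_helper"

describe "grok filter" do
  # Inline filter config; you could also File.read your conf.d files
  config <<-CONFIG
    filter {
      grok {
        match => { "message" => "%{WORD:program}: %{GREEDYDATA:rest}" }
      }
    }
  CONFIG

  # Feed one event in and assert on the fields the filter produced.
  # On logstash 5.x event fields are accessed with get/set.
  sample("message" => "sshd: accepted connection") do
    expect(subject.get("program")).to eq("sshd")
  end
end
```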
Enjoy :-)

Edited on Jan 29th 2017 as I missed the plugin step. Apparently I had an older version lying ar…

Codemotion Milan 2016: thoughts

I have never written a review of a conference and this isn't one: this is a brain dump (hence the lack of form and structure) that I quite simply needed to get out here. It might be incomplete, biased or whatever: if there is anything I should know, let me know in the comments. If you think I'm wrong then let me know that too, but please elaborate :-).

Last week I was, for the first time, at Codemotion in Milan. A first time doubly so: I had never been to a generalist conference before. Being a generalist myself, I appreciated the format, even though I found some of the technical talks too light on actual details.

The talks that I liked the most were those of the inspirational track. I went to these two among others and I heartily recommend that you watch them (heck, have your whole team watch them!). Maybe because I'm old and have recently realized that people matter more than tools or technologies?
Overall I would rate the conference an 8 out of 10 just for tho…

A not so short guide to TDD SaltStack formulas

One of the hardest parts about Infrastructure As Code and Configuration Management is establishing a discipline on developing, testing and deploying changes.
Developers follow established practices, and their tools have been built and perfected over the last decade and a half. Sysadmins and ops people, on the other hand, do not have the same tooling and culture, because extensive automation has only recently become a trend.

So if Infrastructure As Code allows you to version the infrastructure your code runs on, what good is it if there are no tools or established practices to follow?

Luckily the situation is changing and in this post I'm outlining a methodology for test driven development of SaltStack Formulas.

The idea is that with a single command you can run your formula against a matrix of platforms (operating systems) and suites (or configurations). Each cell of the matrix gets tested, and the result is a build failure or success, much like what every half-decent developer of…
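That platforms-times-suites matrix can be sketched with, for example, Test Kitchen and the kitchen-salt provisioner (the tool choice here is my assumption, and the driver, formula name and platforms are placeholders):

```yaml
# .kitchen.yml -- hypothetical sketch; `kitchen test` then runs
# every platform x suite combination and reports pass/fail per cell
driver:
  name: docker

provisioner:
  name: salt_solo
  formula: myformula      # placeholder formula name

platforms:
  - name: ubuntu-16.04
  - name: centos-7

suites:
  - name: default
    provisioner:
      pillars:
        top.sls:
          base:
            '*': [default]
```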

From 0 to ZFS replication in 5m with syncoid

The ZFS filesystem has many features that, once you try them, you can never go back from. One of the lesser known is probably the support for replicating a ZFS filesystem by sending the changes over the network with zfs send/receive.
Technically the changes don't even need to be sent over a network: you could just as well dump them onto a removable disk, then receive them from that same disk.
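In raw zfs send/receive terms, the two variants look roughly like this (pool, dataset, snapshot and host names are all made up):

```
# snapshot, then stream it to another host over SSH
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backup-host zfs receive backup/data

# incremental follow-up: only send what changed since the last snapshot
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backup-host zfs receive backup/data

# or go through a file on a removable disk instead of a network
zfs send tank/data@monday > /mnt/usb/data-monday.zfs
zfs receive backup/data < /mnt/usb/data-monday.zfs
```

Syncoid automates exactly this snapshot-and-incremental-send dance for you.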

SaltStack targeting: storing roles in pillar

This is an attempt to record my thoughts and describe a solution on how to target/classify minions in a SaltStack environment.

An interesting discussion on the topic can be found in this (rather old) thread on the salt-users mailing list: https://groups.google.com/forum/#!topic/salt-users/R_jgNdYDPk0

Basically I share the same concern as the thread author, Martin F. Krafft, who, in an attempt to put an end to this madness, ended up writing reclass.

Roles seem easy enough to understand and provide a clear separation between the actual infrastructure and the desired configuration state, while allowing extensibility and customization (a more specific role can override some settings from another role).
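A sketch of what storing roles in pillar might look like (the file layout, role and state names are hypothetical):

```yaml
# pillar/top.sls: assign role pillar data to minions
base:
  'web*':
    - roles.webserver

# pillar/roles/webserver.sls: the role itself
roles:
  - webserver

# salt/top.sls: target states by the role stored in pillar
base:
  'roles:webserver':
    - match: pillar
    - nginx
```

The states top file never mentions hostnames: minions pick up whatever matches the roles listed in their pillar.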