
SaltStack targeting: storing roles in pillar

This is an attempt to record my thoughts and describe a solution for how to target/classify minions in a SaltStack environment.

An interesting discussion on the topic can be found in this (rather old) thread on the salt-users mailing list:

https://groups.google.com/forum/#!topic/salt-users/R_jgNdYDPk0

Basically I share the same concern as the thread author Martin F. Krafft, who, in an attempt to put an end to this madness, ended up writing reclass.

Roles seem to be easy enough to understand and provide for a clear separation between the actual infrastructure and the desired configuration state, while allowing extensibility and customization (a more specific role can override some settings from another role).


OTOH SaltStack's approach is more oriented towards targeting (perhaps because of its remote-execution roots?) and offers no simple centralized way of classifying minions. In fact, until pillar targeting was introduced there was no simple way of doing it besides the catch-22 idea of using Salt to customize the minion conf file with a grain specifying its roles (which, btw, requires a mid-flight restart if used in a highstate).

My solution, at the moment, is the following:
  1. specify roles as pillar data (see the sketch after this list)
  2. target minions in highstate using said roles
  3. optionally install a mine function to push minion roles back to the master (for inventory, dns, linking, you-name-it purposes; also sketched below)
  4. name minions using a dev/prod/staging prefix to simplify the handling of multiple environments
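To make points 1 and 2 concrete, here is a minimal sketch; the minion names, role names and file paths (prod-web*, roles.webserver, /srv/pillar, /srv/salt) are made up for the example, and the minion globs also show the environment prefix of point 4:

# /srv/pillar/top.sls - assign role pillars to minions
base:
  'prod-web*':
    - roles.webserver
  'prod-db*':
    - roles.database

# /srv/pillar/roles/webserver.sls - a role is just pillar data
roles:
  - webserver

# /srv/salt/top.sls - highstate targeting on the pillar-provided roles
base:
  'roles:webserver':
    - match: pillar
    - apache
  'roles:database':
    - match: pillar
    - postgresql

For point 3, a mine function alias can publish each minion's roles back to the master; the alias name "roles" below is arbitrary:

# pillar fragment - push the minion's own roles to the mine
mine_functions:
  roles:
    - mine_function: pillar.get
    - roles

Other minions (or a template rendered on them) can then look the inventory up with something like {{ salt['mine.get']('*', 'roles') }}.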

Whenever the role assignment changes, the new configuration can easily be pushed to all minions by running the following three commands (which can also be assembled into an orchestrate state, sketched below):

salt '*' saltutil.refresh_pillar
salt '*' mine.flush
salt '*' mine.update

without a master or minion restart.
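
If you prefer a single entry point, the same three steps can be wired into an orchestration SLS along these lines (a sketch; the path orch/refresh_roles.sls is made up, run it with salt-run state.orchestrate orch.refresh_roles):

# /srv/salt/orch/refresh_roles.sls - refresh pillar and mine on every minion
refresh_pillar:
  salt.function:
    - name: saltutil.refresh_pillar
    - tgt: '*'

flush_mine:
  salt.function:
    - name: mine.flush
    - tgt: '*'
    - require:
      - salt: refresh_pillar

update_mine:
  salt.function:
    - name: mine.update
    - tgt: '*'
    - require:
      - salt: flush_mine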
