Configuration Seasoning: Getting started with Saltstack
Configuration management is the practice of deploying and managing your application and infrastructure configuration through automated tooling, rather than managing all of your infrastructure manually.
This can cover everything from Linux servers, to network equipment, installing packages to updating existing services. The primary benefits are that you can manage more infrastructure without the operational burden increasing significantly, and that your configuration is consistent across your estate.
There are already a number of tools which achieve this: -
- CFEngine - One of the first
- Puppet - Still one of the most popular
- Chef - Also very popular still
- Ansible - Probably the most used tool (currently)
- and many more…
In this post, I’m going to cover another popular choice, Saltstack. Before that, I’ll explain a few of the differences between the most popular tools.
Agent-based vs Agentless
Agent-based
Agent-based configuration management is where something is installed on the infrastructure being managed that responds to a central “master” or “control” server. This can be to report back details about the host, and also to make changes to the host that the central server requests.
For example, if your central server sent an instruction to install tcpdump, the agent would ensure the host goes through the process of installing it. Once the installation is complete, it would report back to the central server to say that the job is finished.
Puppet and Chef are mostly agent-based. They do have the option of running agentless, but the most common way of using them is with a central server.
Agentless
Agentless configuration management is where the changes to the infrastructure are made from a remote server. This means that when you run the configuration management tool, it will log in to the infrastructure and make the changes, rather than relying on an agent that is installed on the machine.
This has advantages over Agent-based configuration management, in that infrastructure that cannot have an agent installed (for example, networking equipment) can be managed by the same tooling as other infrastructure (i.e. servers, applications).
The disadvantage of this is your “fact gathering” process (e.g. what operating system the host runs, what version, CPU architecture) is typically executed on every configuration run. In an agent-based scenario, the control server already knows this due to having a running agent on said Infrastructure.
Ansible is agentless. Other tools have also begun to adopt Agentless approaches (Puppet has Bolt, Chef has Solo).
Configuration Language
Chef and Puppet use a DSL (domain-specific language), meaning a configuration language that is specific to the tool itself. This means that to use Chef and Puppet, you have to learn the language it uses (along with its quirks and caveats).
Ansible uses YAML (YAML Ain’t Markup Language), meaning if you are already familiar with YAML, you’ll feel at home with the syntax. It can also make use of Jinja2 to allow for templating and basic conditional logic (if/else, for loops).
What is Saltstack?
Saltstack is an agent-based configuration management tool. However unlike Chef and Puppet, it does not use a DSL, instead using YAML and Jinja2 (like Ansible). If you have used Ansible at any point, Saltstack isn’t too difficult to get started with.
Salt uses the terms Master and Minion to refer to the control server and the agents respectively. As Salt uses agents, its “fact gathering” process is quicker than using Ansible (in my experience).
States, Pillars and Grains
The three main concepts to understand with Saltstack are States, Pillars and Grains.
States
States are your desired configuration. This can be as simple as installing some packages, to deploying entire application stacks on a server. For Ansible users, these are like your playbooks.
Pillars
Pillars are variables that can be associated with a server, a group of servers, or with all hosts managed by the Salt Master. For anyone familiar with Ansible, these are similar to your host_vars and group_vars.
Grains
Grains are the facts/details about the infrastructure you are managing. This could be the operating system (Linux, Windows, Cisco, VyOS), the distribution (Debian, Ubuntu, IOS-XR), the version (Buster, OpenBSD 6.6) and can even include IPs assigned to interfaces.
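If you want to see what grains a minion exposes, you can query them from the master with the grains.item execution module. For example (the minion ID and values here are illustrative): -
$ salt 'config-01' grains.item os oscodename osrelease
config-01:
    ----------
    os:
        Debian
    oscodename:
        buster
    osrelease:
        10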
By combining your Pillars and Grains, you can take a generic State configuration, and make it specific to the host in question (e.g. what IP an application listens on), while still making it consistent across your infrastructure.
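As a rough sketch of what that looks like in practice (this reuses the consul pillar and bind_int key from the pillar examples later in this post, plus the ip4_interfaces grain), a state could render a host-specific Consul bind address like so: -
consul_bind_address:
  file.managed:
    - name: /etc/consul.d/bind.hcl
    - contents: |
        bind_addr = "{{ grains['ip4_interfaces'][pillar['consul']['bind_int']][0] }}"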
Getting started
Master configuration
To start using Saltstack, you’ll need to set up either a Linux or BSD system to act as your master. Solaris is possible, but is not officially supported. Windows can run as a Minion, but it cannot act as a master.
Many package repositories for Linux and BSD will include packages for Salt. If you want to run the latest version however, go to the Saltstack Repository page.
I’ll go through the process on a Debian Buster-based master.
Prerequisites
On Debian Buster (at least the minimal ISO), no version of gnupg is shipped by default. This is required for using apt to add GPG keys, so you can install gnupg2 like so: -
$ sudo apt install gnupg2
Add the repository
First, add the GPG key: -
$ wget -O - https://repo.saltstack.com/py3/debian/10/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
Then, add the repository: -
$ echo "deb http://repo.saltstack.com/py3/debian/10/amd64/latest buster main" | sudo tee /etc/apt/sources.list.d/saltstack.list
Run sudo apt update after this, and the Saltstack repo should be added.
Install the Master package
Install the Salt Master package like so: -
$ sudo apt install salt-master salt-minion
This will install the master, and also make the master a minion of itself.
While having the Master also be a minion of itself may seem counter-intuitive, it means you can control your master in the same way you do the rest of your infrastructure.
If for example you have a base set of applications you like to have installed (e.g. vim, lldpd), this would also apply to the master as well.
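Depending on your distribution and package source, the salt-master and salt-minion services may already be enabled and started after installation. If not, something like the following should do it: -
$ sudo systemctl enable --now salt-master salt-minion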
Configure the master
The configuration for the master is in /etc/salt/master. A very basic configuration would be something like the below: -
file_roots:
  base:
    - /srv/salt/states

pillar_roots:
  base:
    - /srv/salt/pillars
A number of other options are available, but the above will give you a basic installation that has all of its state files in /srv/salt/states, and all of the pillars in /srv/salt/pillars.
Apply something like the above, and then run systemctl reload salt-master to apply it.
Salt Directory Structure
The directory structure I tend to use is a State directory containing all my configuration, and a Pillar directory with all my variables in. This is separated out per application, for example: -
$ ls -l /srv/salt/states
total 12
drwxrwx--- 2 root salt-adm 4096 Nov 12 08:00 base
drwxrwx--- 3 stuh84 stuh84 4096 Nov 9 14:12 consul
-rw-rwx--- 1 root salt-adm 37 Oct 31 08:56 top.sls
$ ls -l /srv/salt/pillars
total 8
drwxrwx--- 2 stuh84 stuh84 4096 Nov 10 16:24 consul
-rw-rwx--- 1 stuh84 stuh84 51 Nov 9 12:14 top.sls
Installing Minions
Minions are installed in a similar way to masters. Follow the above instructions for adding the Salt repository, and then when it comes to installing packages, only install salt-minion (rather than both the minion and the master).
Minion Configuration
The Minion configuration has many options, but at its most basic looks like this: -
master: salt-master.example.com
id: config-01.example.com
nodename: config-01
Apply something like the above, and then run systemctl reload salt-minion to apply it.
The ID can be generated by Salt itself, or you can override it in the configuration above. The nodename will be a grain that is accessible for your pillar configuration (as well as state configuration if you choose to).
You don’t need to have a different nodename and id, but it can be useful in case you want to keep your configuration across multiple environments consistent. For example, you may have config-01.preprod.example.com and config-01.prod.example.com. You would want these to potentially go to different masters, but if they have the same nodename, you can reuse the same configuration files for Production and Preprod.
You also need to make sure that the hostname specified for the master is resolvable. This could be via DNS, or it could be via an entry in your hosts file (e.g. 10.1.1.1 salt-master). So long as it can be resolved (and reachable) that is all that is required.
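For example, a single hosts file entry on the minion is enough (the address below is made up): -
# /etc/hosts
10.1.1.1    salt-master.example.com salt-master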
Salt Keys
Minions are authenticated against the Master. Until they are accepted, configuration cannot be applied to the Minions.
To view the current list of keys, do: -
$ sudo salt-key -L
Accepted Keys:
config-01
vyos-01
db-01
Denied Keys:
Unaccepted Keys:
meshuggah
As you can see in the above, there are three accepted keys, and one unaccepted. To accept this key, do: -
$ sudo salt-key -a 'meshuggah'
You can match multiple hosts using regular expressions, or you can use the -A switch instead to accept all currently unaccepted keys (caution: only accept all keys in a situation where you are in complete control of the environment, do NOT do this in production!)
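For example, you can accept all pending keys that match a pattern, or everything at once (the frr minion IDs here are hypothetical): -
$ sudo salt-key -a 'frr-*'
$ sudo salt-key -A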
Salt Configuration Files
Top Files
The top files (top.sls) contain the list of states and/or pillars that apply to a group of hosts. The top file in my states directory looks like the below: -
base:
  '*':
    - base
    - consul
The base designation matches the base: environment defined under file_roots in the master configuration. You can set up multiple state directories, so that you can have a shared set of hosts, but potentially configured by different teams for different applications, as sketched below.
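As a sketch, file_roots accepts a list of directories per environment, so a second (hypothetical) team-owned directory could sit alongside the main one: -
file_roots:
  base:
    - /srv/salt/states
    - /srv/salt/network-team/states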
This is very simple, but you can extend how you match hosts with regular expressions. For example: -
base:
  '*':
    - base
    - consul
  'frr*':
    - frr
  'frr-01.domain':
    - frr.bgp-edge
The second part of the above top file would match any host that has an ID (see the Minion Configuration section above for host IDs) that starts with frr, whereas the third part only matches the host frr-01.domain.
It’s also worth noting that if you specify just the directory name of your application (e.g. frr), Salt would look for an init.sls file within that directory. This typically would contain all your standard configuration, and may also refer to other state files to finish off configuration.
If you specify something like frr.bgp-edge, the state file that would be applied to the host would be $STATE-DIRECTORY/frr/bgp-edge.sls.
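As a sketch, the frr state directory implied by the top file above would look something like this: -
$ ls /srv/salt/states/frr
bgp-edge.sls  init.sls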
The pillar top file is laid out similarly: -
base:
  '*':
    - consul.{{ grains['nodename'] }}
In here, I’m referring to a grain (a fact of the host), in this case the nodename. Looking at the contents of the Consul directory, this will start to make sense: -
$ ls consul
config-01.sls exodus.sls meshuggah.sls teramaze.sls vanhalen.sls vpn-01.sls vyos-01.sls
db-01.sls git-01.sls ns-03.sls testament.sls vektor.sls vps-shme.sls
Within here, I can apply node specific variables, without having to refer to every individual host in the top file.
State files
A basic state file looks something like the below: -
/srv/salt/states/base/init.sls
base_packages:
  pkg.installed:
    - pkgs:
      - tcpdump
      - lldpd
      - jq
      - moreutils
The first line (base_packages) refers to the name of the action you are taking. The second refers to the action (i.e. installing packages). Below that, there is the pkgs field, followed by a list of packages to install.
The equivalent in Ansible would be: -
tasks:
  - name: Install base packages
    package:
      state: present
      name:
        - tcpdump
        - lldpd
        - jq
        - moreutils
In some ways, the Ansible configuration can be easier to understand, but is more verbose. You can often achieve similar results in Salt using less configuration.
Another example would be: -
{% if grains['os'] != 'Alpine' %}
/etc/systemd/system/consul.service:
  file.managed:
    - source: salt://consul/files/consul.service
    - user: root
    - group: root
    - mode: 0644

service.systemctl_reload:
  module.run:
    - onchanges:
      - file: /etc/systemd/system/consul.service
{% endif %}

/etc/consul.d/consul.hcl:
  file.managed:
    - source: salt://consul/files/consul.hcl
    - user: consul
    - group: consul
    - mode: 0640
    - template: jinja

{% for service in pillar['consul']['prometheus_services'] %}
/etc/consul.d/{{ service }}.hcl:
  file.managed:
    - source: salt://consul/files/{{ service }}.hcl
    - user: consul
    - group: consul
    - mode: 0640
{% endfor %}

consul_service:
  service.running:
    - name: consul
    - enable: True
    - reload: True
    - watch:
      - file: /etc/consul.d/consul.hcl
This is a snippet from a state file I have that installs Consul, and also adds additional services. The main points to note are the Jinja2 syntax (i.e. the if blocks and the for blocks), which allows you to apply some conditional and looping logic within the state file itself.
In the above, I’m saying: -
- If the OS is not Alpine Linux, install the SystemD unit file (Alpine uses OpenRC rather than SystemD, so this wouldn’t work on Alpine)
- I’m applying a templated Consul configuration (as noted with the field template: jinja)
- I’m looping through a list of services, as identified by the consul.prometheus_services Pillar, and copying files to the destination host that are named $SERVICE.hcl (e.g. BIND, node_exporter etc)
Pillars
A pillar file is referred to in the top file via its relative location, without the .sls extension (e.g. $PILLAR-DIRECTORY/consul/config-01.sls). Within a state file however, you do not refer to the file directory, instead referring to the Pillar it has created.
This is useful because it means that hosts can have common-named Pillars (e.g. consul.prometheus_services) but can be host specific. An example is below: -
config-01.sls
consul:
  bind_int: eth0
  prometheus_services:
    - node_exporter
In the above, the list of Prometheus Services available is just the node_exporter. Compare this to the below that is used on my virtual machine host: -
meshuggah.sls
consul:
  bind_int: br0
  prometheus_services:
    - node_exporter
    - cadvisor
    - libvirt
    - mikrotik
    - traefik
    - bird
As you can see, a lot more services are available on here. However if you refer back to the state configuration, I didn’t need to specify the pillar consul.prometheus_services.meshuggah. This is because pillars are referred to by the Pillar name and structure, rather than the filename.
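You can check this from the master by querying the pillar by name; the values returned come from whichever host-specific file the pillar top file loaded. The output below is roughly what you would expect for the meshuggah host: -
$ salt 'meshuggah' pillar.get consul:prometheus_services
meshuggah:
    - node_exporter
    - cadvisor
    - libvirt
    - mikrotik
    - traefik
    - bird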
Running Salt
Once you have put together some of your first state and pillar files, you can begin a Salt run. You have multiple ways of doing this: -
- Apply a specific state - This can be done per host, across hosts, or matching a regular expression
- Apply all states available - As above, you can match all hosts or a subset
Applying a specific state
To apply a specific state, you can run the following: -
$ salt 'vps*' state.apply consul
This would apply the state consul, and would apply it to one host only. You can also do a dry run by appending test=True to the end of the command, so that you can see what would happen if you applied it.
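For example: -
$ salt 'vps*' state.apply consul test=True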
You can apply across all hosts using: -
$ salt '*' state.apply consul
Applying all states
To apply all states, you use the following: -
$ salt 'vps*' state.highstate
This will apply every state that applies to this node in one run. This can be useful to ensure the host has the latest configuration of everything, or if you are bringing up a new host and want to apply all the configuration to bring it into service.
To run against all hosts use: -
$ salt '*' state.highstate
Again, append test=True at the end of the commands (e.g. salt 'vps*' state.highstate test=True) to see what changes would be made, without actually applying them.
$ salt '*' state.highstate test=True
[...]
----------
ID: consul_service
Function: service.running
Name: consul
Result: True
Comment: The service consul is already running
Started: 10:17:56.194608
Duration: 96.399 ms
Changes:
----------
ID: consul_reload
Function: cmd.run
Name: consul reload
Result: None
Comment: Command "consul reload" would have been executed
Started: 10:17:56.302612
Duration: 0.929 ms
Changes:
Summary for db-01
-------------
Succeeded: 11 (unchanged=1)
Failed: 0
-------------
Total states run: 11
Total run time: 8.170 s
[...]
Version control and cross team usage
As with most configuration management systems, you should be managing it using version control. I (like most of the planet it seems) use Git for this, and so I place my /srv/salt directory under Git-based version control. To do this, you would do something like: -
$ cd /srv/salt
$ git init
$ git remote add origin git://[email protected]/username/salt.git
$ git add .
$ git commit -m "First commit"
$ git push -u origin master
In terms of using Salt across teams (especially when using version control), it makes sense to have a Salt Admin group that all the people who administer Salt would be in. By doing this, people do not need to use sudo or doas to update files, and Git commits would be tied to a username (rather than root).
To do this, first create a group: -
$ sudo groupadd salt-admin
$ sudo usermod -aG salt-admin $USERNAME
Once you have all of your admin users in the group, you’ll need to apply a File ACL to the Salt directory. The acl package is not installed by default in Debian, so first do sudo apt install acl, and then do the following: -
$ cd /srv
$ sudo setfacl -Rm g:salt-admin:rwx salt
$ sudo setfacl -d -Rm u::rwx salt
$ sudo setfacl -d -Rm g:salt-admin:rwx salt
$ sudo setfacl -d -Rm other::--- salt
$ sudo setfacl -d -Rm other::rx salt
$ sudo setfacl -Rm other::--- salt
When the above is applied, you should see the following if you check using getfacl: -
$ sudo getfacl /srv/salt
# file: salt
# owner: root
# group: salt-admin
user::rwx
group::r-x
group:salt-admin:rwx
mask::rwx
other::---
default:user::rwx
default:group::r-x
default:group:salt-admin:rwx
default:mask::rwx
default:other::r-x
Now any user in the salt-admin group should be able to edit the files in the Salt directory, and also push them to version control (e.g. Git) as their own user.
Saltstack Caveats
In using Salt, I have come across the following caveats.
Python versions
Upgrading Python versions can break the Salt Master and Minions. You could consider pinning Python package versions or using Python virtual environments to avoid this.
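On Debian-based systems, one way to guard against this (a sketch; adjust the package names to match what your installation actually uses) is to hold the relevant packages so that routine upgrades cannot move them: -
$ sudo apt-mark hold salt-master salt-minion python3
$ apt-mark showhold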
Memory usage
The memory usage of the Salt Minion can be a bit high. At times I have encountered the minion using more memory than the applications running on the system. Bear in mind that it is still only a couple of hundred megabytes at peak, but if you are running virtual machines with 512MB of RAM this could cause issues.
Documentation
The documentation for Salt can be found here. It covers everything from the modules you can use (e.g. package installation, file modules, system service control etc) to how to use ad-hoc Salt commands and everything in between.
A combination of this and looking at examples should help you if you’re new to configuration management, or already familiar with an existing system.
Summary
In summary, Saltstack is a great option for configuration management across your estate. I haven’t delved into how you can manage network infrastructure with it (using salt-proxy) as I haven’t used it as of yet. Instead, I would refer to this post by Mircea Ulinic (who is a noted contributor to Salt) for more information about network automation using Saltstack.
For those of you who already know Ansible, it would be worthwhile to look into Saltstack. The initial setup takes longer, but once this is out of the way, I have found the speed at which it applies updates is improved over Ansible, mainly due to the fact that tasks are run by the Minions themselves (rather than your host machine).
I still prefer Ansible in the situations where you want to bootstrap a node, but for ongoing management of configuration and applications I would be happy using either Ansible or Salt to do so.