In previous posts on this site we have covered using both Ansible and Saltstack for configuration management. These are far from the only options out there, with Chef and Puppet also being very popular options.

With a recent job change came the chance to learn and use Puppet. For me, the best way to learn a different tool-set is to take something you have done before, and then try and do it using the new tool. In this case, we are using Puppet to configure Consul, Prometheus and the Prometheus node_exporter to monitor Linux machines (similar to a previous series using Saltstack).

What is Puppet?

Puppet allows you to manage a fleet of servers and other devices. The benefits of this are increasing consistency in application and infrastructure configuration, as well as reducing the toil (i.e. human interaction) of managing infrastructure.

Puppet’s first release was in 2005, which predates Chef (2009), Saltstack (2011) and Ansible (2012), with only CFEngine being released beforehand. While other configuration management systems were released between CFEngine and Puppet, only CFEngine is still under active development. This means that some of the early advances in configuration management were made with Puppet, with the influence being seen in nearly all configuration management tools since.

Puppet’s syntax is based upon a subset of the Ruby language. The Puppet Agent (the client that runs on servers) is written in Ruby as well. Until relatively recently, the server-side component of Puppet was also written in Ruby, but it has since transitioned to Clojure (which runs on the Java Virtual Machine).

Puppet has the concept of manifests, which are the equivalent of Saltstack’s state files, and Ansible’s playbooks. You can also run different environments, allowing you to have common code, but parameters specific to different classes of infrastructure (e.g. staging, production, different regions etc).

A collection of manifests, along with static files and templates, is known as a Puppet module. They are separated by directory (e.g. $module_directory/my_module/manifests and $module_directory/my_other_module/manifests), with the directory structure defining how they are referenced.

Variables can be defined directly inside the manifests themselves, or using Hiera.

Hiera?

Hiera is a way of separating data from the configuration itself. For example, if you want to install a package, the action to be taken is defined in a manifest (i.e. package installation), but the version can be specified in Hiera.

Hiera data is written in YAML. It can be environment-specific, and even node-specific. You could refer to different DNS servers, different package versions (e.g. a testing/beta package for your staging environment, prior to general release), or whatever else you deem necessary.
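
As a rough sketch (the hierarchy and values below are illustrative, not the exact ones used later in this post), an environment-level hiera.yaml and its data files might look like this: -

# environments/production/hiera.yaml
version: 5
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Common data"
    path: "common.yaml"

# environments/production/data/common.yaml
dns_servers:
  - 10.15.40.1

# environments/production/data/nodes/staging-host.example.com.yaml
dns_servers:
  - 10.15.40.53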

What does a manifest look like?

Puppet manifests contain classes, which are unique blocks of code that define an action (or actions) to be taken. They can be referred to directly, inherited by other classes (e.g. defining a base class, and then extending them in other classes where necessary), or you can refer to them from other classes, so that multiple “child” classes are referenced using a “parent” class.

Using Debian (i.e. apt) package installation as an example, a basic manifest looks like the below: -

class packages::qemu {
    package{'qemu-guest-agent': 
        ensure => 'installed'
    }
}

The equivalent Ansible code is below: -

- name: Install qemu-guest-agent
  package:
    name: qemu-guest-agent
    state: present

The equivalent Saltstack code would be: -

# Shorthand
qemu-guest-agent:
  pkg.installed

#Longhand
install_qemu_guest_agent:
  pkg.installed:
    - pkgs: 
      - qemu-guest-agent

The directory structure in Puppet also defines how you refer to classes. Each class must live in one of your module directories, inside a directory that matches the first part of the class name. For example, our class packages::qemu must be in the $module_directory/packages/manifests directory. If it was in a mypackages or qemupackages directory instead, Puppet would throw an error. Your directory structure matters more than in other configuration management systems, with Ansible and Saltstack not requiring named tasks/actions/blocks of code to live in specific locations.
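
Laid out as a tree, that naming convention looks something like this (init.pp is only needed if a top-level packages class exists): -

$module_directory/packages/
└── manifests
    ├── init.pp   # class packages
    └── qemu.pp   # class packages::qemu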

Roles and Profiles

Roles and Profiles is a way of structuring your Puppet code that adds abstractions which lead to more generic modules (rather than environment or node-specific code). This method was referenced in this post back in 2012, and has since become a semi-official way of defining Puppet code.

The idea is that rather than attaching classes (i.e. the code blocks previously mentioned) directly to a node or nodes, you create a node role and associate profiles with this node. For example, you could create a base role that includes standard packages, monitoring agents and other common configuration, but then create a separate role that is only used by the monitoring servers (in our case, Prometheus).

Roles can be extensions of other roles, meaning that if you choose to assign say, a prometheus role to the prometheus server, it will also inherit the base role too, which includes all common configuration.

A role can contain multiple profiles, which can be application-specific, refer to an application and its dependencies, or align with how your business deploys applications. For example, you could require that all applications register a service with Consul and have a set of firewall rules and SELinux policies applied, all of which would be referred to in a single profile (or in multiple profiles, grouped by class).

This means that it is perfectly possible to create a set of base roles and profiles that all nodes must use, but then additional profiles can be referenced for installing/managing the applications running on them.
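
As a minimal sketch of the pattern (the real role and profile classes used in this post are shown later, and the monitoring names here are illustrative), a profile wraps component modules and a role groups profiles: -

class profile::consul {
    include consul
}

class role::monitoring inherits role::base {
    include profile::consul
    include profile::prometheus
}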

Aims

In this post, we will do the following: -

  • Set up a Puppet Server
  • Install multiple Linux servers (physical and virtual) with Puppet agents
    • We will have machines running Debian (plain Buster and Proxmox), Rocky Linux, Ubuntu and Alma Linux
  • Run Prometheus on one of the servers
  • Install and run Hashicorp’s Consul on all of the servers (for service discovery)
  • Install and run the Prometheus Node Exporter on all of the servers
  • Register the Prometheus node_exporter as a service in Consul so that Prometheus can discover and monitor the servers

I decided to include Rocky and Alma Linux to see how well Puppet works on both Debian-based and RHEL-based distributions, while also seeing the progress of both distributions vying to be the “replacement” for pre-Stream CentOS.

Setting up a Puppet Server

The Puppet Server for this post is running on Debian Buster, with 2 CPU cores and 2G of memory. In a production scenario you would want to use more RAM and CPU, but for the purposes of this post, this is more than enough.

Add the Apt repository

To install the Puppet Apt repository, follow the instructions here. To summarise, you install a Debian package that adds the Puppet repository to your machine, including the relevant GPG keys.
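
For Debian Buster (the release used for the server in this post), the summarised steps look something like this: -

$ wget https://apt.puppet.com/puppet7-release-buster.deb
$ sudo dpkg -i puppet7-release-buster.deb
$ sudo apt update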

Install the Puppet Server

As the Puppet Server uses Clojure, first install a compatible Java Virtual Machine. You can install either a Java 8 or 11 compatible JVM. The easiest to get started with is the OpenJDK (install with apt install openjdk-11-jre-headless). If you want to choose an alternative (e.g. Amazon’s Corretto, the official Oracle JVM), then Puppet supports these too.

After this, install the Puppet Server using apt install puppetserver. You can also install the Puppet Agent (using apt install puppet-agent) so that the configuration of the server is also managed by Puppet.

Configure the server

After installing the server we then need to configure it. It has mostly sane defaults, but we do need to make a couple of changes. If you are using less than 4Gb of memory, or you are using significantly more (and therefore have more resources that Puppet could use), update the memory assigned to the JVM in /etc/default/puppetserver: -

# Modify this if you'd like to change the memory allocation, enable JMX, etc
JAVA_ARGS="-Xms1g -Xmx1g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"

The -Xms (initial memory assigned) and -Xmx (maximum amount of memory) are 2g (i.e. 2Gb) of memory by default. With the machine I am running, this would take up all available memory, so I reduced this to 1g (i.e. 1Gb) instead. If your Puppet Server has more memory available (say 8Gb, 16Gb or more), then raise this so that the server has more resources available to process more agents.

I also update the basemodulepath variable in /etc/puppetlabs/puppet/puppet.conf to point to multiple directories: -

basemodulepath = $codedir/forge:$codedir/modules:$codedir/custom:modules

The $codedir can be changed, but by default is /etc/puppetlabs/code (i.e. where all of your manifest and modules reside). The purpose of each directory is: -

  • modules - Standard directory for modules
  • forge - Modules downloaded from Puppet Forge
  • custom - Modules that I have created

Puppet Forge is a place where users can submit modules they have created, which allows you to make use of other people’s code in your Puppet environment. This can be everything from package management to firewall rules. Using modules from Puppet Forge allows you to focus on what is unique to your environment, with common tasks using tried and tested modules.

Add Puppet to $PATH

By default, Puppet is installed in the /opt/puppetlabs directory when on Linux. This is fine, except that if you want to interact with the Puppet Server or Puppet Agent, you have to refer to /opt/puppetlabs/bin/puppet or /opt/puppetlabs/bin/puppetserver.

The easiest way to change this is to add this directory to your $PATH variable (usually defined in ~/.bashrc, ~/.zshrc, ~/.profile, or another shell configuration file). However, Puppet usually requires privileged access (i.e. sudo or root), and sudo does not use your user’s $PATH. Instead, you can update an option in your sudo configuration called secure_path. This is the $PATH variable that sudo uses, which makes it available to all users with sudo access.

This variable is in /etc/sudoers on both Debian-based and RHEL-based distributions. For example, on Debian, you would update it to this: -

Defaults	secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/puppetlabs/bin"

At this point, you can run sudo puppet or sudo puppetserver directly.

Setting up machines as Puppet Agents

The next task is to install the agents on our nodes. In this post, I have installed the agent on: -

  • A Debian Buster virtual machine with 2Gb of RAM and 2 CPU cores, running the Puppet server
  • A Debian Buster virtual machine with 2Gb of RAM and 2 CPU cores, to run Prometheus
  • An Ubuntu virtual machine with 1 CPU core and 2Gb of RAM
  • A Rocky Linux virtual machine with 2 CPU cores and 2Gb of RAM
  • An Alma Linux virtual machine with 2 CPU cores and 2Gb of RAM
  • Two Proxmox (based upon Debian Buster) hypervisors that run all of the aforementioned machines
    • One is a Lenovo X220 with an Intel i5-2420m and 16Gb of RAM
    • One is a HP Elitebook 9470m with an Intel i7-3687u and 16Gb of RAM
    • Both are running in a Proxmox cluster

Installing the repositories

As with the Puppet Server installation, when on Debian or Debian-like systems (e.g. Ubuntu), download the Debian package that enables the repository. This is the same repository as used for the server. If you are on Debian, do the following: -

# Debian Buster
$ wget https://apt.puppet.com/puppet7-release-buster.deb

# Ubuntu Focal Fossa (20.04 LTS)
$ wget https://apt.puppet.com/puppet7-release-focal.deb

You then install the repository using dpkg -i puppet7-release-$DISTRO.deb.

On RHEL-based distributions, you can install the RPMs directly using: -

# RHEL7-based (eg CentOS 7, Red Hat Enterprise Linux 7, Oracle Linux 7)
$ rpm -Uvh https://yum.puppet.com/puppet7-release-el-7.noarch.rpm

# RHEL8-based (eg Rocky Linux 8, Alma Linux 8, Red Hat Enterprise Linux 8)
$ rpm -Uvh https://yum.puppet.com/puppet7-release-el-8.noarch.rpm

Installing the Puppet Agent

Once the repositories are enabled, install the Puppet Agent with either apt install puppet-agent, yum install puppet-agent or dnf install puppet-agent (depending on the distribution). The Puppet Agent is mostly self contained, so you do not need to install anything like Ruby or a Java Virtual Machine to start using it.

Once the agent is installed, you can register the agents against the Puppet Server.

Registering Agents

Point the agents at the Puppet Server

To register agents, they need to know what server they are registering with. You can update the /etc/puppetlabs/puppet/puppet.conf file, or you can run the following command: -

$ sudo puppet config set server $DNS_OF_PUPPETSERVER --section main

Replace $DNS_OF_PUPPETSERVER with the DNS hostname of your Puppet Server. For me, this is puppetserver.meshuggah.yetiops.lab. This updates the Puppet configuration file to refer to the correct Puppet Server, so that it looks like the below: -

[main]
server = puppetserver.meshuggah.yetiops.lab

SSL Bootstrap

Puppet Agents register against the server by creating a certificate signing request. This must be signed by the server to confirm that the Agent is authorized to run manifests and modules retrieved from the server.

To do this on a machine running the Puppet agent, run puppet ssl bootstrap: -

$ sudo puppet ssl bootstrap
Info: Creating a new RSA SSL key for prometheus.meshuggah.yetiops.lab
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for prometheus.meshuggah.yetiops.lab
Info: Certificate Request fingerprint (SHA256): 50:2A:55:A1:99:72:9F:A0:43:0C:A0:76:CD:01:24:9E:A0:63:19:EC:61:2E:50:1B:22:44:8A:4D:CF:0A:A4:3D
Info: Certificate for prometheus.meshuggah.yetiops.lab has not been signed yet
Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (prometheus.meshuggah.yetiops.lab).
Info: Will try again in 120 seconds.

Login to Puppet Server and sign the certificate with puppetserver ca sign --certname prometheus.meshuggah.yetiops.lab: -

$ sudo puppetserver ca sign --certname prometheus.meshuggah.yetiops.lab
Successfully signed certificate request for prometheus.meshuggah.yetiops.lab

Within 2 minutes (120 seconds), the Puppet Agent will check again whether the request has been signed, and then confirm that the process is complete: -

Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for prometheus.meshuggah.yetiops.lab
Info: Certificate Request fingerprint (SHA256): 50:2A:55:A1:99:72:9F:A0:43:0C:A0:76:CD:01:24:9E:A0:63:19:EC:61:2E:50:1B:22:44:8A:4D:CF:0A:A4:3D
Info: Downloaded certificate for prometheus.meshuggah.yetiops.lab from https://puppetserver.meshuggah.yetiops.lab:8140/puppet-ca/v1
Notice: Completed SSL initialization

Other tasks

Autosigning

If you manage a lot of infrastructure, the process of signing each individual server can become quite unwieldy and time consuming. Instead, you can use autosigning. The caveat is that this potentially makes your Puppet estate less secure, as the rules for what will be autosigned can be quite open.

However, if you are only on a private network with no external access, or are willing to overlook the potential security implications for the gains in time and automation, you can create rules in a file called /etc/puppetlabs/puppet/autosign.conf for Agents that should have their certificate signing requests automatically approved and signed. More information on this is available here, but at a very basic level, you can autosign anything with a certain domain name like so: -

$ cat autosign.conf
*.meshuggah.yetiops.lab

View all registered agents

At this point, you can view all registered agents on the Server with: -

$ sudo puppetserver ca list --all
Signed Certificates:
    prometheus.meshuggah.yetiops.lab         (SHA256)  27:0C:C5:79:00:20:21:36:72:90:3C:DD:C5:2A:BE:5D:27:BA:A7:A2:B5:53:11:C9:EC:F6:88:10:AF:92:EC:92	alt names: ["DNS:prometheus.meshuggah.yetiops.lab"]
    ubuntu-util.meshuggah.yetiops.lab        (SHA256)  BF:59:C4:B5:68:68:E1:2F:F1:74:D1:28:33:D3:75:00:17:63:6F:E9:2F:4D:2A:62:44:19:C0:E0:BC:3A:3A:AD	alt names: ["DNS:ubuntu-util.meshuggah.yetiops.lab"]
    almautil.meshuggah.yetiops.lab           (SHA256)  1F:C0:51:D8:BF:48:FF:F7:C8:1B:87:CA:21:A6:29:4E:F3:71:5E:AF:58:65:13:51:CD:61:4A:AA:44:FB:3B:F6	alt names: ["DNS:almautil.meshuggah.yetiops.lab"]
    pve.meshuggah.yetiops.lab                (SHA256)  04:C5:BE:D9:EE:55:71:1F:26:39:94:4C:1F:CC:9C:82:C6:2F:BD:A2:81:96:DC:5B:5B:61:0B:38:17:4E:5E:FD	alt names: ["DNS:pve.meshuggah.yetiops.lab"]
    pve2.meshuggah.yetiops.lab               (SHA256)  FD:66:5F:D7:46:AF:38:91:31:4A:26:65:76:FA:4B:AE:37:B0:FD:E8:A0:01:38:6F:3E:D5:19:E0:33:D1:B9:37	alt names: ["DNS:pve2.meshuggah.yetiops.lab"]
    rockyutil.meshuggah.yetiops.lab          (SHA256)  95:8B:C2:06:0F:D9:81:00:6D:B5:B7:DB:FA:78:74:C2:6C:54:1C:27:85:51:F8:12:D7:FF:80:69:3F:A8:B3:ED	alt names: ["DNS:rockyutil.meshuggah.yetiops.lab"]
    puppetserver.meshuggah.yetiops.lab       (SHA256)  F4:C6:B6:11:A7:CC:03:5E:0B:79:1A:9B:F9:37:7D:CA:9E:63:7A:5D:C2:DA:5D:0A:79:4F:EE:6C:93:44:4B:91	alt names: ["DNS:puppet", "DNS:puppetserver.meshuggah.yetiops.lab"]	authorization extensions: [pp_cli_auth: true]

If you run this without the --all option, it will show all unsigned requests, which is useful if you need to determine the certname (effectively the hostname) of Agents that have registered.

Forge modules

As we don’t want to write our own code for installing packages, defining firewall rules and other “basic” operating system tasks, we add Puppet modules from Puppet Forge. It is possible to install them all manually, however you may prefer to use a module manager for this.

There are multiple module managers available, including r10k (one of the best known and most popular). For the purposes of this post, I am using one called librarian-puppet (mainly because this is what my workplace uses). It is supplied as a Ruby Gem, which is installed using gem install librarian-puppet.

Once it is installed, run librarian-puppet init from inside the /etc/puppetlabs directory. This creates a Puppetfile, which you populate with the required modules and versions. I also run librarian-puppet config path code/forge to make sure that any modules installed by librarian-puppet go into a separate directory, reducing the risk of conflicting with my own modules.

My Puppetfile looks like the below: -

#!/usr/bin/env ruby
#^syntax detection

forge 'https://forgeapi.puppetlabs.com'

mod   'puppet-archive',              '3.2.0'
mod   'puppet-boolean',              '2.0.1'
mod   'puppetlabs-apt',              '4.5.1'
mod   'puppetlabs-concat',           '4.2.1'
mod   'puppetlabs-firewall',         '1.12.0'
mod   'puppet-firewalld',            '4.4.0'

If you then run librarian-puppet install, all of the above modules with their specified versions are installed, allowing you to refer to them in your code.
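
Putting the librarian-puppet steps from this section together, the workflow looks something like this: -

$ gem install librarian-puppet
$ cd /etc/puppetlabs
$ librarian-puppet init
$ librarian-puppet config path code/forge
$ librarian-puppet install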

Puppet Module: Setting an MOTD

The first module we create is a Message Of The Day module. This is displayed to users on login. First, create a folder called motd in your modules directory (in my case /etc/puppetlabs/code/custom/motd) and then create a manifests and templates directory.

The following is the directory structure of this module: -

/etc/puppetlabs/code$ tree custom/motd/
custom/motd/
├── manifests
│   └── env.pp
└── templates
    └── motd.erb

Manifest

The env.pp manifest looks like the below: -

class motd::env {
	$pve_host = hiera('pve_host', '')
	file { '/etc/motd':
		content => template('motd/motd.erb'),
		owner   => 'root',
		group   => 'root',
		mode	=> '644',
	}
}

The Puppet class is called motd::env. In this we: -

  • Lookup a variable called pve_host that we define in hiera data
  • Create an action to update a file (/etc/motd), based upon a template
  • The file will be owned by root, the group root, and have rw-r--r-- permissions

This is a very basic class, but the pattern is very common, as a lot of what we will do is based upon templates.
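
The pve_host value itself is not shown elsewhere in this post, but as an illustration, a node-specific Hiera file providing it might look something like this (the hostname and value here are hypothetical): -

# data/nodes/prometheus.meshuggah.yetiops.lab.yaml
pve_host: pve.meshuggah.yetiops.lab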

Template

The template itself looks like the below: -

Configured by Puppet
Environment: <%= @environment %>
Hostname: <%= @hostname %>
Hypervisor: <%= @pve_host %>

For those who have used Jinja2, the templating system in Puppet can take a little getting used to. However a lot of the functionality is similar, with the difference mainly being in syntax.

In the above, the <%= @var %> syntax is a placeholder for a variable (like {{ var }} in Jinja2). The environment and hostname variables are derived from Puppet facts about the node this runs on, whereas the pve_host variable comes from the Hiera lookup in the manifest above.

Profile

As mentioned, we are using roles and profiles to associate the actions required to nodes. This profile can be found in /etc/puppetlabs/code/environments/production/modules/profile/manifests/mymotd.pp

It is worth noting that I am not taking full advantage of the profiles approach, in that I am using a “default” classifier in the profile: -

class profile::mymotd
{
    case $facts['hostname_role'] {
        default: {
            include motd::env
        }
    }
}

We could derive different roles based upon a fact called hostname_role (which would be a created/custom fact applied to nodes). You could choose to include additional/different classes based upon matching a different hostname_role (e.g. if the hostname_role is monitoring-server, also include additional classes that update the MOTD).

In most cases in this post we will include a single class, but this is not a strict requirement.
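
If we did want to vary behaviour based on the hostname_role fact, the case statement might look something like the below (the monitoring-server value and the extra motd::monitoring class are hypothetical): -

class profile::mymotd
{
    case $facts['hostname_role'] {
        'monitoring-server': {
            include motd::env
            include motd::monitoring
        }
        default: {
            include motd::env
        }
    }
}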

Role

The role that refers to this profile is the base role: -

class role::base
{
    include profile::mymotd
    include profile::packages
    include profile::consul
    include profile::node_exporter
}

This role can be found in /etc/puppetlabs/code/environments/production/modules/role/manifests/base.pp.

As you can see, in this base role we are including multiple profiles, which is where the power of the Roles and Profiles abstraction method starts to come to fruition.

Now all nodes that match our base role will include the mymotd, packages, consul and node_exporter profiles. This is where the consistency of configuration management is achieved, as we can effectively guarantee a minimum set of applications/configuration in place for the entire Puppet-managed estate.

Matching nodes

Now we need to make sure this role is applied to nodes. This can be achieved by adding node declarations in our site.pp file (found in /etc/puppetlabs/code/environments/production/manifests/site.pp): -

node /^prometheus./     
{ 
    include role::prometheus
}

node default
{
    include role::base
}

As we can see, we have a default node definition, as well as a selector that matches hostnames beginning with prometheus. Any node with a hostname that doesn’t begin with prometheus will be matched by the default definition, and therefore receive the base role.

Puppet Module: Installing packages

The next module is for installing packages. In this, we are just installing the qemu-guest-agent, but it could be any package(s): -

class packages::qemu {
	package{'qemu-guest-agent': 
        ensure => 'installed'
    }
}

There are no templates being referenced, and we are using the distribution packages rather than adding any additional repositories.

Our profile is almost identical to the previous profile: -

class profile::packages
{
    case $facts['hostname_role'] {
        default: {
            include packages::qemu
        }
    }
}

As we saw in the previous module, we are referencing the profile from the role::base class, meaning all machines assigned the base role will have the qemu-guest-agent installed.

Puppet Module: Hashicorp Repository

The next module is one that is not referenced by a role or profile. Instead it is used by other modules. This module adds the Hashicorp Debian or RPM repositories so that we can install the latest versions of Hashicorp’s packages (e.g. Consul), rather than relying on out-of-date distribution packages.

The contents of the manifest are: -

class hashicorp::repository {
	$distro = $facts['os']['distro']['codename']
	if $facts['os']['family'] == 'Debian' {
		apt::source { 'hashicorp':
			comment  => 'Hashicorp Repository',
			location => 'https://apt.releases.hashicorp.com',
			release  => "${distro}",
			repos    => 'main',
			key      => {
				'id'=>'E8A032E094D8EB4EA189D270DA418C88A3219F7B',
				'server' => 'keyserver.ubuntu.com',
			},
			include  => {
				'deb' => true,
			},
		}
	}
	if $facts['os']['family'] ==  'RedHat' {
		yumrepo { 'hashicorp':
			enabled  => 1,
			descr    => 'Hashicorp Repository',
			baseurl  => 'https://rpm.releases.hashicorp.com/RHEL/$releasever/$basearch/stable',
			gpgkey   => 'https://rpm.releases.hashicorp.com/gpg',
			gpgcheck => 1,
			target   => '/etc/yum.repos.d/hashicorp.repo',
  		}
	}
}

In this, we are referencing facts (i.e. information about or applied to the Agent(s)) to get the codename (e.g. buster for Debian 10, focal for Ubuntu 20.04 LTS). We also run a conditional against the operating system family, which for us is either Debian-based or Red Hat-based.

If the host is Debian-based, add the Apt repository using the codename. If the host is Red Hat-based, then add the Hashicorp Yum repository. The codename is not required for the Yum repository, as this information is derived by Yum/DNF when installing packages.

I have separated this out because it is not Consul specific, and instead could be used to install Hashicorp’s Terraform, Packer or other tools. However it is only required if we want any of Hashicorp’s tools, so referring to it from the Consul module (and potentially Terraform or Packer modules) makes a lot of sense.

Puppet Module: Installing and running Consul

The Consul module has multiple manifests, allowing us to separate out the logical steps taken. This makes it easier to understand what the module is doing. The structure of the module is as follows: -

/etc/puppetlabs/code$ tree custom/consul/
custom/consul/
├── manifests
│   ├── config.pp
│   ├── firewall.pp
│   ├── init.pp
│   ├── install.pp
│   └── service.pp
└── templates
    ├── consul.client.hcl.erb
    ├── consul.prom_service.hcl.erb
    └── consul.server.hcl.erb

We use an init.pp manifest that contains other classes: -

class consul {
    contain hashicorp::repository
    contain consul::install
    contain consul::config
    contain consul::firewall
    contain consul::service

	Class['hashicorp::repository']
	~> Class['consul::install']
	~> Class['consul::config']
	~> Class['consul::firewall']
	~> Class['consul::service']
}

What we see here is that not only are we referencing all of our Consul manifests, but we also reference the Hashicorp module (specifically the hashicorp::repository class) so that we can install Consul directly from Hashicorp.

In our profile, rather than referencing each class in this module, we can just reference class consul to include all of the manifests.

Also, we can see that there are a number of Class['$CLASS_NAME'] references chained together. This guarantees the order in which each class is applied. The reason for this is that, without explicit relationships, Puppet does not apply resources strictly in the order they are written. We wouldn’t be able to start the Consul service until Consul is actually installed, so we need to ensure the step that controls the service does not trigger too early.

In my experience so far, the RHEL-based distributions installed Consul quickly enough that the run still succeeded without explicit ordering. On Debian-based distributions, however, Consul hadn’t finished installing by the time we wanted to configure it and start the service.

Installing Consul

Manifest

The install.pp manifest looks like the below: -

class consul::install {
	group { 'consul':
		name => 'consul',
	}

	user { 'consul':
		name   => 'consul',
		groups => 'consul',
		shell  => '/bin/false',
	}

	if $facts['os']['family'] == 'Debian' {
		package{'consul':
			ensure => 'latest',
			require => Class['apt::update']
		}
	}

	if $facts['os']['family'] == 'RedHat' {
		package{'consul':
			ensure => 'latest',
		}
	}

	file { '/var/lib/consul':
		ensure  => 'directory',
		owner   => 'consul',
		group   => 'consul'
	}
}

To explain the steps here, we are: -

  • Creating a consul user and group
  • Installing the latest consul package, as well as triggering an update of apt if running on a Debian-based system
    • Without the update, Debian-based distributions will not use the repository, instead using whatever is in their package archive (usually a very out of date version)
  • Creating the /var/lib/consul directory for the consul data, owned by the consul user and group

At this point Consul is installed, but is not running or configured.

Configuring Consul

Manifest

The next manifest is the config.pp manifest: -

class consul::config {
	$consul_dc = lookup('consul_vars.dc')
	$consul_enc = lookup('consul_vars.enckey')
	$servers = lookup('consul_vars.servers')
	$is_server = lookup('consul.server')

	if $is_server {
		file { '/etc/consul.d/consul.hcl':
			content => template('consul/consul.server.hcl.erb'),
			owner   => 'root',
			group   => 'root',
			mode	=> '644',
		}
	} else {
		file { '/etc/consul.d/consul.hcl':
			content => template('consul/consul.client.hcl.erb'),
			owner   => 'root',
			group   => 'root',
			mode	=> '644',
		}
	}
}

In the above we are: -

  • Using Hiera lookups to retrieve our Consul Datacentre (consul_vars.dc), encryption key (consul_vars.enckey), the list of Consul servers (consul_vars.servers) and whether the host is a Consul server itself (consul.server)
  • If the host has consul.server set to true, we use our consul.server.hcl.erb template
  • If the host has consul.server set to false, we use our consul.client.hcl.erb template

Hiera Data

The Hiera data being referenced here can be found in code/environments/production/data. We have a common.yaml file which applies to all nodes in the production environment, and a node-specific nodes/puppetserver.meshuggah.yetiops.lab.yaml that overrides the common.yaml file for that node only: -

common.yaml

consul:
    server: false
consul_vars:
    dc: meshuggah
    servers:
    - puppetserver.meshuggah.yetiops.lab
    enckey: "fUPA97Ga7IDLl/HST3H3xw=="

nodes/puppetserver.meshuggah.yetiops.lab.yaml

consul:
    server: true

What this means is that almost all Consul values are the same. The only exception is that the puppetserver will be a Consul server as well, rather than just a client.

Templates

The templates that we reference are for the server and the client: -

consul.server.hcl.erb

{
<%- @networking['interfaces'].each do |interface| -%>
  <%- if interface[1]['ip'] =~ /^10.15.40/ -%>
  "advertise_addr": "<%= interface[1]['ip'] %>",
  "bind_addr": "<%= interface[1]['ip'] %>",
  <%- end -%>
<%- end -%>
  "bootstrap_expect": 1,
  "client_addr": "0.0.0.0",
  "data_dir": "/var/lib/consul",
  "datacenter": "<%= @consul_dc %>",
  "encrypt": "<%= @consul_enc %>",
  "node_name": "<%= @hostname %>",
  "retry_join": [
  <% @servers.each do |server| -%>
    "<%= server %>",
  <%- end -%>
  ],
  "server": true,
  "autopilot": {
    "cleanup_dead_servers": true,
    "last_contact_threshold": "200ms",
    "max_trailing_logs": 250,
    "server_stabilization_time": "10s",
  },
  "ui": true,

consul.client.hcl.erb

{
<%- @networking['interfaces'].each do |interface| -%>
  <%- if interface[1]['ip'] =~ /^10.15.40/ -%>
  "advertise_addr": "<%= interface[1]['ip'] %>",
  "bind_addr": "<%= interface[1]['ip'] %>",
  <%- end -%>
<%- end -%>
  "data_dir": "/var/lib/consul",
  "datacenter": "<%= @consul_dc %>",
  "encrypt": "<%= @consul_enc %>",
  "node_name": "<%= @hostname %>",
  "retry_join": [
  <% @servers.each do |server| -%>
    "<%= server %>",
  <%- end -%>
  ],
  "server": false,
}

In these templates, we see our first use of loops and conditionals in Puppet/ERB (Embedded Ruby) template syntax. If we look at the server loop, and compare it to what we would use in Jinja2, you can start to see some similarities: -

ERB

<% @servers.each do |server| -%>
  "<%= server %>",
<%- end -%>

Jinja2

{% for server in servers -%}
  "{{ server }}",
{%- endfor -%}

There are differences, but not to the point of making it hard to understand. In the ERB example, the variable we will reference is inside pipes (|), and the variable to run the loop against has a .each method attached to it. In Jinja2, the language describes the action (i.e. for var in list_of_vars).

The conditional logic is also very similar: -

ERB

<%- if interface[1]['ip'] =~ /^10.15.40/ -%>
  [...]
<% end %>

Jinja2

{%- if (interface[1]['ip'] | regex_search("^10.15.40")) -%}
  [...]
{% endif %}

If anything, the ERB syntax is much less verbose.

To explain what is happening in the templates themselves: -

  • The network interface facts are checked
    • If one of the interfaces has an IP starting with 10.15.40 (my lab range), then use that for the Consul advertise_addr and bind_addr values
    • This ensures Consul is listening on the correct interface
  • We define the directory where Consul will store its data (/var/lib/consul)
  • We reference our Consul Datacentre, Encryption Key and the machine’s hostname
  • We list all of the Consul Servers so that when a Consul Client or Server starts, it attempts to reach an existing Consul Server first
  • In addition, if the machine is running as a Consul Server, the template: -
    • Sets the bootstrap_expect value to 1 (i.e. don’t wait for quorum, a single server is enough)
    • Sets server to true
    • Sets some timeouts/thresholds for Consul
    • Enables the UI

Configuring the firewall

This task is only for Red Hat-based distributions, as Rocky and Alma Linux both have firewalld enabled by default. This isn’t required for Debian, although in a production scenario it is advisable to run some form of firewall.

Manifests

The firewall.pp manifest looks like the below: -

class consul::firewall {
	if $facts['os']['family'] == 'RedHat' {
		firewalld::custom_service{'consul':
			short       => 'consul',
			description => 'Consul',
			port        => [
			  {
			      'port'     => '8301',
			      'protocol' => 'tcp',
			  },
			  {
			      'port'     => '8500',
			      'protocol' => 'tcp',
			  },
			  {
			      'port'     => '8600',
			      'protocol' => 'tcp',
			  },
			  {
			      'port'     => '8301',
			      'protocol' => 'udp',
			  },
			  {
			      'port'     => '8600',
			      'protocol' => 'udp',
			  },
			]
		}
		firewalld_service { 'Consul':
			ensure  => 'present',
			service => 'consul',
			zone    => 'public',
		}
	}
}

The above creates a firewalld custom service for Consul, and then attaches it to the public zone. This will allow Consul to communicate with other hosts.

As noted, we only apply this if the os.family fact contains RedHat (i.e. the machine is running a Red Hat-based distribution).

Enabling the service

Finally, we need to enable the Consul service. All distributions in this lab run systemd, so we use the same manifest and template for Debian-based and Red Hat-based distributions.

Manifests

The service.pp manifest looks like the below: -

class consul::service {
    service {'consul':
        ensure => 'running',
        enable => true,
        restart => "consul reload",
    }
}

When installed via the Hashicorp repositories, Consul includes a systemd unit file. The only interesting part here is the restart => "consul reload" section. This says that if another class/manifest triggers a reload/refresh of the service, Puppet should use the consul reload command rather than restarting the entire service.

In most cases, consul reload is all we need to use. For example, if we add or take away a service (using a Consul HCL or JSON file), a consul reload will discover the changes. A full restart has the potential to break the Consul cluster, or at least break the connection from Prometheus to Consul for service discovery. Triggering a reload is much safer.

Puppet Module: Install the node_exporter

Like the consul module, the node_exporter module has multiple manifests and templates too. The structure is as follows: -

/etc/puppetlabs/code$ tree custom/node_exporter/
custom/node_exporter/
├── manifests
│   ├── consul.pp
│   ├── firewall.pp
│   ├── init.pp
│   ├── install.pp
│   ├── prereqs.pp
│   └── service.pp
└── templates
    └── systemd.erb

Again, we use the init.pp manifest to include other classes: -

class node_exporter {
	contain node_exporter::prereqs
	contain node_exporter::install
	contain node_exporter::service
	contain node_exporter::firewall
	contain node_exporter::consul
}

Prerequisites

As the latest node_exporter isn’t packaged for Debian or Red Hat directly, we must create the users, groups and required directories before we attempt to install the node_exporter: -

class node_exporter::prereqs {
	group { 'node_exporter':
		name => 'node_exporter',
	}

	user { 'node_exporter':
		name   => 'node_exporter',
		groups => 'node_exporter',
		shell  => '/sbin/nologin',
	}

	file { [ '/opt/node_exporter', '/opt/node_exporter/exporters', '/opt/node_exporter/exporters/dist', '/opt/node_exporter/exporters/dist/textfile' ]:
		ensure => 'directory',
		owner  => 'node_exporter',
		group  => 'node_exporter',
	}
}

The first action is to create the node_exporter group. After this, we create the node_exporter user. Finally, we create the directory structure for the textfile_collector (i.e. where node_exporter will ingest metrics from files in a certain directory).

What you’ll notice here is that we are creating each directory in the tree. This is because Puppet does not have a native way of creating a directory and its parent directories if they do not already exist. To achieve this in Saltstack, you would use: -

/opt/exporters/dist/textfile:
  file.directory:
    - user: node_exporter
    - group: node_exporter
    - mode: 755
    - makedirs: True

The makedirs option is what creates the parent directories. In Ansible, we would use something like: -

- name: Create a directory if it does not exist
  ansible.builtin.file:
    path: /opt/exporters/dist/textfile 
    state: directory
    mode: '0755'

Ansible creates the parent directories natively, so no extra option is required.

At this point, we can now install the node_exporter.

Install the node_exporter

As no Debian or RPM package of the latest node_exporter version is available, we retrieve the node_exporter tarball from GitHub: -

class node_exporter::install {
    $install_path   = "/usr/local/bin"
    $package_org    = "prometheus"
    $package_name   = "node_exporter"
    $package_type   = "linux-amd64"
    $package_ensure = lookup("node_exporter.version")
    $archive_name   = "${package_name}-${package_ensure}.${package_type}.tar.gz"
    $repository_url = "https://github.com/${package_org}/${package_name}/releases/download"
    $package_source = "${repository_url}/v${package_ensure}/${archive_name}"
    
    archive { $archive_name:
        path         => "/tmp/${archive_name}",
        source       => $package_source,
        extract      => true,
        extract_command => 'tar xfz %s --strip-components=1',
        extract_path => $install_path,
        creates      => "${install_path}/${package_name}",
        cleanup      => true,
    }
}

What we can see here is that rather than specifying the full paths and URLs directly in the action, we use variables. This allows us to change the package type (e.g. use a different architecture), the package name, the location to download from and more, all in one place. In the above, the package_name variable is referenced three times, so being able to change it in one place avoids typos or forgetting to update it everywhere.

The Puppet archive action is able to use a URL as the source directly, rather than needing to download the file with another action first.

One point to note is the extract_command field. This is supplied because otherwise the node_exporter binary would end up in /usr/local/bin/node_exporter-1.1.2.linux-amd64/node_exporter rather than /usr/local/bin/node_exporter. Using the --strip-components flag means that the top-level directory inside the archive is “stripped”, with the contents of said directory being placed directly in /usr/local/bin instead.
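
With the 1.1.2 version used later in this post, the effect is roughly the same as running: -

$ cd /usr/local/bin
$ tar xfz /tmp/node_exporter-1.1.2.linux-amd64.tar.gz --strip-components=1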

The package version itself is sourced from hiera data. In this lab, it looks like this: -

node_exporter:
    version: 1.1.2

Defining and enabling the service

We use systemd to run the node_exporter, which requires us to create a systemd unit file. The manifest for this is below: -

class node_exporter::service {
	file { '/etc/systemd/system/node_exporter.service':
		content => template('node_exporter/systemd.erb'),
		owner   => 'root',
		group   => 'root',
		mode	=> '644',
	}

	service {'node_exporter':
        ensure => 'running',
        enable => true,
	}
}

This is similar to Consul, except we just need to create the unit file first. The referenced template looks like the below: -

[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter --collector.systemd --collector.wifi --collector.textfile --collector.textfile.directory=/opt/node_exporter/exporters/dist/textfile

[Install]
WantedBy=multi-user.target

There are no template variables in this, but it does give us the option to add them if required in future.

Firewall rules

As with Consul, this is for Red Hat-based distributions only, as they come with firewalld enabled by default: -

class node_exporter::firewall {
	if $facts['os']['family'] == 'RedHat' {
		firewalld::custom_service{'node_exporter':
			short       => 'node_exporter',
			description => 'Prometheus Node Exporter',
			port        => [
			  {
			      'port'     => '9100',
			      'protocol' => 'tcp',
			  }
			]
		}
		firewalld_service { 'Allow Node Exporter':
			ensure  => 'present',
			service => 'node_exporter',
			zone    => 'public',
		}
	}
}

The node_exporter runs on fewer ports than Consul does, so we only need to allow TCP/9100 for Prometheus to be able to scrape metrics from it.

Registering the service with Consul

Now that the node_exporter is running, we need to register a service with Consul. This service definition tells Consul what port the node_exporter is listening on, as well as any associated tags. This means that rather than needing to tell Prometheus what machines are running node_exporter, Prometheus discovers from Consul what each machine is running. Adding additional exporters and services requires no change on the Prometheus side.

The manifest looks like the below: -

class node_exporter::consul {
	$consul_service_name = "node_exporter"
	$consul_service_port = 9100

	file { "/etc/consul.d/${consul_service_name}.hcl":
		content => template('consul/consul.prom_service.hcl.erb'),
		owner   => 'root',
		group   => 'root',
		mode	=> '644',
	}

	exec {'consul_reload':
		command => '/usr/bin/consul reload',
		subscribe => [
			File["/etc/consul.d/${consul_service_name}.hcl"],
		],
		refreshonly => true,
    }
}

In this, we are referencing a template in the Consul module. This is so that rather than needing to have this template exist in every exporter module, we can use a common Consul template instead.

We also have an exec action, which is how arbitrary commands are run on the machine by Puppet. This has the subscribe option, which says that we will only reload Consul if there is a change in the Consul service file.

The template itself looks like the below: -

{"service":
  {
    "name": "<%= @consul_service_name %>",
    "tags": ["<%= @consul_service_name %>", "prometheus"],
    "port": <%= @consul_service_port %>,
  }
}

As you can see, we are using the Consul service name and port as placeholders in this template. In this case, they are replaced with node_exporter and 9100, producing the below: -

{"service":
  {
    "name": "node_exporter",
    "tags": ["node_exporter", "prometheus"],
    "port": 9100, 
  }
}

After this runs, the Consul client will register this service with the Consul server. This allows the Prometheus server (and anything else that can use Consul as a source) to discover all available services by only needing to contact the Consul server(s).
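
To verify this, you can query Consul from any node once the reload has happened. The output will look something like the below (depending on which services you have registered): -

$ consul catalog services
consul
node_exporter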

Puppet Module: Prometheus

This module is only applied to nodes with a hostname that starts with prometheus, as seen in the role, profile and site.pp manifests below: -

site.pp

node /^prometheus./		{ 
    include role::prometheus
}

Role - prometheus.pp

class role::prometheus inherits role::base
{
    include profile::prometheus
}

Note here that we are inheriting the base role as well, meaning we also include the MOTD, base package, Consul and node_exporter profiles too.

Profile - prometheus.pp

class profile::prometheus
{
    case $facts['hostname_role'] {
        default: {
            include prometheus
        }
    }
}

The structure of the module itself is below: -

$ tree custom/prometheus/
custom/prometheus/
├── manifests
│   ├── config.pp
│   ├── init.pp
│   ├── install.pp
│   ├── prereqs.pp
│   └── service.pp
└── templates
    ├── prometheus.yml.erb
    └── systemd.erb

The structure is very similar to the node_exporter module, except we are not registering any services with Consul.

Again, our init.pp references the other classes and manifests: -

class prometheus {
	contain prometheus::prereqs
	contain prometheus::install
	contain prometheus::config
	contain prometheus::service
}

Prerequisites

The following manifest is very similar to the node_exporter prerequisites manifest: -

class prometheus::prereqs {
	group { 'prometheus':
		name => 'prometheus',
	}

	user { 'prometheus':
		name   => 'prometheus',
		groups => 'prometheus',
		shell  => '/sbin/nologin',
	}

	file { [ '/etc/prometheus', '/etc/prometheus/alerts', '/etc/prometheus/rules', '/var/lib/prometheus' ]:
		ensure => 'directory',
		owner  => 'prometheus',
		group  => 'prometheus',
	}
}

Again, we create our group and user, and also create some directories for use with Prometheus. The first three directories contain Prometheus’s configuration, as well as locations for alerts (for use with Alertmanager) and recording rules. The last directory is where Prometheus stores its data.

Install Prometheus

Again, this manifest is very similar to the node_exporter module: -

class prometheus::install {
    $install_path   = "/usr/local/bin"
    $package_name   = "prometheus"
    $package_type   = "linux-amd64"
    $package_ensure = lookup("prometheus.version")
    $archive_name   = "${package_name}-${package_ensure}.${package_type}.tar.gz"
    $repository_url = "https://github.com/${package_name}/${package_name}/releases/download"
    $package_source = "${repository_url}/v${package_ensure}/${archive_name}"

	archive { $archive_name:
        path         => "/tmp/${archive_name}",
        source       => $package_source,
        extract      => true,
        extract_command => 'tar xfz %s --strip-components=1',
        extract_path => $install_path,
        creates      => "${install_path}/${package_name}",
        cleanup      => true,
	}
}

The only difference here is that package_name and package_org would be the same, so I have eschewed the latter in favour of using the same variable twice. If the organisation changes in future, it is quite straightforward to add a package_org variable to the repository_url instead.

Prometheus Configuration

Unlike the node_exporter, Prometheus requires a configuration file to run, so we use the following manifest to create one: -

class prometheus::config {
	$consul_servers = lookup('consul_vars.servers')
	file { '/etc/prometheus/prometheus.yml':
		notify  => Service['prometheus'],
		content => template('prometheus/prometheus.yml.erb'),
		owner   => 'prometheus',
		group   => 'prometheus',
		mode	=> '644',
	}
}

In this, we also reference the Consul servers, because we will be using them for service discovery. We also use the notify property so that the Prometheus service is refreshed (i.e. restarted) whenever the configuration file changes.

The configuration template looks like the below: -

global:
  scrape_interval:     15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9093

rule_files:
  - 'alerts/*.yml'
  - 'rules/*.yml'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets:
      - 'localhost:9090'
  - job_name: 'consul'
    consul_sd_configs:
<% @consul_servers.each do |server| -%>
      - server: '<%= server %>:8500'
<%- end -%>
    relabel_configs:
      - action: labelmap
        regex: __meta_consul_metadata_(.+)
      - source_labels: [__meta_consul_tags]
        regex: .*,prometheus,.*
        action: keep
      - source_labels: [__meta_consul_service]
        target_label: job

The two jobs are for Prometheus itself and for Consul, with the latter using consul_sd_configs to discover services from Consul. The relabel_configs section then keeps only the services tagged with prometheus in Consul, and sets each target’s job label to the Consul service name (e.g. node_exporter).

Defining and enabling the service

As with the node_exporter, we need to define the systemd service and run it, as per below: -

class prometheus::service {
	file { '/etc/systemd/system/prometheus.service':
		content => template('prometheus/systemd.erb'),
		owner   => 'root',
		group   => 'root',
		mode	=> '644',
	}

	service {'prometheus':
		ensure => 'running',
		enable => true,
	}
}

This references the below template: -

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.enable-lifecycle \
    --web.external-url=http://prometheus.noisepalace.home \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

As with the node_exporter, we aren’t supplying any template variables, but we have the option to if required in future.

Running Puppet

At this point, the only action we need to take on each machine running the Puppet agent is: -

$ sudo puppet agent -t

This will tell Puppet to contact the Puppet server, and run any modules that apply to it. To summarise, for our machines, this is what would apply: -

Machine        MOTD  Packages  Consul  node_exporter  Prometheus
puppetserver   ✓     ✓         ✓       ✓
pve            ✓     ✓         ✓       ✓
pve2           ✓     ✓         ✓       ✓
prometheus     ✓     ✓         ✓       ✓              ✓
rockyutil      ✓     ✓         ✓       ✓
almautil       ✓     ✓         ✓       ✓
debutil        ✓     ✓         ✓       ✓
ubuntuutil     ✓     ✓         ✓       ✓

By the end of the run on all of these, you would have: -

  • 8 servers running Consul client and the Prometheus node_exporter
  • The Puppet server running as a Consul server as well
  • The Prometheus server also running Prometheus

To show some of this in action, I have put together a video demonstration of running Puppet on all of the “util” servers: -

This doesn’t show Prometheus server or Puppet server being configured, but the steps to allow Puppet to manage them are identical to what is shown in the video.
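
If you want to see what Puppet would change without actually applying anything, the agent also supports a dry-run mode: -

$ sudo puppet agent -t --noop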

Comparisons with Ansible and Saltstack

As noted, I have used Saltstack and Ansible quite extensively in the past, whereas using Puppet is still relatively new for me. However already I have noticed some interesting differences (as well as similarities) with Ansible and Saltstack.

Ansible

One of the reasons Ansible was created was so that configuration management could be achieved without needing an agent. Puppet, by contrast, relies heavily on the use of agents. This has positives and negatives.

The setup process to get started with Ansible is minimal, in that you install Ansible (from package managers or pip) and then start managing machines straight away. If you have SSH, WinRM or other required access to machines (even API access is enough for some tasks), you can run your Playbooks against them. With Puppet, you need to set up the server, ensure working DNS (otherwise you might encounter issues), install agents on every machine, and then start applying your modules and manifests.

However when it comes to the speed of tasks/events, Puppet is almost always quicker to apply changes in my experience. As you need to run a Puppet server, this will often be placed much closer to your infrastructure than where you are. This means that latency and bandwidth of your connection becomes much less of an issue.

There are caveats to both of the above points, in that there is nothing stopping you from running Ansible playbooks from a server(s) much closer to your infrastructure (or using something like Ansible Tower/AWX), and also Puppet does have an agentless mode (called Bolt), but in “traditional” usage of both tools, the above still applies.

The syntax differences are massive, in that Ansible aims to be very human-readable, relying on a YAML-based configuration. Puppet by contrast has some idiosyncrasies to the syntax and approach that do take a while to understand in my experience. As mentioned, if you come from a Ruby background, Puppet will feel quite natural.

Finally, the Ansible ecosystem around managing other systems like BSD (OpenBSD, FreeBSD etc), networking equipment and more is very mature and well supported (the lack of agents being an advantage here). By contrast, Puppet does not package agents directly for the BSDs, and the requirement for running agents does make managing networking hardware much more difficult.

One major advantage of Puppet is its longevity. There are articles from the mid-2000s on some Puppet concepts which are still relevant. Also, as a symptom of nearly two decades of Puppet usage, the reliability of the product is stellar in my experience. The only time I have found Puppet to have issues is due to outside factors. Good examples would be non-synchronised clock sources (meaning certificate validation fails), failing DNS resolution, or hardware that is simply not powerful enough to run it. From an ongoing usage and maintenance perspective, I have yet to see any issues with Puppet.

Saltstack

As Saltstack uses agents in a similar way to Puppet, there are fewer major differences between it and Puppet.

One of the biggest differences I have noticed is that there is no requirement for Puppet agents to maintain ongoing communication with the Puppet server. The upside is that hosts which are not always available are well supported. Saltstack expects hosts to always be available, meaning that for hosts which are only intermittently online, it will time out when applying configuration. This can make applying configuration changes quite slow if some of the hosts are timing out, or at least produce a lot of errors when running tasks from the Saltstack Master.

However this also means that because the bulk of the communication in Puppet is controlled by the agent, as far as I am aware there is no mechanism to run ad-hoc commands against all managed hosts, or retrieve data from all managed hosts in real time. The closest you can get with Puppet is viewing facts/catalogs from the last time Puppet made contact with the server. By contrast, running a command against every single Salt-managed machine is as simple as salt '*' cmd.run 'sudo iptables -vnL', and you can get node-specific facts using salt '*' grains.items.

As with Ansible, the syntax differences between Saltstack and Puppet are quite large. Saltstack (like Ansible) uses YAML-based configuration, but with the added ability to dynamically generate tasks with Jinja2, rather than just using Jinja2 inside of templates. For example, you can build a task in Saltstack like the below: -

{% if pillar['consul'] is defined %}
{% if pillar['consul']['prometheus_services'] is defined %}
{% for service in pillar['consul']['prometheus_services'] %}
/etc/consul.d/{{ service }}.hcl:
  file.managed:
    - source: salt://consul/services/files/{{ service }}.hcl
    - user: consul
    - group: consul
    - mode: 0640
    - template: jinja

consul_reload_{{ service }}:
  cmd.run:
    - name: consul reload
    - watch:
      - file: /etc/consul.d/{{ service }}.hcl
{% endfor %}
{% endif %}
{% endif %}

This means that we have a dynamically-generated task based upon pillar data (pillar data is very similar to Hiera, for those more familiar with Puppet), meaning that there is less repetition in your Saltstack state files.

As far as I am aware, this is not possible with Puppet. However it is also not possible with Ansible either, so this is more of a general Saltstack advantage rather than specifically a difference to Puppet.

Again, like Puppet, Saltstack can be run without an agent. However, the “traditional” installation approach tends to use a central Salt server (called a master) with agents (minions) registered with it.

The support across operating systems with Saltstack is also quite good, and it has become increasingly popular in the networking community as well (see Mircea Ulinic’s blog for more on this).

Summary

Puppet is a great configuration management tool, and given that it has been around for nearly two decades, it is also incredibly mature at this stage.

If you are in the process of choosing a configuration management tool, Puppet is definitely worth your consideration. While the syntax is very different from tools like Ansible and Saltstack, this doesn’t necessarily make it more difficult, just different.

In future posts, I will show how to integrate Puppet with Cloud-Init for managing cloud-based infrastructure, as well as managing operating systems that are not Linux too!

For all the Puppet code in this blog, see my Puppet Lab repository.