Securely Bootstrapping Munki Using Puppet Certificates

Previously, I wrote about setting up a Munki Docker container to use Puppet SSL certificates.

Time to take it a step further: doing a full Munki bootstrap deployment using Puppet's client certificates.

The goal of the Munki bootstrap is to make it easy to set up and deploy a new computer simply by installing Munki on it and applying the bootstrap file. This process is easy and straightforward, and is the cornerstone of my deployment.

But now that we can stand up Munki with SSL client certificates, we can also guarantee secure delivery of all of our content over an authenticated SSL connection. Since Puppet provides the certificates for both the server and the clients, we need to install Puppet on the client so Munki can use its certificates for verification.

The General Idea:

If we’re going to bootstrap a machine with Puppet, I could just install Puppet and let it do all the work to install Munki. However, this puts a heavy burden on the Puppet master. While embracing Puppet for client configuration is certainly a possibility, I’m not at the point where I think Puppet is the best solution for OS X management, and I don’t want to turn my small Puppetmaster Docker container into the definitive source for Munki for my entire fleet.

In other words, I don’t want to rely on using Puppet to install Munki, because I don’t want to turn my Puppetmaster into a file server – it’s rather resource intensive to do so.

Instead, what I'd like to do is leverage the tools I already use – like DeployStudio and Munki – to do what they do best: install packages.

Here’s the scenario:

  1. DeployStudio installs OS X.
  2. The OS X installer includes:
    1. Local admin account
    2. Skip the first time Setup Assistant
    3. Puppet, Hiera, Facter
    4. Custom Mac-specific Facts for Facter
    5. Custom CSR attributes (see this blog post)
    6. Munki
    7. A .mobileconfig profile to configure Munki to use SSL to our repo
    8. Outset
    9. A script that sets hostname and runs the Puppet agent on startup
  3. On startup, the hostname is set.
  4. Once the hostname is set, Puppet runs.
  5. The Munki bootstrap file is created.
  6. Munki runs and installs all software as normal.

Preparing The Deployment:

For my deployments, I like using Greg Neagle’s CreateOSXInstallPkg (henceforth referred to by acronym “COSXIP”) for generating OS X installer packages. Rather than crafting a specific image to be restored using DeployStudio, a package can be used to both install a new OS as well as upgrade-in-place over an existing OS.

One of the perks of using COSXIP is being able to load up additional packages that are installed at the same time as the OS, in the OS X Installer environment.

As mentioned above, we’re going to use a number of specific packages. Here’s what the COSXIP plist looks like:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Source</key>
<string>/Applications/Install OS X Yosemite.app</string>
<key>Output</key>
<string>InstallYosemitePuppetMunki.pkg</string>
<key>Packages</key>
<array>
<string>AddMunkiToHostsDist.pkg</string>
<string>AddPuppetToHostsDist.pkg</string>
<string>ClearRegistrationSignedDist.pkg</string>
<string>create_admin-fl-SignedDist-1.9.pkg</string>
<string>puppet-3.7.4.Dist.pkg</string>
<string>hiera-1.3.4.Dist.pkg</string>
<string>facter-2.4.0.Dist.pkg</string>
<string>Facter-MacFactsDist.pkg</string>
<string>CSRAttributesCOSXIPDist.pkg</string>
<string>OutsetDist.pkg</string>
<string>OutsetPuppetAgentDist.pkg</string>
<string>munkitools-2.2.0.2399.pkg</string>
<string>ManagedInstalls-10.10-SSL-2.5.Dist.pkg</string>
</array>
<key>Identifier</key>
<string>org.sacredsf.installosx.yosemite.pkg</string>
</dict>
</plist>

Note that I've added "Dist" to the package names. Yosemite requires that all packages included in a COSXIP installer be distribution packages, so I converted each one to a distribution package using productbuild (as described in the link above) and appended "Dist" to distinguish them.
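
The conversion itself is a one-liner with productbuild; here's a minimal sketch, using placeholder package names rather than the actual packages above:

productbuild --package SomeComponent.pkg SomeComponentDist.pkg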

The Packages:

create_admin-fl-SignedDist-1.9.pkg is a local admin account created with CreateUserPkg.

ClearRegistrationSignedDist.pkg creates the files necessary to skip the first-boot OS X Setup Assistant.

Puppet, Hiera, and Facter are all downloaded directly from Puppetlabs (or via Autopkg recipe).

The Facter-MacFactsDist.pkg package is one I created based on the Mac-Facts facts that Graham Gilbert wrote, linked above.

CSRAttributesCOSXIPDist.pkg is a package I created to add a customized csr_attributes.yaml file to the client, for use with my custom CSR autosigning policy.

OutsetDist.pkg is a distribution copy of the latest release of Outset. Outset is an easy way to run scripts on firstboot, subsequent boots, or user login.

OutsetPuppetAgentDist.pkg is where the magic happens. A script is placed into /usr/local/outset/firstboot-scripts/, which executes and then deletes itself. This script is what does all the hard work. I’ll talk about this script in detail in the next section. This package is also available in my Github repo.

munkitools-2.2.0.2399.pkg is the current (as of writing time) release version of Munki, available from Munkibuilds.

ManagedInstalls-10.10-SSL-2.5.Dist.pkg is the package version of my ManagedInstalls-SSL profile for 10.10. This package was created using Tim Sutton’s make-profile-pkg tool.

The OS X installer is then built:
sudo ./createOSXinstallPkg --plist=InstallYosemite-PuppetMunki.plist

The resulting InstallYosemitePuppetMunki.pkg is copied to my DeployStudio repo.

Critical note for those following at home: if you do not have your Puppet server and Munki server available in DNS, you will need to add them to the clients’ /etc/hosts files. You can do so with a script like this:

#!/bin/sh  
echo "10.0.0.1 munki2.domain.com" >> "$3/private/etc/hosts"

You can use pkgbuild to create a simple payload-free package to do this, and then use productbuild to make it a Distribution package, and then add it to the COSXIP plist.
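
Here's a rough sketch of that packaging step – the identifier, version, and package names are placeholders, and the hosts script above goes into the scripts directory as an executable named postinstall:

mkdir -p scripts
# save the hosts-editing script above as scripts/postinstall, then:
chmod +x scripts/postinstall
pkgbuild --nopayload --scripts scripts \
  --identifier org.example.addmunkitohosts --version 1.0 AddMunkiToHosts.pkg
productbuild --package AddMunkiToHosts.pkg AddMunkiToHostsDist.pkg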

Deploying OS X:

The DeployStudio workflow is quite simple: erase the hard drive, install the “InstallYosemitePuppetMunki.pkg” to the empty “Macintosh HD” partition, automated, as a live install (not a postponed install).

Once the package is installed, the machine reboots automatically and begins the actual OS X installation process.

The First Boot:

The first boot triggers Outset, which delays the login window while it runs all the scripts in /usr/local/outset/firstboot-scripts/ (and then does other things, but those are not relevant for this blog post). I added a package above, OutsetPuppetAgentDist.pkg, which places a script into this folder for firstboot execution.

This script, PreparePuppet.sh, looks like this:


#!/bin/bash
# Stolen from PSU:
# https://wikispaces.psu.edu/display/clcmaclinuxwikipublic/First+Boot+Script
echo "Waiting for network access"
/usr/sbin/scutil -w State:/Network/Global/DNS -t 180
sleep 5
# Get the serial number
serial=`system_profiler SPHardwareDataType | awk '/Serial/ {print $4}'`
# If this is a VM in VMWare, Parallels, or Virtual Box, it might have weird serial numbers that Puppet doesn't like, so change it to something static
if [[ `system_profiler SPHardwareDataType | grep VMware` || `system_profiler SPHardwareDataType | grep VirtualBox` || `system_profiler SPEthernetDataType | grep "/0x1ab8/"` ]]; then
# Remove any silly + or / symbols
serial="${serial//[+\/]}"
fi
/usr/sbin/scutil --set HostName "$serial.sacredsf.org"
/usr/sbin/scutil --set LocalHostName "$serial.sacredsf.org"
/usr/sbin/scutil --set ComputerName "$serial.sacredsf.org"
/usr/bin/puppet agent --test --waitforcert 60 >> /var/log/puppetagent.log
/usr/bin/touch /Users/Shared/.com.googlecode.munki.checkandinstallatstartup

The goal of this script is to wait for the network to kick in, and then set the hostname to the serial number of the client, then trigger Puppet, followed by kickstarting the Munki bootstrap.

First, I borrowed a technique from Penn State University’s FirstBootScript to wait until network access is up. This is done with scutil, which waits up to 180 seconds for DNS to resolve before continuing. This ensures that all network services are up and running and the hostname can be successfully set.

serial=`system_profiler SPHardwareDataType | awk '/Serial/ {print $4}'`

Simple way to parse the serial number for the client.

When doing this in a virtual machine (like via VMWare Fusion, Parallels, or VirtualBox), sometimes you get weird things. VMWare Fusion, in particular, reaches into an ASCII grab bag to find characters for the serial number. It uses symbols like “+” and “/” in its serial number, and if I’m going to assign this to a hostname, Puppet is certainly going to complain about a hostname like “vmwpwg++jkig.sacredsf.org”. Better to avoid that completely by removing the special characters.

Once the hostnames are set with scutil, trigger a Puppet run. I use
--waitforcert 60
to give Puppet time (up to 60 seconds) to send a CSR to the Puppetmaster, get it signed, and bring it back. I also store the output in /var/log/puppetagent.log so I can see the results of the Puppet run (although this was really only necessary for testing, and probably worth removing for production).

When Puppet runs, it also checks for any configurations that need to be applied, and executes them. As part of its configurations, Puppet will copy all the appropriate Puppet certificates into the /Library/Managed Installs/certs/ directory, so Munki can use them for SSL client certificates.

Finally, the script then creates the Munki bootstrap, which can now run correctly thanks to the profile installed above, and the client certificates that Puppet has created.

The Puppet Configuration:

I mentioned two paragraphs ago that Puppet applies some configurations. Right now, my Puppet usage is very light and simple:

  1. Remove the 'puppet' user and group, because I don't need them.
  2. For OS X clients, copy the Puppet certificates to /Library/Managed Installs/certs/ so Munki can use them.

The first part is done with my site.pp manifest:


user { 'puppet':
  ensure => 'absent',
}

group { 'puppet':
  ensure => 'absent',
}

if $::operatingsystem == 'Darwin' {
  include munki_ssl
}

The second part is done with munki_ssl module I wrote, which you can find on Github. The manifest:


class munki_ssl {
  if $::operatingsystem != 'Darwin' {
    fail('The munki_ssl module is only supported on Darwin/OS X')
  }
  file { ['/Library/Managed Installs', '/Library/Managed Installs/certs/']:
    ensure => directory,
    owner  => 'root',
    group  => 'wheel',
  }
  file { '/Library/Managed Installs/certs/ca.pem':
    mode    => '0640',
    owner   => root,
    group   => wheel,
    source  => '/etc/puppet/ssl/certs/ca.pem',
    require => File['/Library/Managed Installs/certs/'],
  }
  file { '/Library/Managed Installs/certs/clientcert.pem':
    mode    => '0640',
    owner   => root,
    group   => wheel,
    source  => "/etc/puppet/ssl/certs/${clientcert}.pem",
    require => File['/Library/Managed Installs/certs/'],
  }
  file { '/Library/Managed Installs/certs/clientkey.pem':
    mode    => '0640',
    owner   => root,
    group   => wheel,
    source  => "/etc/puppet/ssl/private_keys/${clientcert}.pem",
    require => File['/Library/Managed Installs/certs/'],
  }
}

Aggressively check to make sure that we’re only doing this on OS X, and then use Puppet’s file resources to copy the Puppet certs from /etc/puppet/ssl/ to the appropriate names in /Library/Managed Installs/certs/.

Munki Configuration:

Using generic names makes it easy to configure Munki’s SSL settings with a profile, mentioned above:


<key>mcx_preference_settings</key>
<dict>
<key>InstallAppleSoftwareUpdates</key>
<true/>
<key>SoftwareRepoURL</key>
<string>https://munki2.domain.com/repo</string>
<key>SoftwareUpdateServerURL</key>
<string>http://repo.domain.com/content/catalogs/others/index-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1_release.sucatalog</string>
<key>SoftwareRepoCACertificate</key>
<string>/Library/Managed Installs/certs/ca.pem</string>
<key>ClientCertificatePath</key>
<string>/Library/Managed Installs/certs/clientcert.pem</string>
<key>ClientKeyPath</key>
<string>/Library/Managed Installs/certs/clientkey.pem</string>
<key>UseClientCertificate</key>
<true/>
</dict>

With this profile in place, Munki is configured to use SSL with client certificates – which are put into place by Puppet.

The last step of the script mentioned above is to kick off the Munki bootstrap, which can now run without problems.

Conclusions

It was a bit of a complicated process, but it's a way to guarantee secure delivery of content from out-of-the-box provisioning all the way to the end point. Even if there were a rogue Munki server operating at http://munki/repo or https://munki/repo/, using a non-default server name (admittedly, "munki2" is not very creative) helps mitigate that risk. The use of client certificates prevents rogue Munki clients from pulling data from our Munki server. The use of SSL prevents MITM attacks, and DeployStudio is configured to use SSL connections as well.

We can generally rest easy knowing that we have secure provisioning of new devices (or refreshing of old devices), and secure delivery of Munki content to our end clients.

(Mandatory Docker reference: my Puppetmaster and Munki are both running in the Docker containers mentioned in the blog post at the top of this one)

Enhancing Sal with Facter and Profiles

In a previous post, I showed how to set up Sal.

Sal's basic functionality is useful on its own for Munki reporting – completed installs, pending updates, OS versions, how many devices have checked in over the past 24 hours, and so on. In this post, I'm going to demonstrate how to get more out of Sal.

Adding in Facter:

You can add much more, though, with Puppet – more specifically, with the piece of Puppet called Facter. Facter is a separate program that works alongside Puppet and simply gathers information ("facts") about the host OS and stores them, ostensibly so that Puppet can determine the machine's current state and what needs to happen to bring it in line with configured policy.

At the bottom of Sal's client configuration guide is a small section on using custom Facter facts. You don't need Puppet to use Facter – you can download Facter on its own as part of Puppet Labs' open source software.

Note: if you’re an Autopkg user, you can find a Facter recipe in the official default repo: autopkg run Facter.download (or autopkg run Facter.munki if you have Munki configured with Autopkg).

Install Facter on your clients, either with Munki or by simply installing the Facter.pkg.

Test out Facter on that client by opening up the Terminal and running Facter:
facter
You’ll see a whole lot of information about the client printed out. Handy!
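
You can also ask Facter for individual facts by name – for example, a few of the standard built-in ones:

facter operatingsystem operatingsystemrelease virtual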

Additional Facts:

A nice thing about Facter is that it’s easy to extend and customize with additional facts, which are essentially just Ruby scripts. Puppet Labs has documentation on Custom Facts here.

Graham Gilbert, the author of Sal, has also written some helpful custom facts for Macs, which I’m going to use here.

We’re going to need to get these facts downloaded and onto our clients. Use whatever packaging utility you like to do this, but all of those .rb files have to go into Facter’s custom facts directory. There are lots of places to put them, but I’m going to place them in /var/lib/puppet/lib/facter/, where they can also be used by Puppet in the future.
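
If you just want to test without packaging, copying the facts by hand on a client looks something like this (assuming the .rb files are in your current directory):

sudo mkdir -p /var/lib/puppet/lib/facter
sudo cp *.rb /var/lib/puppet/lib/facter/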

Once those facts are installed on your client, you can run Facter again and access them using an additional argument:
sudo facter --puppet

Note that you now need sudo to see these extra facts. Facter needs administrative privileges to access them, so running facter --puppet without sudo will give you the same results we had previously (as though the new Mac facts weren't installed). This won't be a problem in practice, as the Sal postflight script, when executed by Munki, runs as root.

To make use of Facter with Sal, we need only run Munki again, which executes the Sal postflight:
sudo managedsoftwareupdate

When the run is complete, take a look at the machine’s information in Sal. You’ll now see a “Facter” toggle with all of those neat facts for that client machine.

Faster Client Configuration:

One of the instructions as part of my Sal setup post, as well as part of the official documentation, is to set the client’s preferences for Sal URL and the Machine Group key for it to use. This was done using the defaults command to write the preferences to the com.salsoftware.sal preferences domain.
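
As a refresher, that boils down to something like the following – ServerURL and key are the preference names Sal expects, and the key value here is a placeholder:

sudo defaults write /Library/Preferences/com.salsoftware.sal ServerURL "http://sal"
sudo defaults write /Library/Preferences/com.salsoftware.sal key "yourmachinegroupkey"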

Instead of using defaults at the command line, we could also provide a simple .plist file that contains the two keys (machine group key, and URL) and two values, and place that in /Library/Preferences/com.salsoftware.sal.plist. However, relying on .plist files to load preferences is problematic with cfprefsd, the preference caching daemon introduced in 10.9 Mavericks.

Well, if you can do it with defaults, you can do it with configuration profiles! Configuration profiles (also known as .mobileconfig files) allow us to enforce preference domain values – such as enforcing the key and URL values for com.salsoftware.sal.

Making a configuration profile by hand is madness, so it’s better to use a tool that already produces profiles effectively – such as Profile Manager, Apple Configurator, or any MDM suite. That’s a lot of work just to get a profile, though.

Instead, we can thank Tim Sutton for his awesome mcxToProfile script, which takes a .plist or existing MCX object and converts it into a profile. We could use mcxToProfile to convert an existing com.salsoftware.sal.plist into a profile, but that means we now need to handcraft a .plist file for each Machine Group key we create.
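
For a one-off conversion, the mcxToProfile invocation would look roughly like this – I'm assuming the --plist and --identifier options here, so check the script's own help for the exact flags:

./mcxToProfile.py --plist /path/to/com.salsoftware.sal.plist --identifier com.salsoftware.sal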

I'm not a fan of manual tasks. I'm a big fan of automation, and I like it when we make things as simple, automatic, and repeatable as possible. We want a process that will do the same thing every time. So rather than create a plist for each Machine Group I want a profile for, and then run the mcxToProfile script, I'm going to write another script that does it for me.

All of this can be found on my Github repo for SalProfileGenerator.

Writing the script:

Here’s the code for the generate_sal_profile.py script:

#!/usr/bin/python

import argparse
import os
import sys
from mcxToProfile import *

parser = argparse.ArgumentParser()
parser.add_argument("key", help="Machine Group key")
parser.add_argument("-u", "--url", help="Server URL to Sal. Defaults to http://sal.")
parser.add_argument("-o", "--output", help="Path to output .mobileconfig. Defaults to 'com.salsoftware.sal.mobileconfig' in current working directory.")
args = parser.parse_args()

plistDict = dict()

if args.url:
	plistDict['ServerURL'] = args.url
else:
	plistDict['ServerURL'] = "http://sal"

plistDict['key'] = args.key

newPayload = PayloadDict("com.salsoftware.sal", makeNewUUID(), False, "Sal", "Sal")

newPayload.addPayloadFromPlistContents(plistDict, 'com.salsoftware.sal', 'Always')

filename = "com.salsoftware.sal"

filename+="." + plistDict['key'][0:5]

if args.output:
	if os.path.isdir(args.output):
		output_path = os.path.join(args.output, filename + '.mobileconfig')
	elif os.path.isfile(args.output):
		output_path = args.output
	else:
		print "Invalid path: %s. Must be a valid directory or an output file." % args.output
		sys.exit(1)  # bail out rather than continuing with no output path set
else:
	output_path = os.path.join(os.getcwd(), filename + '.mobileconfig')

newPayload.finalizeAndSave(output_path)

Looking at the script, the first thing we see is that I’m importing mcxToProfile directly. No need to reinvent the wheel when someone else already has a really nice wheel with good tires and spinning rims that is also open-source.

Next, you see the argument parsing. As described in the README, this script takes three arguments:

  • the Machine Group key
  • the Sal Server URL
  • the output path to write the profiles to

The payload of each profile needs to be enforced settings for com.salsoftware.sal, with the two settings it needs – the key and the URL. The URL isn’t likely to change for our profiles, so that’s an easy one.

First, initialize mcxToProfile’s PayloadDict class with our identifier (“com.salsoftware.sal”), a new UUID, and filler content for the Organization, etc. We call upon mcxToProfile’s addPayloadFromPlistContents() function to add in “always” enforcement of the preference domain com.salsoftware.sal.

The obvious filename to use for our profile is “com.salsoftware.sal.mobileconfig”. This presents a slight issue, because if our goal is to produce several profiles, we can’t name them all the same thing. The simple solution is to take a chunk of the Machine Group key and throw it into the filename – in this case, the first 5 letters.

Once we determine that our output location is valid, we can go ahead and save the profile.

Ultimately we should get a result like this:

./generate_sal_profile.py e4up7l5pzaq7w4x12en3c0d5y3neiutlezvd73z9qeac7zwybv3jj5tghhmlseorzy5kb4zkc7rnc2sffgir4uw79esdd60pfzfwszkukruop0mmyn5gnhark9n8lmx9
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>PayloadContent</key>
	<array>
		<dict>
			<key>PayloadContent</key>
			<dict>
				<key>com.salsoftware.sal</key>
				<dict>
					<key>Forced</key>
					<array>
						<dict>
							<key>mcx_preference_settings</key>
							<dict>
								<key>ServerURL</key>
								<string>http://sal</string>
								<key>key</key>
								<string>e4up7l5pzaq7w4x12en3c0d5y3neiutlezvd73z9qeac7zwybv3jj5tghhmlseorzy5kb4zkc7rnc2sffgir4uw79esdd60pfzfwszkukruop0mmyn5gnhark9n8lmx9</string>
							</dict>
						</dict>
					</array>
				</dict>
			</dict>
			<key>PayloadEnabled</key>
			<true/>
			<key>PayloadIdentifier</key>
			<string>MCXToProfile.2e34dadf-df5a-4b3c-b729-3a2a7bb7e44a.alacarte.customsettings.dcaacd13-3fea-47eb-991d-c0183c640b2e</string>
			<key>PayloadType</key>
			<string>com.apple.ManagedClient.preferences</string>
			<key>PayloadUUID</key>
			<string>dcaacd13-3fea-47eb-991d-c0183c640b2e</string>
			<key>PayloadVersion</key>
			<integer>1</integer>
		</dict>
	</array>
	<key>PayloadDescription</key>
	<string>Included custom settings:
com.salsoftware.sal

Git revision: a9edc21c62</string>
	<key>PayloadDisplayName</key>
	<string>Sal</string>
	<key>PayloadIdentifier</key>
	<string>com.salsoftware.sal</string>
	<key>PayloadOrganization</key>
	<string>Sal</string>
	<key>PayloadRemovalDisallowed</key>
	<true/>
	<key>PayloadScope</key>
	<string>System</string>
	<key>PayloadType</key>
	<string>Configuration</string>
	<key>PayloadUUID</key>
	<string>2e34dadf-df5a-4b3c-b729-3a2a7bb7e44a</string>
	<key>PayloadVersion</key>
	<integer>1</integer>
</dict>
</plist>

Adjusting mcxToProfile:

On OS X, plists can be handled and parsed easily because plist handling is built into the Foundation frameworks. mcxToProfile itself incorporates several functions from Greg Neagle's FoundationPlist library, which handles plists more robustly than Python's built-in plistlib.

Because of the reliance on the OS X Foundation libraries, however, we can’t use FoundationPlist outside of OS X. Since Sal is built to run on multiple platforms, and the Docker image is built on Ubuntu, we can’t use FoundationPlist as the core of our plist handling functionality.

Thus, we’ll need to make some adjustments to mcxToProfile:

try:
	from FoundationPlist import *
except:
	from plistlib import *

In Tim Sutton's original version of the script, he imports the necessary Foundation libraries into Python and inlines the parts of FoundationPlist he needs. If we're going to make this more cross-platform friendly, we need to remove those dependencies.

So in my revision of mcxToProfile, I’ve removed all of the FoundationPlist functions completely from the code, instead relying on bundling a copy of FoundationPlist.py with the project. Instead of importing Foundation libraries, we’re going to try to use FoundationPlist – and if any part of that import goes wrong, we just abandon the whole thing and use Python’s built-in plistlib.

Dirty, but effective, and necessary for cross-platform compatibility.

Now we have a simple script for generating a profile for a Machine Key and URL for Sal that can run on any platform.

Automating the script:

Generating a single profile is a useful first step. The ultimate goal is to be able to generate all of the profiles we’ll need at once.

This script was written in Bash, rather than Python. You can find it in the Github repo here:

#!/bin/bash

profile_path=`printenv PROFILE_PATH`
if [[ ! $profile_path ]]; then
	profile_path="/home/docker/profiles"
fi

oldIFS="$IFS"
IFS=$'\n'
results=$( echo "SELECT key FROM server_machinegroup;" | python /home/docker/sal/manage.py dbshell | xargs | awk {'for (i=3; i<NF-1; i++) print $i'} )
read -rd '' -a lines <<<"$results"
IFS=$oldIFS
for line in "${lines[@]}"
do
	if [[ -z $1 ]]; then
		/usr/local/salprofilegenerator/generate_sal_profile.py $line --output $profile_path
	else
		/usr/local/salprofilegenerator/generate_sal_profile.py $line --url $1 --output $profile_path
	fi
done

It’s ugly Bash, I won’t deny. The README documents the usage of this script in detail.

The assumption is that this will be used within the Sal Docker container, and thus we can make use of environment variables. With that assumption, I'm also expecting that an environment variable PROFILE_PATH gets passed in that can be used as the location to place our profiles. Absent the environment variable, I chose /home/docker/profiles as the default path.

IFS=$'\n'
The purpose of the IFS here is to help parse a long string based on newlines.

The actual pulling of the machine group keys is the complex part. I’m going to break down that one liner a bit:
echo "SELECT key FROM server_machinegroup;"
This is the SQL command that will give us the list of machine group keys from the Postgres database.

python /home/docker/sal/manage.py dbshell
This invokes the Django manage.py script to open up a database shell, which allows us to execute database commands directly from the command line. Since dbshell opens up an interpreter, we’re going to pipe standard input to it by echoing the previous SQL query.

xargs
Without going into a huge amount of unnecessary detail about xargs, the purpose of this is simply to compress the output into a single line, rather than multiple lines, for easier parsing.

awk {'for (i=3; i<NF-1; i++) print $i'}

Pretty much any time I start using awk in Bash, you know something has gone horribly wrong with my plan and I should probably have just used Python instead. But I didn’t, so now we’re stuck here, and awk will hopefully get us out of this mess.

In a nutshell, this awk command prints fields $3 through NF-2 (the last word minus two). Since a dbshell query appends output saying how many rows were returned, we need to skip that number and the word "rows" at the very end, as well as the column header and separator at the beginning. The newline IFS we set earlier comes into play in the next step, when the results get split into an array.

Ultimately, this handles the odd formatting from dbshell and prints out just the part we want – the two Machine Group keys.
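
To make that concrete, here's a quick simulation – the sample line is an assumption of roughly what the dbshell output looks like after xargs flattens it (a header, a separator, two keys, and a row count):

echo "key ----- abc123 def456 (2 rows)" | awk '{for (i=3; i<NF-1; i++) print $i}'
# prints:
# abc123
# def456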

read -rd '' -a lines <<<"$results"

This takes the list of Machine Group keys produced by the long line and shoves it into a Bash array.

for line in "${lines[@]}"
The for loop iterates through the array. For each key found in the array, call the generate_sal_profile.py script.

As the README documents, the shell script does handle a single shell argument, if you want to pass a different URL than the default. If a shell argument is found, that is used as a --url argument to the generate_sal_profile.py script.

By calling the script, we now get a .mobileconfig profile for each Machine Group key. Those profiles can be copied off the Sal host (or out of the Sal container) and into a distribution system, such as Profile Manager, an MDM, or Munki. Installing profiles on OS X is a trivial matter, using the profiles command or simply double-clicking them and installing them via GUI.
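
For example, installing one of the generated profiles from the command line on a client (the path is a placeholder):

sudo profiles -I -F /path/to/com.salsoftware.sal.e4up7.mobileconfig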

Because I’m currently in a “dockerize ALL THE THINGS” phase right now, I went ahead and created a Docker image for Sal incorporating this profile generation script.

Conclusion

Munki is a very useful tool, and Sal by itself is a useful addition, but the best tools are the ones that can be extended. Munki, Sal, and Facter together provide great information about devices. Making Sal easy to install lessens the burden of setting it up, and makes the entire process of migrating to a more managed environment simpler.

Using Puppet with WebHelpDesk to Sign Certs In, Yes, You Guessed It, Docker

In a previous post, I showed how to use Munki with Puppet SSL Client certificates in a Docker image.

In that example, the Puppetmaster image is set to automatically sign all certificate requests. Good for testing, but not a good idea for production use.

Instead, we should look into Puppet policy-based signing to sign requests only based on some credentials or criteria we control. This means that random nodes can’t come along and authenticate to the Puppet master, and it also means that the Puppet admin won’t have to manually sign every node’s certificate request. Manually signing works great for testing, but it quickly spirals out of control when you’re talking about dozens, or hundreds (or thousands) of machines.

Puppet’s policy-based autosigning allows us to execute a script. The exit code of that script determines whether a certificate is signed or not (exit code 0 means we should sign). So we need to write a script that will check something about the client that lets us determine it’s “ours” or “safe,” and sign accordingly – or reject.
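
The contract is simple: Puppet passes the requesting certname as the script's first argument and pipes the CSR to its stdin. A trivial, purely hypothetical policy script might look like this (the domain is a placeholder):

#!/bin/sh
# Hypothetical autosign policy: certname arrives as $1, the CSR on stdin.
# Exit 0 to sign, anything else to reject.
certname="$1"
cat > /dev/null  # consume the CSR; this trivial example ignores its contents
case "$certname" in
  *.example.com) exit 0 ;;
  *) exit 1 ;;
esac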

Well, we have a really easy way to do that – why not look up the client in inventory? We have WebHelpDesk, with its customized Postgres database, which can track inventory for us. If we're using WebHelpDesk for inventory (as I am), then an autosign script that checks the WHD inventory for ownership is an effective way to screen cert requests.

One of WebHelpDesk’s best features, in my opinion, is its REST API, which allows us to make requests from WebHelpDesk’s backend in a more automated fashion than via the web interface. Using the REST API, we can develop scripts that will manage information for us – such as the one I wrote, WHD-CLI.

I’ve even made a separate Docker container for it (which is admittedly better documented than the original project), although we’re not actually going to use the container separately for this purpose (as there’s no way to get Puppet to use an autosign script that isn’t installed locally, so having it exist in a separate Docker container isn’t going to help us).

So, we have WebHelpDesk, which has inventory for our machines. We have a script, WHDCLI, which allows us to query WebHelpDesk for information about devices. We have the Puppetmaster container, which is running Puppet. Let’s combine them!

Building Puppetmaster with WHD-CLI installed:

The repo for this project is here. Start with the Dockerfile:

FROM macadmins/puppetmaster

MAINTAINER nmcspadden@gmail.com

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz
WORKDIR /home/requests
RUN python /home/requests/setup.py install
ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install
ADD puppet.conf /etc/puppet/puppet.conf
ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
ADD check_csr.py /etc/puppet/check_csr.py
RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

FROM macadmins/puppetmaster
Since we have a nice Puppet master container already, we can use that as a baseline to add our WHD-CLI scripts onto.

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz

Use ADD to download the Requests project. Requests is an awesome Python library for handling HTTP/S requests and connections, much more robust and much more usable than urllib2 or urllib3. Unfortunately, it’s not a standard library, so we’ll need to download a copy of the module in tarball form, then extract and install it ourselves.

WORKDIR /home/requests
The WORKDIR directive changes the local present working directory to /home/requests before the next command. This is equivalent to doing cd /home/requests.

RUN python /home/requests/setup.py install
Now we use the Python setuptools to install Requests so it’s available system-wide, in the default Python path.

ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install

The same thing happens here with WHD-CLI – download and extract the tarball, change the working directory, and install the package.

ADD puppet.conf /etc/puppet/puppet.conf
In the Puppetmaster image, we already have a Puppet configuration file – but as I documented previously, it’s set to automatically sign all cert requests. Since we’re changing the behavior of the Puppet master, we need to change the configuration file to match our goals.

Here’s what the new puppet.conf looks like:

[agent]  
    certname        = puppetmaster  
    pluginsync      = true  
  
[master]  
    certname        = puppet  
    confdir	    = /opt/puppet  
    vardir	    = /opt/varpuppet/lib/puppet/  
    basemodulepath  = $confdir/site-modules:$confdir/modules:/usr/share/puppet/modules  
    factpath        = $confdir/facts:/var/lib/puppet/lib/facter:/var/lib/puppet/facts  
    autosign        = $confdir/check_csr.py  
    hiera_config    = $confdir/hiera.yaml  
    rest_authconfig = $confdir/auth.conf  
    ssldir          = $vardir/ssl  
    csr_attributes  = $confdir/csr_attributes.yaml  

The major change here is the autosign directive is no longer set to “true.” Now, it’s set to $confdir/check_csr.py, a Python script that will be used to determine whether or not a certificate request gets signed. Note also the use of csr_attributes = $confdir/csr_attributes.yaml directive – that’ll come into play in the script as well.

ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
Add in a default WHD-CLI configuration plist. This will be used by WHD-CLI to get API access to WebHelpDesk.

ADD check_csr.py /etc/puppet/check_csr.py
Here’s the actual script that will be run whenever a certificate request is received on the Puppet master. An in-depth look at it comes later.

RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

As we’ll see later in-depth, the script will log its results to a logfile in /var/log/check_csr.out. To prevent possible permissions and access issues, it’s best to create that file first, and make sure it has permissions where the Puppet master can read and write to it.

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

These last two commands are copies of those from the original Puppetmaster image. Since we’re adding in new stuff to /etc/puppet, it’s important for us to make sure all the appropriate files end up in the right place.

As usual, you can either build this image yourself from the source:
docker build -t name/puppetmaster-whdcli .
Or you can pull from the Docker registry:
docker pull macadmins/puppetmaster-whdcli

Crafting Custom CSR Attributes:

The goal of an autosign script is to take information from the client machines (the Puppet nodes) and determine if we can sign it based on some criteria. In this use case, we want to check if the client nodes are devices we actually own, or know about in some way. We have WebHelpDesk as an asset tracking system, that contains information about our assets (such as serial number, MAC address, etc.), and we already have a script that allows us to query WHD for such information.

So our autosigning script, check_csr.py, needs to do all of these things. According to Puppet documentation, the autosigning script needs to return 0 for a successful signing request, and non-zero for a rejection. A logical first choice would be to ask the client for its serial number, and then look up the serial number to see if the machine exists in inventory, and exit 0 if it does – otherwise reject the request.

The first question is, how do we get information from the client? This is where the csr_attributes.yaml file comes into play. See the Puppet documentation on it for full details.

In a nutshell, the csr_attributes.yaml file allows us to specify information from the node that goes into the CSR (certificate signing request), which can then be extracted by the autosigning script and parsed for relevance.

Specifically, we can use the CSR attributes to pull two specific facts: serial number, and whether or not the machine is physical, virtual, or a docker container.

This is the csr_attributes.yaml file that will be installed on clients:

---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: mySerialNumber  
  1.3.6.1.4.1.34380.1.2.1.2: facter_virtual  

The two extension_request prefixes are special Puppet OIDs that allow us to add attributes to the CSR – essentially they’re labels for what kind of data can be put into the CSR.

Here’s an example of what it looks like in a VMWare Fusion VM, after installation:

sh-3.2# cat /etc/puppet/csr_attributes.yaml   
---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: VMYNypomQeS5  
  1.3.6.1.4.1.34380.1.2.1.2: vmware  

The serial number has been replaced with what the VM reports, and the “virtual” fact is replaced by the word “vmware”, indicating that Facter recognizes this is a virtual machine from VMWare. This will be important in our script.

For convenience, I have a GitHub repo for installing these attributes (built with Whitebox Packages) available here. A release package is available for easy download.

The Autosigning Script:

The autosign script, when called from the Puppetmaster, is given two things. The hostname of the client requesting a certificate is passed as an argument to the script. Then, the contents of the CSR file itself is passed via stdin to the script. So our script needs to be able to parse an argument, and then read in what it needs from stdin.

The full script can be found on GitHub. Here’s a pared-down version of the script, with many of the logging statements removed for easier blog-ability:

#!/usr/bin/python

import sys
import whdcli
import logging
import subprocess

LOG_FILENAME = '/var/log/check_csr.out'

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info('Start script')

hostname = sys.argv[1]

if hostname == "puppet":
	logger.info("It's the puppetmaster, of course we approve it.")
	sys.exit(0)

certreq = sys.stdin.read()

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

lineList = output.splitlines()

strippedLineList = [line.lstrip() for line in lineList]
strippedLineList2 = [line.rstrip() for line in strippedLineList]

try:
	trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
except:
	logger.info("No serial number in CSR. Rejecting CSR.")
	sys.exit(1)
	
serial_number = strippedLineList2[trusted_attribute1+1]
logger.info("Serial number: %s", serial_number)	  

try:
	trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
except:
	logger.info("No virtual fact in CSR. Rejecting CSR.")
	sys.exit(1)

physical_fact = strippedLineList2[trusted_attribute2+1]

if physical_fact == "virtual" or physical_fact == "vmware":
	logger.info("Virtual machine gets autosigned.")
	sys.exit(0)
elif physical_fact == "docker":
	logger.info("Docker container gets autosigned.")
	sys.exit(0)

# Now we get actual work done
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)
if not w.getAssetBySerial(serial_number):
	logger.info("Serial number not found in inventory.")
	sys.exit(1)

logger.info("Found serial number in inventory. Approving.")
sys.exit(0)

Let’s take a look at some of the notable parts of the script:

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
This sets the basic log level. This script has both INFO and DEBUG logging, so if you’re trying to diagnose a problem or get more information from the process, you could change level=logging.INFO to level=logging.DEBUG. It’s much noisier, so best for testing and probably not ideal for production.

Migrating the logging to standard out so that you can use docker logs is a good candidate for optimization.

hostname = sys.argv[1]
The hostname for the client is the only command line argument passed to the script. In a test OS X default VM, this would be “mac.local”, for example.

certreq = sys.stdin.read()
The actual contents of the CSR get passed in via stdin, so we need to read them and store them in a variable.

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

Here, we make an outside call to openssl. Puppet documentation shows that we can manually parse the CSR for the custom attributes using OpenSSL, so we're going to do just that in a subprocess. We pass the contents of certreq to the subprocess's stdin, so in essence we are running the equivalent of:
/usr/bin/openssl req -noout -text
with the CSR supplied on standard input rather than via an -in file.

Once we do some text parsing and line stripping (since the CSR is very noisy about linebreaks), we can pull out the first custom attribute, the serial number:
trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
If there’s no line in the CSR containing that data, that means the CSR didn’t have our csr_attributes.yaml installed (and is almost certainly not something we recognize, or at least not in a desired state and should be addressed). Thus, reject.

trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
Our second attribute is the Facter virtual fact. If we don’t find that either, then we still have an incorrect CSR, and thus it gets rejected.

if physical_fact == "virtual" or physical_fact == "vmware":
This was mostly for my own convenience, but I decided it was safe to Puppetize any virtual machine, such as a VMWare Fusion VM (or ESXi, or whatever). As VMs tend to be transient, I didn’t want to spend time approving these certs constantly as I spun test VMs up and down. Thus, they get autosigned.

elif physical_fact == "docker":
If it’s a Docker container getting Puppetized, autosign as well, for mostly the same reasons as above.

Once the CSR is parsed for its contents and some basic sanity checks are put into place, we can now actually talk to WebHelpDesk.
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)

Parse the .plist we passed in to the Puppetmaster image earlier for the API key and URL of WebHelpDesk, and load up the API. Note the False at the end of the WHD() call – that's to specify that we don't want verbose logging. If you're trying to debug behavior and want to see all the details in the log file, specify True here (or drop the extra arguments and just call whdcli.WHD(whd_prefs), since the other three arguments are optional).

if not w.getAssetBySerial(serial_number):
This is the real meat, right here – w.getAssetBySerial() is the function call that checks to see if the serial number exists in WebHelpDesk’s asset inventory. If this serial number isn’t found, the function returns False, and thus we reject the CSR by exiting with status code 1.

Putting It All Together:

So, we’ve got WebHelpDesk in a Docker image, using our customized Postgres. We’ve got our new-and-improved Puppetmaster with WHD-CLI. We’ve got our client configuration install package. We have all the pieces to make it work, let’s assemble it into a nice machine:

  1. First, run the data container for the Postgres database for WHD:
    docker run -d --name whd-db-data --entrypoint /bin/echo macadmins/postgres-whd Data-only container for postgres-whd

  2. Run the Postgres database for WHD:
    docker run -d --name postgres-whd --volumes-from whd-db-data -e DB_NAME=whd -e DB_USER=whddbadmin -e DB_PASS=password macadmins/postgres

  3. Run WebHelpDesk:
    docker run -d -p 8081:8081 --link postgres-whd:db --name whd macadmins/whd

  4. Configure WebHelpDesk via the browser to use the external Postgres database (see the penultimate section on Running WebHelpDesk in Docker for details).

  5. Once WebHelpDesk is set up and you’re logged in, you need to generate an API key. Go to Setup -> Techs -> My Account -> Edit -> API Key: “Generate” -> Save.

  6. Copy and paste the API key into com.github.nmcspadden.whd-cli.plist as the value for the “apikey” key. If you haven’t cloned the repo for this project, you can obtain the file itself:
    curl -O https://raw.githubusercontent.com/macadmins/puppetmaster-whdcli/master/com.github.nmcspadden.whd-cli.plist

  7. Create a data-only container for Puppetmaster-WHDCLI:
    docker run -d --name puppet-data --entrypoint /bin/echo macadmins/puppetmaster-whdcli Data-only container for puppetmaster

  8. Run Puppetmaster-WHDCLI. Note that I’m passing in the absolute path to my whd-cli.plist file, so make sure you alter the path to match what’s on your file system:
    docker run -d --name puppetmaster -h puppet -p 8140:8140 --volumes-from puppet-data --link whd:whd -v /home/nmcspadden/com.github.nmcspadden.whd-cli.plist:/home/whdcli/com.github.nmcspadden.whd-cli.plist macadmins/puppetmaster-whdcli

  9. Complete the Puppetmaster setup:
    docker exec puppetmaster cp -Rf /etc/puppet /opt/

  10. Configure a client:

    1. Install Facter, Hiera, and Puppet on an OS X VM client (or any client, really – but I tested this on a 10.10.1 OS X VM).
    2. Install the CSRAttributes.pkg on the client.
    3. If your Puppetmaster is not available in the client’s DNS, you’ll need to add the IP address of your Docker host to /etc/hosts.
    4. Open a root shell (it’s important to run the Puppet agent as root for this test):
      sudo su
    5. Run the Puppet agent as root:
      # puppet agent --test
    6. The VM should generate a certificate signing request and send it to the Puppet master, which parses the CSR, notices that it's a virtual machine, autosigns it, and sends the cert back.
  11. You can check the autosign script’s log file on the Puppetmaster to see what it did:
    docker exec puppetmaster tail -n 50 /var/log/check_csr.out

Here’s sample output from a new OS X VM:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: vmware
INFO:__main__:Virtual machine gets autosigned.

Here’s sample output from that same VM, but I manually changed /etc/puppet/csr_attributes.yaml so that the virtual fact is “physical”:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: physical
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): whd
INFO:__main__:Serial number not found in inventory.

Try this on different kinds of clients: Docker containers (a good candidate is the Munki-Puppet container which needs to run Puppet to get SSL certs), physical machines, other platforms. Test it on a machine that is not in WebHelpDesk’s inventory and watch it get rejected from autosigning.

Troubleshooting:

Manually run the script:

If you get a CSR that gets rejected and you’re not sure why, you can manually run the check_csr.py script itself on the rejected (or rather, disapproved) CSR .pem file. Assuming the hostname is “testvm.local”:
Run docker exec -it puppetmaster /bin/bash to open a Bash shell on the container, then:
cat /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem | /opt/puppet/check_csr.py "testvm.local"

Then, you can check the logs to see what the output of the script is. Assuming you’re still in the Bash shell on the container:
tail -n 50 /var/log/check_csr.out

Test WHDCLI:

If you’re running into unexpected failures with the autosigning scripts, or you’re not getting the results you expect, you can try manually running the WHDCLI to see where the problem might be:
docker exec -it puppetmaster /usr/bin/python
Once you’re in the Python interpreter, load up WHD-CLI:

>>> import whdcli
>>> whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
>>> w = whdcli.WHD(whd_prefs)

If you get a traceback here, it’ll tell you the reason why it failed – perhaps a bad URL, bad API key, or some other HTTP authentication or access failure. Embarrassingly, in my first test, I forgot to Save in WebHelpDesk after generating an API key, and if you don’t hit the Save button, that API key disappears and never gets registered to your WHD account.

Assuming that succeeded, try doing a manual serial lookup, replacing it with an actual serial number you’ve entered into WHD:

>>> w.getAssetBySerial("serial")

The response here will tell you what to expect – did it find a serial number? It’ll give you asset details. Didn’t find a match? The response is just False.

Conclusions

Important Note: Although this post makes use of Docker as the basis for all these tools, you can use the WHD-CLI script with a Puppetmaster to accomplish the same thing. You’d just need to change the WHD URL in the whd-cli.plist file.

One of the best aspects of Docker is that you can take individual pieces, these separate containers, and combine them into amazing creations. Just like LEGO or Minecraft, you take small building blocks – a Postgres database, a basic Nginx server, a Tomcat server – and then you add features. You add parts you need.

Then you take these more complex pieces and link them together. You start seeing information flow between them, and seeing interactions that were previously more difficult to setup in a non-Docker environment.

In this case, we took separate pieces – WebHelpDesk, its database, and Puppetmaster, and we combined them for great effect. Combine this again with Munki-Puppet and now you’ve got a secure Munki SSL environment with your carefully curated Puppet signing policies. There are more pieces we can combine later, too – in future blog posts.

Running Munki with Puppet SSL Client Certificates

Previously, I showed how you can run Munki in a Docker container. Then, I talked about how to build Munki to use Puppet for SSL certificates.

Assuming you’ve got a running Puppetmaster image (which I talked about building previously), let’s run the Munki-Puppet image we just built.

Running the Container:

Run a data-only container to keep our data in:
docker run -d --name munki-data --entrypoint /bin/echo macadmins/munki-puppet Data-only container for munki

Run the Munki container by linking it to the Puppetmaster:
docker run -d --name munki --volumes-from munki-data -p 80:80 -p 443:443 -h munki --link puppetmaster:puppet macadmins/munki-puppet

The notable additions in this docker run command:
-p 443:443
Since we’re adding SSL support to the Nginx webserver, we want to make sure that the container is accessible at port 443, the default SSL port.
--link puppetmaster:puppet
The --link argument allows us to tell the Munki container that it can access any exposed ports from the Puppetmaster container by the DNS entry for “puppet”. Since the Puppet agent always tries to access “puppet” to check in, this means that the Munki container will have no trouble with Puppet.

The first step after running the container is to check in with puppet:
docker exec munki puppet agent --test
Verify that it receives a signed certificate from the Puppetmaster.

Now we’ve got a running Nginx container with our Munki repo, except it’s still only serving content at port 80. We need to tell Nginx to use our SSL configuration.

Using SSL with Munki:

We have an empty Munki repo, so we should populate it first.
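
If you're starting from scratch, here's a rough sketch of populating it from an admin Mac that has the Munki tools installed and the repo mounted (the paths are placeholders):

/usr/local/munki/munkiimport --configure    # point munkiimport at the repo once
/usr/local/munki/munkiimport /path/to/SomeApp.dmg
/usr/local/munki/makecatalogs /path/to/munki_repo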

Once the repo has some content, we need to add in the Nginx SSL configuration.

You'll need to edit the provided munki-repo-ssl.conf file so that the name of the .pem certificate files matches what Puppet actually generated. For example, when you ran docker exec munki puppet agent --test above, you probably got output like this:

Info: Creating a new SSL key for munki.sacredsf.org
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for munki.sacredsf.org
Info: Certificate Request fingerprint (SHA256): [snip]
Info: Caching certificate for munki.sacredsf.org
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for ca
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for munki.sacredsf.org
Info: Applying configuration version '1422039029'
Info: Creating state file /var/lib/puppet/state/state.yaml

You can see the name of the certificate from the Puppetmaster:
docker exec puppetmaster puppet cert list --all

+ "munki.sacredsf.org" (SHA256) [snip]
+ "puppet"             (SHA256) [snip] (alt names: "DNS:puppet", "DNS:puppet.sacredsf.org")

To be even more thorough, look in the Munki’s Puppet certs directory:
docker exec munki ls -l /var/lib/puppet/ssl/certs/

total 8
-rw-r--r--. 1 puppet puppet 1984 Jan 23 18:18 ca.pem
-rw-r--r--. 1 puppet puppet 2021 Jan 23 18:52 munki.sacredsf.org.pem

That confirms the name of my cert is “munki.sacredsf.org.pem”, so let’s put that into munki-repo-ssl.conf:

server {
  listen 443;
  
  ssl	on;
  ssl_certificate	/var/lib/puppet/ssl/certs/munki.sacredsf.org.pem;
  ssl_certificate_key	/var/lib/puppet/ssl/private_keys/munki.sacredsf.org.pem;
  ssl_client_certificate	/var/lib/puppet/ssl/certs/ca.pem;
  ssl_crl	/var/lib/puppet/ssl/crl.pem;
  ssl_protocols	TLSv1.2 TLSv1.1 TLSv1;
  ssl_prefer_server_ciphers	on;
  ssl_ciphers	"EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
  ssl_verify_client    on;
  server_name munki;
  location /repo/ {
    alias /munki_repo/;
    autoindex off;
  }
}

The three important file paths that must be correct are ssl_certificate, ssl_certificate_key, and ssl_client_certificate. If any of these paths are wrong or can’t be found, Nginx will not start and your Docker container will immediately halt.

For reference, the ssl_protocols and ssl_ciphers are configured for perfect forward secrecy.

Otherwise, the configuration for Nginx for the Munki repo remains the same as the non-SSL version – we’re serving the file path /munki_repo as https://munki/repo/.

To get this new SSL configuration into the Nginx container, we’ll need to edit the existing configuration. Unfortunately, the base Nginx container is extremely minimal and doesn’t have vi or nano or anything. We could either install a text editor into the container, or just use a shell trick:

cat munki-repo-ssl.conf | docker exec -i munki sh -c 'cat > /etc/nginx/sites-enabled/munki-repo.conf'

Since we’ve changed the contents of a configuration file, we’ll need to restart Nginx. Let’s do that gracefully with Docker:
docker stop munki
docker start munki
Stopping the container will send a graceful “shutdown” signal to Nginx, and starting the container will bring it up as it expects.

Configure the clients to use Munki with SSL:

Detailed instructions on configuring Munki with SSL certificates can be found on the official wiki, but I’m going to recreate the steps here.

All of the following steps should be done on your OS X Munki client.

  1. If you haven’t already, run puppet agent --test as root to get a signed certificate.
  2. Copy the certs into /Library/Managed Installs/:
    1. sudo mkdir -p /Library/Managed\ Installs/certs
    2. sudo chmod 0700 /Library/Managed\ Installs/certs
    3. sudo cp /etc/puppet/ssl/certs/mac.local.pem /Library/Managed\ Installs/certs/clientcert.pem
    4. sudo cp /etc/puppet/ssl/private_keys/mac.local.pem /Library/Managed\ Installs/certs/clientkey.pem
    5. sudo cp /etc/puppet/ssl/certs/ca.pem /Library/Managed\ Installs/certs/ca.pem
  3. Change the ManagedInstalls.plist defaults:
    1. sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "https://munki/repo"
    2. sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoCACertificate "/Library/Managed Installs/certs/ca.pem"
    3. sudo defaults write /Library/Preferences/ManagedInstalls ClientCertificatePath "/Library/Managed Installs/certs/clientcert.pem"
    4. sudo defaults write /Library/Preferences/ManagedInstalls ClientKeyPath "/Library/Managed Installs/certs/clientkey.pem"
    5. sudo defaults write /Library/Preferences/ManagedInstalls UseClientCertificate -bool TRUE
  4. Test out the client:

    sudo /usr/local/munki/managedsoftwareupdate -vvv --checkonly
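
If managedsoftwareupdate reports SSL errors, it can help to test the TLS handshake by itself, outside of Munki. Here’s a quick, low-level check using the same certificate paths as the steps above; it assumes the repo is reachable at the “munki” hostname on port 443:

sudo openssl s_client -connect munki:443 \
  -CAfile "/Library/Managed Installs/certs/ca.pem" \
  -cert "/Library/Managed Installs/certs/clientcert.pem" \
  -key "/Library/Managed Installs/certs/clientkey.pem" < /dev/null

A successful run should end with a verify return code of 0 (ok); certificate problems surface here as handshake or verification errors, which are easier to diagnose than Munki’s higher-level output.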

Now you’ve got secure Munki communication from clients to server, using Puppet’s client certificates, all in Docker!

Building Munki with Puppet for SSL Client Certificates

Note: this is based on the README for the Munki-SSL docker container.

In a previous post, we ran a Docker container serving Munki repo content via Nginx. That works fine, but it only serves content over insecure HTTP. It’s generally in everyone’s best interest to use a secure connection between the Munki web server and its clients, and there are two ways to do that, both described in detail on the official Munki wiki: basic authentication or SSL client certificates.

Using SSL client certificates is not a trivial matter: you either have to set up your own CA or use a third-party CA to generate client certificates, and either way there’s a fair amount of work on both the server and the client end.

Thankfully, Sam Keeley has already written a great article about using Puppet, a configuration management tool, as the cornerstone for client-server certificate-based communication: https://www.afp548.com/2014/06/02/securing-a-munki-deployment-with-puppet-ssl-certificates/. I’ll be using this article as the basis for our configuration.

The general idea is that Puppet has its own CA, and it installs client certificates on each client you register to it, to guarantee secure communication between the Puppet master server and the clients. We can use the Puppet client certs to allow the Munki clients to communicate with the Munki webserver through certificate-based SSL.

We can simplify this process even further with the open-source Puppet built into a Docker container. See the build guide for a Docker Puppetmaster here.

Assuming you’ve got a running Puppetmaster container, let’s proceed to modify our Munki container to accommodate Puppet.

This is the process that was used to construct the macadmins/munki-puppet container, which is based on a branch of the original Munki container by groob.

Creating a Munki container that incorporates Puppet:

We have an existing Munki image, so we can use that as a base and just install Puppet on it. Here’s the Dockerfile:

FROM nmcspadden/munki
MAINTAINER Nick McSpadden nmcspadden@gmail.com
ENV PUPPET_VERSION 3.7.3
RUN apt-get update
RUN apt-get install -y ca-certificates
ADD https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb /puppetlabs-release-wheezy.deb
RUN dpkg -i /puppetlabs-release-wheezy.deb
RUN apt-get update
RUN apt-get install -y puppet=$PUPPET_VERSION-1puppetlabs1
ADD csr_attributes.yaml /etc/puppet/csr_attributes.yaml

The great thing about Dockerfiles is that they can be inherited from. Using the FROM directive, we take everything accomplished in the Munki container, and simply build on top of it.

ENV PUPPET_VERSION 3.7.3
Here, the puppet version is explicitly set so that we can get the same behavior every time we build the Docker image.

RUN apt-get update
RUN apt-get install -y ca-certificates
ADD https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb /puppetlabs-release-wheezy.deb
RUN dpkg -i /puppetlabs-release-wheezy.deb
RUN apt-get update
RUN apt-get install -y puppet=$PUPPET_VERSION-1puppetlabs1

In order to install an up-to-date version of Puppet, we need the correct Puppet Labs apt repo and GPG key. The Munki container is based on Debian wheezy, so we ADD the appropriate puppetlabs-release package and then use dpkg to install it. We also grab the updated ca-certificates package to guarantee that the certificate securing https://apt.puppetlabs.com/ is trusted.

Once we’ve installed the correct Puppetlabs repo and GPG key, we can install Puppet from that repo.

ADD csr_attributes.yaml /etc/puppet/csr_attributes.yaml
The CSR attributes file is described on Puppet’s website. It’s used to add extra information from the client to the CSR (Certificate Signing Request) that the puppet agent generates the first time it runs and tries to get a certificate from the Puppetmaster. On the master side, that extra information can be used by scripts that sign (or reject) certificate requests based on policy.

In our Puppetmaster container, CSR requests get automatically signed, so this isn’t important yet. In our next iteration, we’ll see how we can put extra information from the client to use in policy-based request signing.
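
For reference, csr_attributes.yaml is plain YAML with up to two top-level keys: custom_attributes, which are embedded only in the CSR itself (commonly a pre-shared key the CA can check before signing), and extension_requests, which get copied into the signed certificate as extensions. Here’s a minimal, purely illustrative example; the values are placeholders, not what this container actually ships with:

custom_attributes:
  1.2.840.113549.1.9.7: "a-preshared-key-placeholder"
extension_requests:
  pp_image_name: "yosemite-munki-client"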

Building the Container:

You can either build the container by git cloning the repo:
docker build -t name/munki-puppet .
or you can pull the automated build container:
docker pull macadmins/munki-puppet

Once you’ve got the container built, let’s run it.
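
The exact run command depends on how you already serve your Munki repo, but as a rough sketch (the munki-data data container, hostname, and published ports below are assumptions, not a definitive recipe):

docker run -d --name munki \
  -h munki \
  -p 80:80 -p 443:443 \
  --volumes-from munki-data \
  macadmins/munki-puppet

One detail that matters more than before: the hostname you give the container with -h is what the Puppet agent inside it will use, by default, as its certname when it requests a certificate from the Puppetmaster.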

Building a Puppetmaster with Docker

This is based on the README I wrote for the macadmins/puppetmaster image.

Puppet is an industrial-strength cross-platform configuration management engine. Though you’ll find lots of existing Puppetmaster images on the Docker registry, this one will serve as the baseline for other expanded uses of Puppet – such as using it with Munki and SSL client certificates.

This is a walkthrough for building a Puppetmaster Docker container.

Building Puppetmaster into a Docker container:

Let’s start with our Dockerfile:

FROM centos:centos6

MAINTAINER nmcspadden@gmail.com

ENV PUPPET_VERSION 3.7.3

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y yum-utils && yum-config-manager --enable centosplus >& /dev/null
RUN yum install -y puppet-$PUPPET_VERSION
RUN yum install -y puppet-server-$PUPPET_VERSION
RUN yum clean all
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/opt/puppet"]
RUN cp -rf /etc/puppet/* /opt/puppet/
VOLUME ["/opt/varpuppet/lib/puppet"]
RUN cp -rf /var/lib/puppet/* /opt/varpuppet/lib/puppet/
EXPOSE 8140
ENTRYPOINT [ "/usr/bin/puppet", "master", "--no-daemonize", "--verbose" ]

FROM centos:centos6
This is based on a CentOS 6 container, for no other reason than that I’m already familiar with CentOS 6. I could just as easily have used Ubuntu or Debian or any of the other base image variants. I chose CentOS 6 over 7 due to major changes in 7 (such as the inclusion of systemd) that I’m not quite familiar with.

ENV PUPPET_VERSION 3.7.3
This makes it easy to enforce a specific version of Puppet in the Dockerfile. As of this writing, open-source Puppet is at 3.7.3, so that’s what we’ll install. Pinning the version means future builds of this Dockerfile won’t suddenly change or update Puppet without deliberate admin intervention, and it also means updating Puppet later only requires changing one line.

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

As per Puppet’s installation instructions, we need to import the GPG key for the Puppet Labs yum repo, and then install the release package that configures the repo we’ll get the updated Puppet from.

RUN yum install -y yum-utils && yum-config-manager --enable centosplus >& /dev/null
RUN yum install -y puppet-$PUPPET_VERSION
RUN yum install -y puppet-server-$PUPPET_VERSION
RUN yum clean all

Install the yum utils, install puppet, install the puppet server, clean up after ourselves.

ADD puppet.conf /etc/puppet/puppet.conf
Add in the Puppet configuration file. That will be explored later.

VOLUME ["/opt/puppet"]
RUN cp -rf /etc/puppet/* /opt/puppet/
VOLUME ["/opt/varpuppet/lib/puppet"]
RUN cp -rf /var/lib/puppet/* /opt/varpuppet/lib/puppet/

Here, we expose two volumes for sharing – /opt/puppet/ and /opt/varpuppet/lib/puppet/. These are important, as we’ll configure Puppet to use these for configuration and dynamic data in puppet.conf. That way, we can share this out to data-only containers so we don’t lose anything if we ever remove the Puppetmaster container.

In addition to creating those two directories, we’re also copying the contents from the default directories for Puppet (/etc/puppet/ and /var/lib/puppet/) into our new alternate directories, so they’ll be pre-populated.

EXPOSE 8140
Puppet runs on port 8140, so we need that port available to the outside world.

ENTRYPOINT [ "/usr/bin/puppet", "master", "--no-daemonize", "--verbose" ]
The entrypoint starts Puppet in master mode, with verbose logging, as the primary process in the container. This is what allows us to run the container in detached mode, and easily check the logs.
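
Because the master runs in the foreground as the container’s main process, all of that verbose output lands on the container’s stdout, so once the container is running (we’ll get there shortly) you can follow it with:

docker logs -f puppetmaster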

The Puppet Configuration file:

[agent]
    certname        = puppetmaster
    pluginsync      = true

[master]
    certname        = puppet
    confdir         = /opt/puppet
    vardir	    = /opt/varpuppet/lib/puppet
    basemodulepath  = $confdir/site-modules:$confdir/modules:/usr/share/puppet/modules
    factpath        = $confdir/facts:/var/lib/puppet/lib/facter:/var/lib/puppet/facts
    autosign        = true
    hiera_config    = $confdir/hiera.yaml
    rest_authconfig = $confdir/auth.conf
    ssldir          = $vardir/ssl
    csr_attributes  = $confdir/csr_attributes.yaml

The important things to notice about puppet.conf:

  1. confdir and vardir are set to custom directories located in /opt/. We shared these volumes earlier with the VOLUME directives in the Dockerfile, so that means this data will exist in a shareable form that can be linked to other Docker containers.

  2. autosign is true. That means all client certificates will be automatically signed on request. This makes a fine default for testing, but for production use, we’ll want to change this (one possible approach is sketched just after this list).

  3. csr_attributes is set to a file called “csr_attributes.yaml”, which exists in /opt/puppet/. This isn’t necessary for this particular demo, but it’ll play a part in the next iteration of our Puppet docker container.
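
As promised above, one possible way to tighten up certificate signing for production (just an illustration, not necessarily the approach a later post will take) is policy-based autosigning: point autosign at an executable instead of true. Puppet runs that script for every incoming CSR, passing the requested certname as the first argument and the PEM-encoded CSR on stdin, and only signs the request if the script exits 0. A minimal sketch, assuming a hypothetical check_csr.sh stored in the confdir:

#!/bin/sh
# Hypothetical policy-based autosign script. Enable it in puppet.conf with:
#   autosign = $confdir/check_csr.sh
# and make the file executable. Puppet passes the requested certname as $1
# and the PEM CSR on stdin; exit 0 to sign, anything else to reject.

certname="$1"
cat > /dev/null    # this simple policy ignores the CSR body entirely

case "$certname" in
  *.sacredsf.org) exit 0 ;;   # sign anything from our own domain
  *)              exit 1 ;;   # reject everything else
esac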

You can build this container yourself (if you git clone the project) using:
docker build -t name/puppetmaster .
or you can pull the automated build from the Docker registry:
docker pull macadmins/puppetmaster

To use this container:

As always, we want a data-only container to keep all of Puppet’s configuration and dynamic data in. This is especially important as we need to preserve the Puppet certificates so that they’re not lost if the Puppetmaster container is removed:

docker run -d --name puppet-data --entrypoint /bin/echo macadmins/puppetmaster Data-only container for puppetmaster

Now run the container:
docker run -d --name puppetmaster -h puppet -p 8140:8140 --volumes-from puppet-data macadmins/puppetmaster
Here, we’ve set the hostname to “puppet.” The puppet agent will always try to reach the puppet master at the hostname “puppet” by default, so we’ll need to make that name resolvable in DNS. As explained earlier, we’re mapping container port 8140 to port 8140 on the Docker host.
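
Before moving on, it’s worth confirming from a prospective client that the published port is actually reachable. On an OS X client, for example (substitute your Docker host’s address for 10.0.0.1):

nc -z -w 5 10.0.0.1 8140 && echo "puppetmaster is reachable"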

There’s a critical step that needs to happen next:
docker exec puppetmaster cp -Rf /etc/puppet /opt/
We need to populate the Puppet configuration directory with all of the content in /etc/puppet/. Because of the order in which puppet.conf is read, /etc/puppet is populated with the default Puppet setup before Puppet discovers that there’s a new confdir directive. Thus /opt/puppet, despite being used as the confdir for all Puppet configuration, does not start out with any content in it, so we need to fix that manually.
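
A quick way to confirm the copy worked is to list the new confdir; you should see puppet.conf and the rest of the files that normally live in /etc/puppet:

docker exec puppetmaster ls /opt/puppet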

Now that Puppet has started, let’s see a list of certs:
docker exec puppetmaster puppet cert list -all
Only one certificate exists so far – the puppetmaster itself.

Puppetizing a client:

I did all this on an OS X 10.10.1 VM, but this will work on any client.

  1. Install Puppet, Hiera, and Facter onto a client.

  2. Add the IP of your Docker host to /etc/hosts (or configure DNS so that your Docker host is reachable at “puppet”). For example, if your Docker host IP is 10.0.0.1:
    10.0.0.1 puppet

  3. Test puppet on client running as root:
    # puppet agent --test
    You should see the certificate request being generated and autosigned.

  4. Verify cert signing on puppetmaster docker container:
    docker exec puppetmaster puppet cert list -all
    You should see the certificate for the client’s hostname in the list.

  5. Run the test again on the client:
    # puppet agent --test
    This verifies that the cert exists and was confirmed.
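
If you later need to know exactly where this client stored its signed certificate and private key (those paths come up again when pointing Munki at the Puppet certs), you can ask Puppet directly instead of guessing:

sudo puppet config print certname ssldir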

Conclusion

We’ve got a working Puppet master in a Docker container! This will make a baseline Docker image we can expand upon.