Building Munki with Chef in Docker

I’ve previously written about building Munki in Docker, and then going a step further and using Puppet’s SSL client certificates for Munki in Docker. That method allowed us to set up a Munki server in a Docker container that used Puppet’s SSL certificates to offer a secure Munki connection with Puppet clients.

Continuing on the theme of testing out Chef for DevOps, I’d like to try a similar workflow. See my previous blog post about setting up SSL client certificates with Chef.

Once we have a Chef server configured similarly to the aforementioned blog post, let’s build a Munki Docker container that utilizes Chef.

Chef already has existing containers to build on, but we’re not going to use those. Rather than rebuild an nginx server on top of a container already containing Chef, I’m instead going to add Chef to my existing Munki container.

Start with the basic Munki container:


FROM nginx
RUN mkdir -p /munki_repo
RUN mkdir -p /etc/nginx/sites-enabled/
ADD nginx.conf /etc/nginx/nginx.conf
ADD munki-repo.conf /etc/nginx/sites-enabled/
VOLUME /munki_repo
EXPOSE 80

All that really needs to happen here is to make another Dockerfile that just adds Chef to this. We’re going to steal the method for this from Chef’s existing containers. The new Dockerfile:


FROM nmcspadden/munki
RUN apt-get -yqq update
RUN apt-get -yqq install curl lsb-release
RUN curl -L https://getchef.com/chef/install.sh | bash -s -- -v 11.16.2 -P container
ADD client.rb /etc/chef/client.rb
ADD validation.pem /etc/chef/validation.pem

The upside to doing it this way: this is how Opscode builds their own Chef containers, so we’re doing exactly what they want and expect. The downside is that we have to install curl just to use it once, during build – and never again. This violates the principles of a small single-purpose Docker container, as I outlined in a previous blog post here. That being said, it’s easier to just do it the way Chef expects and go with it.

Note the two files that have to be added – client.rb and validation.pem.

The client.rb is straightforward: it’s just the client configuration file for this container, telling it how to access Chef:


log_location STDOUT
chef_server_url "https://chef.sacredsf.org:443/organizations/ssh"
validation_client_name "ssh-validator"
ssl_verify_mode :verify_peer

The validation.pem is called the Chef validator. It’s the private key that gets created when you create an organization (note that the docs refer to it as file.key instead of file.pem, but it’s the same file). This file needs to be added to all clients before they can register with the Chef server, and this Munki container is no exception. You’ll need to copy your organization’s private key file from the Chef server or workstation to the Docker host to build this image.
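
For example (hypothetical path and login – adjust to wherever you saved your organization’s validator key when you created the organization), copying it into the Docker build directory might look like this:

scp admin@chef.sacredsf.org:/path/to/ssh-validator.pem ./validation.pem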

As you can see in the client.rb file, I’m specifying the Chef server URL, the name of the validator (based on the organization I created, mentioned above), and enforcing SSL verification – which requires a trusted SSL certificate for your Chef server (meaning you can’t use the self-signed default certificate provided by the Chef server on first run).
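
Since ssl_verify_mode :verify_peer will fail against an untrusted certificate, it’s worth checking ahead of time what certificate your Chef server actually presents. A quick sketch of that check from the Docker host:

echo | openssl s_client -connect chef.sacredsf.org:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates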

With all the pieces in place, it’s time to build the container:
docker build -t "nmcspadden/munki-chef" .
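
Once the build finishes, a quick sanity check – overriding the container’s normal Nginx startup command – confirms that the Chef client actually made it into the image:

docker run --rm nmcspadden/munki-chef chef-client --version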

Next post is where we run it!

Importing JAMF Casper Suite Data into WebHelpDesk Inventory with Docker

One of WebHelpDesk’s many useful features is the ability to set up discovery connections that pull data from other sources into its own database.

WebHelpDesk comes with a built-in JAMF Casper Suite connection (which will be referred to as the “JSS” from here on), but it only applies to Computers – if you have Casper for iOS, it doesn’t pull iOS devices. I have Casper only for iOS devices, with no Computers at all, so this discovery connection fails for me.

That’s not a big problem, though, because you can also set up a raw database or table view to pull data from instead. Since WHD can pull data from any database it can access, we just need to find a way to get that JSS data into a nice flat database.

The JSS does have its own MySQL database to store data in, but in general we don’t really want to access that directly. We want to keep our databases separate and isolated from each other, both for safety and robustness. Also, the tables and the way Casper stores information in its database are private implementation details, and we can’t rely on those staying the same between updates. It’s entirely possible that Casper 9.7, or Casper 10, will change the existing database structure, and any discovery connection we set up in WebHelpDesk that pulls data from certain tables might break. There’s not much point in setting up an automation that can’t be relied upon.

Instead, Docker makes it easy for us to set up a quick database for WHD to access. All that needs to be done is writing a script that pulls data from Casper and populates the database, so that WHD can use a discovery connection to access it.

Writing the script:

The repo for this script can be found on Github here.

The script makes use of Shea Craig’s incredible Python-JSS tool, which provides a Python interface to the JSS’s REST API, also documented on MacBrained.org.

The script requires the presence of two files:

The database preferences file needs to have credentials for access to a Postgres database to store the data in for WHD to pull from.

The JSS preferences file needs to have credentials for access to a Casper instance, along with a username and password with API permissions.

Rather than repost the entire script, I’m going to discuss two specific functions:

CreateCasperImportTable(conn)
This function takes an SQL connection (created by the psycopg2 connector) and then creates the basic Casper table that we’re going to populate, if it doesn’t already exist (thus providing idempotence).

SubmitSQLForDevice(thisDevice, conn, j)
This is the main meat of the script – what the Postgres manual calls an “upsert.” It takes a given device (i.e. a single device record pulled from the JSS), creates a SQL function called merge_db containing all of the relevant fields from that device, and then updates the matching record in the new database table if it already exists, or inserts it if it isn’t found.
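
As a rough sketch of the pattern (not the script’s actual SQL – the real version wraps this logic in a merge_db function as the Postgres manual suggests, and the values and columns here are purely illustrative beyond serial and macaddress), an update-or-insert run against the jssimport database created later in this post might look like this:

docker exec -i jssimport-db psql --dbname jssimport --username jssdbadmin <<'EOSQL'
-- create the table if it isn't there yet (idempotent)
CREATE TABLE IF NOT EXISTS casperimport (
    serial     text PRIMARY KEY,  -- also the sync column WebHelpDesk uses
    macaddress text
);
-- "upsert": update the row if the serial already exists...
UPDATE casperimport SET macaddress = '00:11:22:33:44:55' WHERE serial = 'EXAMPLE123';
-- ...and insert it only if the update matched nothing
INSERT INTO casperimport (serial, macaddress)
    SELECT 'EXAMPLE123', '00:11:22:33:44:55'
    WHERE NOT EXISTS (SELECT 1 FROM casperimport WHERE serial = 'EXAMPLE123');
EOSQL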

Since we have a working script that pulls data from the JSS via API and puts it into a Postgres container, it seems only natural to Dockerize it.

Creating a Docker image:

The Docker image can be pulled from the Docker hub here.

The Dockerfile:

FROM debian

MAINTAINER Nick McSpadden <nmcspadden@gmail.com>

ENV APP_DIR /home/jssi
ENV DB_HOST db
ENV DB_NAME jssimport
ENV DB_USER jssdbadmin
ENV DB_PASS password

ENV JSS_USER user
ENV JSS_PASS password
ENV JSS_URL https://casper:8443/

RUN apt-get update && apt-get install -y python-setuptools python-psycopg2 && apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ADD https://github.com/sheagcraig/python-jss/tarball/master /usr/local/python-jss/master.tar.gz
RUN tar -zxvf /usr/local/python-jss/master.tar.gz --strip-components=1 -C /usr/local/python-jss && rm /usr/local/python-jss/master.tar.gz
WORKDIR /usr/local/python-jss
RUN python /usr/local/python-jss/setup.py install

ADD https://github.com/nmcspadden/JSSImport/tarball/master $APP_DIR/master.tar.gz
RUN tar -zxvf /home/jssi/master.tar.gz --strip-components=1 -C /home/jssi/ && rm /home/jssi/master.tar.gz

ADD run.sh /run.sh
RUN chmod 755 /run.sh

CMD ["/run.sh"]

Aside from the epic number of environment variables, the Dockerfile is fairly simple. Install Python’s setuptools and the psycopg2 module, and then install python-jss by running setup.py install. Add the JSSImport script.

Finally, add the runtime execution script, which replaces the contents of the two preference files (the JSON and Plist files) with the environment variables, and then executes the sync.

This container is not intended to be a daemonized, long-running container. It executes the JSSImport script and then stops, so using --rm for each execution is ideal – we don’t really need it to linger around after finishing, since it doesn’t do anything on its own.

Running the Docker image:

Start a data-only container for the Postgres database:
docker run --name "jssi-db-data" -d --entrypoint /bin/echo macadmins/postgres Data-only container for jssimport-db
Start the database, change variables as necessary:
docker run --name "jssimport-db" -d --volumes-from jssi-db-data -e DB_NAME=jssimport -e DB_USER=jssdbadmin -e DB_PASS=password --restart="always" macadmins/postgres

Run the container, which will execute the JSSPull script and then delete itself:
docker run --rm --name jssi --link jssimport-db:db -e DB_NAME=jssimport -e DB_USER=jssdbadmin -e DB_PASS=password -e JSS_USER=user -e JSS_PASS=password -e JSS_URL=https://casper.domain.com:8443 macadmins/jssimport
[Note: This assumes you have a running Casper install, or have a JSS container you can link. That’s outside the scope of this post.]

The jssimport-db database container is now populated with the mobile device list from the provided JSS, and can be sourced for WebHelpDesk’s discovery connections.
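
To spot-check that the sync actually landed data, you can query the table directly (a quick sketch using the container and credentials from above):

docker exec -it jssimport-db psql --dbname jssimport --username jssdbadmin -c 'SELECT serial, macaddress FROM casperimport LIMIT 5;'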

Configuring WebHelpDesk to use it:

You’ll want to run a WebHelpDesk docker container. For more information about that, see my previous blog post on running WebHelpDesk in Docker.

However, since we’re now adding in a new database, you’ll want to run the WebHelpDesk container with our additional jssimport-db database linked in:
docker run -d -p 8081:8081 --link postgres-whd:db --link jssimport-db:jdb --name "whd" macadmins/whd

Once you’ve done the basic configuration of WebHelpDesk (also covered in the previous post above), you can now configure a discovery connection for our JSSImport database:

  1. Connection Name: “Casper” (whatever you want)
  2. Discovery Tool: Database Table or View
    1. Database Type: PostgreSQL – uncheck Use Embedded Database
    2. Host: jdb
    3. Port: 5432
    4. Database Name: jssimport
    5. Username: jssdbadmin
    6. Password: password
    7. Schema: Public
    8. Table or View: casperimport
    9. Sync Column: serial

Next you’ll need to map the fields in the “Attribute Mapping” section so that the appropriate database columns get mapped to the appropriate WHD fields. (For example, “serial” maps to the WHD field “Serial No.”, “macaddress” maps to the WHD field “MAC Address”, etc.)

Once attributes are mapped, save the connection, go back and hit “Sync Now” and watch the import.

The end result should be that WebHelpDesk now pulls in all the mobile devices from your Casper instance, through the roundabout means of API -> database -> discovery connection.

Using Puppet with WebHelpDesk to Sign Certs In, Yes, You Guessed It, Docker

In a previous post, I showed how to use Munki with Puppet SSL Client certificates in a Docker image.

In that example, the Puppetmaster image is set to automatically sign all certificate requests. Good for testing, but not a good idea for production use.

Instead, we should look into Puppet policy-based signing to sign requests only based on some credentials or criteria we control. This means that random nodes can’t come along and authenticate to the Puppet master, and it also means that the Puppet admin won’t have to manually sign every node’s certificate request. Manually signing works great for testing, but it quickly spirals out of control when you’re talking about dozens, or hundreds (or thousands) of machines.

Puppet’s policy-based autosigning allows us to execute a script. The exit code of that script determines whether a certificate is signed or not (exit code 0 means we should sign). So we need to write a script that will check something about the client that lets us determine it’s “ours” or “safe,” and sign accordingly – or reject.

Well, we have a really easy way to do that – why not look up the client in inventory? We have WebHelpDesk, with its customized Postgres database, which can track inventory for us. If we’re using WebHelpDesk for inventory (as I am), then an autosign script that checks the WHD inventory for ownership would be an effective way to screen cert requests.

One of WebHelpDesk’s best features, in my opinion, is its REST API, which allows us to make requests from WebHelpDesk’s backend in a more automated fashion than via the web interface. Using the REST API, we can develop scripts that will manage information for us – such as the one I wrote, WHD-CLI.

I’ve even made a separate Docker container for it (which is admittedly better documented than the original project), although we’re not actually going to use the container separately for this purpose (as there’s no way to get Puppet to use an autosign script that isn’t installed locally, so having it exist in a separate Docker container isn’t going to help us).

So, we have WebHelpDesk, which has inventory for our machines. We have a script, WHDCLI, which allows us to query WebHelpDesk for information about devices. We have the Puppetmaster container, which is running Puppet. Let’s combine them!

Building Puppetmaster with WHD-CLI installed:

The repo for this project is here. Start with the Dockerfile:

FROM macadmins/puppetmaster

MAINTAINER nmcspadden@gmail.com

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz
WORKDIR /home/requests
RUN python /home/requests/setup.py install
ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install
ADD puppet.conf /etc/puppet/puppet.conf
ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
ADD check_csr.py /etc/puppet/check_csr.py
RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

FROM macadmins/puppetmaster
Since we have a nice Puppet master container already, we can use that as a baseline to add our WHD-CLI scripts onto.

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz

Use ADD to download the Requests project. Requests is an awesome Python library for handling HTTP/S requests and connections, much more robust and much more usable than urllib2 or urllib3. Unfortunately, it’s not a standard library, so we’ll need to download a copy of the module in tarball form, then extract and install it ourselves.

WORKDIR /home/requests
The WORKDIR directive changes the local present working directory to /home/requests before the next command. This is equivalent to doing cd /home/requests.

RUN python /home/requests/setup.py install
Now we use the Python setuptools to install Requests so it’s available system-wide, in the default Python path.

ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install

Same thing happens here with WHD-CLI – download and extract the tarball, change the working directory, and install the package.

ADD puppet.conf /etc/puppet/puppet.conf
In the Puppetmaster image, we already have a Puppet configuration file – but as I documented previously, it’s set to automatically sign all cert requests. Since we’re changing the behavior of the Puppet master, we need to change the configuration file to match our goals.

Here’s what the new puppet.conf looks like:

[agent]  
    certname        = puppetmaster  
    pluginsync      = true  
  
[master]  
    certname        = puppet  
    confdir	    = /opt/puppet  
    vardir	    = /opt/varpuppet/lib/puppet/  
    basemodulepath  = $confdir/site-modules:$confdir/modules:/usr/share/puppet/modules  
    factpath        = $confdir/facts:/var/lib/puppet/lib/facter:/var/lib/puppet/facts  
    autosign        = $confdir/check_csr.py  
    hiera_config    = $confdir/hiera.yaml  
    rest_authconfig = $confdir/auth.conf  
    ssldir          = $vardir/ssl  
    csr_attributes  = $confdir/csr_attributes.yaml  

The major change here is that the autosign directive is no longer set to “true.” Now it’s set to $confdir/check_csr.py, a Python script that will be used to determine whether or not a certificate request gets signed. Note also the csr_attributes = $confdir/csr_attributes.yaml directive – that’ll come into play in the script as well.

ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
Add in a default WHD-CLI configuration plist. This will be used by WHD-CLI to get API access to WebHelpDesk.

ADD check_csr.py /etc/puppet/check_csr.py
Here’s the actual script that will be run whenever a certificate request is received on the Puppet master. An in-depth look at it comes later.

RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

As we’ll see later in depth, the script will log its results to a logfile at /var/log/check_csr.out. To prevent possible permissions and access issues, it’s best to create that file first, and make sure its permissions allow the Puppet master to read and write to it.

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

These last two commands are copies of those from the original Puppetmaster image. Since we’re adding in new stuff to /etc/puppet, it’s important for us to make sure all the appropriate files end up in the right place.

As usual, you can either build this image yourself from the source:
docker build -t name/puppetmaster-whdcli .
Or you can pull from the Docker registry:
docker pull macadmins/puppetmaster-whdcli

Crafting Custom CSR Attributes:

The goal of an autosign script is to take information from the client machines (the Puppet nodes) and determine if we can sign it based on some criteria. In this use case, we want to check if the client nodes are devices we actually own, or know about in some way. We have WebHelpDesk as an asset tracking system, that contains information about our assets (such as serial number, MAC address, etc.), and we already have a script that allows us to query WHD for such information.

So our autosigning script, check_csr.py, needs to do all of these things. According to Puppet documentation, the autosigning script needs to return 0 for a successful signing request, and non-zero for a rejection. A logical first choice would be to ask the client for its serial number, and then look up the serial number to see if the machine exists in inventory, and exit 0 if it does – otherwise reject the request.

The first question is, how do we get information from the client? This is where the csr_attributes.yaml file comes into play. See the Puppet documentation on it for full details.

In a nutshell, the csr_attributes.yaml file allows us to specify information from the node that goes into the CSR (certificate signing request), which can then be extracted by the autosigning script and parsed for relevance.

Specifically, we can use the CSR attributes to pull two facts: the serial number, and whether the machine is physical, virtual, or a Docker container.

This is the csr_attributes.yaml file that will be installed on clients:

---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: mySerialNumber  
  1.3.6.1.4.1.34380.1.2.1.2: facter_virtual  

The two extension_requests OIDs are special Puppet-reserved OIDs that allow us to add attributes to the CSR – essentially they’re labels for what kind of data can be put into the CSR.

Here’s an example of what it looks like in a VMWare Fusion VM, after installation:

sh-3.2# cat /etc/puppet/csr_attributes.yaml   
---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: VMYNypomQeS5  
  1.3.6.1.4.1.34380.1.2.1.2: vmware  

The serial number has been replaced with what the VM reports, and the “virtual” fact is replaced by the word “vmware”, indicating that Facter recognizes this is a virtual machine from VMWare. This will be important in our script.

For convenience, I have a GitHub repo for installing these attributes (built with Whitebox Packages) available here. A release package is available for easy download.

The Autosigning Script:

The autosign script, when called from the Puppetmaster, is given two things. The hostname of the client requesting a certificate is passed as an argument to the script. Then, the contents of the CSR itself are passed to the script via stdin. So our script needs to be able to parse an argument, and then read what it needs from stdin.

The full script can be found on GitHub. Here’s a pared-down version of the script, with many of the logging statements removed for easier blog-ability:

#!/usr/bin/python

import sys
import whdcli
import logging
import subprocess

LOG_FILENAME = '/var/log/check_csr.out'

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info('Start script')

hostname = sys.argv[1]

if hostname == "puppet":
	logger.info("It's the puppetmaster, of course we approve it.")
	sys.exit(0)

certreq = sys.stdin.read()

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

lineList = output.splitlines()

strippedLineList = [line.lstrip() for line in lineList]
strippedLineList2 = [line.rstrip() for line in strippedLineList]

try:
	trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
except:
	logger.info("No serial number in CSR. Rejecting CSR.")
	sys.exit(1)
	
serial_number = strippedLineList2[trusted_attribute1+1]
logger.info("Serial number: %s", serial_number)	  

try:
	trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
except:
	logger.info("No virtual fact in CSR. Rejecting CSR.")
	sys.exit(1)

physical_fact = strippedLineList2[trusted_attribute2+1]

if physical_fact == "virtual" or physical_fact == "vmware":
	logger.info("Virtual machine gets autosigned.")
	sys.exit(0)
elif physical_fact == "docker":
	logger.info("Docker container gets autosigned.")
	sys.exit(0)

# Now we get actual work done
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)
if not w.getAssetBySerial(serial_number):
	logger.info("Serial number not found in inventory.")
	sys.exit(1)

logger.info("Found serial number in inventory. Approving.")
sys.exit(0)

Let’s take a look at some of the notable parts of the script:

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
This sets the basic log level. This script has both INFO and DEBUG logging, so if you’re trying to diagnose a problem or get more information from the process, you could change level=logging.INFO to level=logging.DEBUG. It’s much noisier, so best for testing and probably not ideal for production.

Migrating the logging to standard out so that you can use docker logs is a good candidate for optimization.

hostname = sys.argv[1]
The hostname of the client is the only command line argument passed to the script. In a default OS X test VM, this would be “mac.local”, for example.

certreq = sys.stdin.read()
The actual contents of the CSR get passed in via stdin, so we need to read them and store them in a variable.

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

Here, we make an outside call to openssl. Puppet documentation shows that we can manually parse the CSR for the custom attributes using OpenSSL, so we’re going to do just that in a subprocess. We pass the contents of certreq to the subprocess’s stdin, so in essence we are piping the CSR contents into this command:
/usr/bin/openssl req -noout -text

Once we do some text parsing and line stripping (since the CSR is very noisy about linebreaks), we can pull out the first custom attribute, the serial number:
trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
If there’s no line in the CSR containing that data, that means the CSR didn’t have our csr_attributes.yaml installed (and is almost certainly not something we recognize, or at least not in a desired state and should be addressed). Thus, reject.

trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
Our second attribute is the Facter virtual fact. If we don’t find that either, then we still have an incorrect CSR, and thus it gets rejected.
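
If you want to see those OID lines for yourself, you can run the same openssl command by hand against a pending or rejected request on the Puppet master – the path below matches the one used in the Troubleshooting section later in this post:

docker exec puppetmaster openssl req -noout -text -in /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem | grep -A 1 '1.3.6.1.4.1.34380'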

if physical_fact == "virtual" or physical_fact == "vmware":
This was mostly for my own convenience, but I decided it was safe to Puppetize any virtual machine, such as a VMWare Fusion VM (or ESXi, or whatever). As VMs tend to be transient, I didn’t want to spend time approving these certs constantly as I spun test VMs up and down. Thus, they get autosigned.

elif physical_fact == "docker":
If it’s a Docker container getting Puppetized, autosign as well, for mostly the same reasons as above.

Once the CSR is parsed for its contents and some basic sanity checks are put into place, we can now actually talk to WebHelpDesk.
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)

Parse the .plist we passed into the Puppetmaster image earlier for the API key and URL of WebHelpDesk, and load up the API. Note the False at the end of the WHD() call – that’s to specify that we don’t want verbose logging. If you’re trying to debug behavior and want to see all the details in the log file, specify True here (or eliminate the extra arguments and just call whdcli.WHD(whd_prefs), since the other three arguments are optional).

if not w.getAssetBySerial(serial_number):
This is the real meat, right here – w.getAssetBySerial() is the function call that checks to see if the serial number exists in WebHelpDesk’s asset inventory. If this serial number isn’t found, the function returns False, and thus we reject the CSR by exiting with status code 1.

Putting It All Together:

So, we’ve got WebHelpDesk in a Docker image, using our customized Postgres. We’ve got our new-and-improved Puppetmaster with WHD-CLI. We’ve got our client configuration install package. We have all the pieces to make it work, let’s assemble it into a nice machine:

  1. First, run the data container for the Postgres database for WHD:
    docker run -d --name whd-db-data --entrypoint /bin/echo macadmins/postgres-whd Data-only container for postgres-whd

  2. Run the Postgres database for WHD:
    docker run -d --name postgres-whd --volumes-from whd-db-data -e DB_NAME=whd -e DB_USER=whddbadmin -e DB_PASS=password macadmins/postgres

  3. Run WebHelpDesk:
    docker run -d -p 8081:8081 --link postgres-whd:db --name whd macadmins/whd

  4. Configure WebHelpDesk via the browser to use the external Postgres database (see the penultimate section on Running WebHelpDesk in Docker for details).

  5. Once WebHelpDesk is set up and you’re logged in, you need to generate an API key. Go to Setup -> Techs -> My Account -> Edit -> API Key: “Generate” -> Save.

  6. Copy and paste the API key into com.github.nmcspadden.whd-cli.plist as the value for the “apikey” key. If you haven’t cloned the repo for this project, you can obtain the file itself:
    curl -O https://raw.githubusercontent.com/macadmins/puppetmaster-whdcli/master/com.github.nmcspadden.whd-cli.plist

  7. Create a data-only container for Puppetmaster-WHDCLI:
    docker run -d --name puppet-data --entrypoint /bin/echo macadmins/puppetmaster-whdcli Data-only container for puppetmaster

  8. Run Puppetmaster-WHDCLI. Note that I’m passing in the absolute path to my whd-cli.plist file, so make sure you alter the path to match what’s on your file system:
    docker run -d --name puppetmaster -h puppet -p 8140:8140 --volumes-from puppet-data --link whd:whd -v /home/nmcspadden/com.github.nmcspadden.whd-cli.plist:/home/whdcli/com.github.nmcspadden.whd-cli.plist macadmins/puppetmaster-whdcli

  9. Complete the Puppetmaster setup:
    docker exec puppetmaster cp -Rf /etc/puppet /opt/

  10. Configure a client:

    1. Install Facter, Hiera, and Puppet on an OS X VM client (or any client, really – but I tested this on a 10.10.1 OS X VM).
    2. Install the CSRAttributes.pkg on the client.
    3. If your Puppetmaster is not available in the client’s DNS, you’ll need to add the IP address of your Docker host to /etc/hosts.
    4. Open a root shell (it’s important to run the Puppet agent as root for this test):
      sudo su
    5. Run the Puppet agent as root:
      # puppet agent --test
    6. The VM should generate a certificate signing request and send it to the Puppet master, which parses the CSR, notices that it’s a virtual machine, autosigns it, and sends the cert back.
  11. You can check the autosign script’s log file on the Puppetmaster to see what it did:
    docker exec puppetmaster tail -n 50 /var/log/check_csr.out

Here’s sample output from a new OS X VM:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: vmware
INFO:__main__:Virtual machine gets autosigned.

Here’s sample output from that same VM, but I manually changed /etc/puppet/csr_attributes.yaml so that the virtual fact is “physical”:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: physical
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): whd
INFO:__main__:Serial number not found in inventory.

Try this on different kinds of clients: Docker containers (a good candidate is the Munki-Puppet container which needs to run Puppet to get SSL certs), physical machines, other platforms. Test it on a machine that is not in WebHelpDesk’s inventory and watch it get rejected from autosigning.

Troubleshooting:

Manually run the script:

If you get a CSR that gets rejected and you’re not sure why, you can manually run the check_csr.py script itself on the rejected (or rather, disapproved) CSR .pem file. Assuming the hostname is “testvm.local”:
Run docker exec -it puppetmaster /bin/bash to open a Bash shell on the container, then:
cat /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem | /opt/puppet/check_csr.py "testvm.local"

Then, you can check the logs to see what the output of the script is. Assuming you’re still in the Bash shell on the container:
tail -n 50 /var/log/check_csr.out

Test WHDCLI:

If you’re running into unexpected failures with the autosigning scripts, or you’re not getting the results you expect, you can try manually running the WHDCLI to see where the problem might be:
docker exec -it puppetmaster /usr/bin/python
Once you’re in the Python interpreter, load up WHD-CLI:

>>> import whdcli
>>> whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
>>> w = whdcli.WHD(whd_prefs)

If you get a traceback here, it’ll tell you the reason why it failed – perhaps a bad URL, bad API key, or some other HTTP authentication or access failure. Embarrassingly, in my first test, I forgot to Save in WebHelpDesk after generating an API key, and if you don’t hit the Save button, that API key disappears and never gets registered to your WHD account.

Assuming that succeeded, try doing a manual serial lookup, replacing it with an actual serial number you’ve entered into WHD:

>>> w.getAssetBySerial("serial")

The response here will tell you what to expect – did it find a serial number? It’ll give you asset details. Didn’t find a match? The response is just False.

Conclusions

Important Note: Although this post makes use of Docker as the basis for all of these tools, you can use the WHD-CLI script with a non-Dockerized Puppet master to accomplish the same thing. You’d just need to change the WHD URL in the whd-cli.plist file.

One of the best aspects of Docker is that you can take individual pieces, these separate containers, and combine them into amazing creations. Just like LEGO or Minecraft, you take small building blocks – a Postgres database, a basic Nginx server, a Tomcat server – and then you add features. You add parts you need.

Then you take these more complex pieces and link them together. You start seeing information flow between them, and seeing interactions that were previously more difficult to set up in a non-Docker environment.

In this case, we took separate pieces – WebHelpDesk, its database, and Puppetmaster, and we combined them for great effect. Combine this again with Munki-Puppet and now you’ve got a secure Munki SSL environment with your carefully curated Puppet signing policies. There are more pieces we can combine later, too – in future blog posts.

Customizing Postgres in Docker

The Docker registry provides an official Postgres container that can be used in a wide variety of situations. However, at its default configuration, Postgres does not accept remote connections from other IP addresses.

This presents a problem when it’s used with something like WebHelpDesk, which needs a Postgres database it can connect to remotely.

To address this issue, we’re going to create a new Docker container based on Postgres that changes the appropriate settings in pg_hba.conf.

The repo for this project can be found here.

The Dockerfile

FROM postgres
ENV DB_NAME database
ENV DB_USER admin
ENV DB_PASS password
ADD setupRemoteConnections.sh /docker-entrypoint-initdb.d/setupRemoteConnections.sh
RUN chmod 755 /docker-entrypoint-initdb.d/setupRemoteConnections.sh
ADD setup-database.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setup-database.sh

FROM postgres
Start with the official Postgres image.

ENV DB_NAME database
ENV DB_USER admin
ENV DB_PASS password

These three environment variables are going to be passed to the setup-database.sh script. This technique was originally developed by Graham Gilbert for his Postgres Docker container for use with Sal. The purpose of these environment variables and the script is to create a starting database for us to use.

ADD setupRemoteConnections.sh /docker-entrypoint-initdb.d/setupRemoteConnections.sh
RUN chmod 755 /docker-entrypoint-initdb.d/setupRemoteConnections.sh

Add in the script to set up remote connections. This script lives in /docker-entrypoint-initdb.d/, a special directory the official Postgres image provides for extra initialization tasks. All scripts located in this directory get run when the database is first initialized – thus we can apply our Postgres configuration changes automatically. The chmod sets correct execute permissions on the script.

The setupRemoteConnections.sh script contains only one line of code:
sed -i '/host all all 127.0.0.1\/32 trust/a host all all 172.17.0.1\/16 trust' /var/lib/postgresql/data/pg_hba.conf
This sed statement modifies /var/lib/postgresql/data/pg_hba.conf to allow remote connections – see the documentation here. It simply adds 172.17.0.1/16 to the list of trusted addresses that are allowed to connect to the database. The 172.17.0.1/16 range covers Docker IP addresses – thus giving any other Docker container the necessary access. This is specifically intended for use with WebHelpDesk, which requires it.
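
Once a container built from this image has initialized its data directory, you can confirm the rule was added (assuming a container named postgres-test, as created later in this post):

docker exec postgres-test grep '172.17' /var/lib/postgresql/data/pg_hba.conf

You should see the host all all 172.17.0.1/16 trust line that the sed statement inserted.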

ADD setup-database.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setup-database.sh

This script, as mentioned previously, was shamelessly stolen from Graham Gilbert – thanks for that. It sets up the initial database with correct role and privileges, using the environment variables.

The setup-database.sh script looks like this:

TEST=`gosu postgres postgres --single <<- EOSQL
   SELECT 1 FROM pg_database WHERE datname='$DB_NAME';
EOSQL`
echo "******CREATING DOCKER DATABASE******"
if [[ $TEST == "1" ]]; then
    # database exists
    # $? is 0
    exit 0
else
gosu postgres postgres --single<<- EOSQL
   CREATE ROLE $DB_USER WITH LOGIN ENCRYPTED PASSWORD '${DB_PASS}' CREATEDB;
EOSQL
gosu postgres postgres --single <<- EOSQL
   CREATE DATABASE $DB_NAME WITH OWNER $DB_USER TEMPLATE template0 ENCODING 'UTF8';
EOSQL
gosu postgres postgres --single <<- EOSQL
   GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;
EOSQL
fi
echo ""
echo "******DOCKER DATABASE CREATED******"

The script executes on startup of the Docker container, and sets up the database we need to use by reading in the environment variables we pass in using the -e argument to docker run. We’ll see this in action later.

With this Dockerfile, and with our scripts ready, the container is ready to be built:
docker build -t name/postgres .
Or you can pull the automated build from the registry:
docker pull macadmins/postgres

Using the new Postgres database:

Running the database container by itself without anything special is simple:
docker run -d -e DB_NAME=db -e DB_USER=admin -e DB_PASS=password macadmins/postgres

Of course, we probably want to do a bit more than that. For example, it’s much easier to test and work with the container if we name it something we can refer to easily:
docker run -d -e DB_NAME=db -e DB_USER=admin -e DB_PASS=password --name postgres-test macadmins/postgres

You can then check to see if your database exists the way you expect by using psql:
docker exec -it postgres-test psql --dbname db --username admin
You’ll be at the psql prompt, and you can type \l to list all the databases. You’ll see the one we created, db, in the list (which does not format well, so I haven’t tried to copy and paste it here).
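
If you’d rather have a quick, script-friendly check than the full \l table, you can ask for just the database names (same container and credentials as above):

docker exec postgres-test psql --username admin --dbname db -t -c 'SELECT datname FROM pg_database;'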

Looks good, right? Now the database is ready to be used by any application or service by linking the customized Postgres container to another one. We’ll do an example of this in another post.

An Alternative

Dockerfiles are pretty neat things. They allow us to do fun stuff, like take someone else’s image as a base and build more stuff on top of it. This is the basis for nearly all of the images I use – find someone else who did the hard work, like installing Nginx, or Apache, or a database like Postgres or MySQL, and then add the pieces I need to get the results I want.

I pointed out earlier that Graham Gilbert already has a great Postgres container that incorporates the database setup script. All I’m doing differently is configuring Postgres to allow remote connections as well.

So as an alternative to customizing from the base Postgres image, we can try just adding our changes to Graham’s Postgres container. It makes for a smaller Dockerfile:
FROM grahamgilbert/postgres
ENV DB_NAME database
ENV DB_USER admin
ENV DB_PASS password
ADD setupRemoteConnections.sh /docker-entrypoint-initdb.d/setupRemoteConnections.sh
RUN chmod 755 /docker-entrypoint-initdb.d/setupRemoteConnections.sh

We can build this alternative version with a new tag:
docker build -t nmcspadden/postgres:alt .

Which is better?

If we look purely at image file size, we see that both our original customized version, and the version based off of Graham’s come out the same:
bash-3.2$ docker images
REPOSITORY            TAG      IMAGE ID       CREATED          VIRTUAL SIZE
nmcspadden/postgres   alt      21bbfdc5857f   49 seconds ago   213.4 MB
macadmins/postgres    latest   2e2d24011a32   2 days ago       213.4 MB

So there’s no particular size advantage to doing it the “alternative” way, except that the Dockerfile is a bit smaller and we do less wheel-reinvention.

But there are some pros and cons, and it depends on what you want to do.

By using Graham’s Postgres container, it’s easy for us to set up an automated build on the Docker registry that rebuilds ours every time Graham rebuilds his. If he ever updates his database setup script, or updates his container for any reason, ours gets rebuilt to incorporate his changes. This is both a pro and a con, because it means that our Postgres database’s behavior might change in the latest build without our knowledge (or approval). If Graham decides to change his container to do something different at startup, and we’re designing our Postgres databases around assuming a specific behavior happens at startup every time, we could be in trouble if an unexpected change occurs.

On the other hand, if it turns out we do want to incorporate Graham’s changes automatically, using his image as the basis for our build saves a lot of time – I don’t have to make any changes to my Dockerfile to incorporate those upstream changes into our builds.

The shorter answer for me is that I need to make sure the database setup occurs the way I expect it to every time I use it – and thus I manually recreate Graham’s setup in my own Dockerfile. That way Graham can make any changes to his Dockerfile he wants and it won’t affect our own builds.

Ultimately, we end up with a Postgres database (which accepts remote connections on startup) that we can easily use again and again.

Building WebHelpDesk in Docker

WebHelpDesk by SolarWinds is a ticketing and asset-tracking system. I use it specifically for its asset-tracking capabilities. One of its many perks is that it can aggregate inventory from other databases through its Discovery Connections – a feature I’ll make heavy use of in later posts.

In this post, I’m going to build WebHelpDesk into a Docker image, for portability and simplicity. This is based on the RHEL rpm installed on a CentOS 6 base.

The Dockerfile:

Here’s the Dockerfile:

FROM centos:centos6  
ADD http://downloads.solarwinds.com/solarwinds/Release/WebHelpDesk/12.2.0/webhelpdesk-12.2.0-1.x86_64.rpm.gz /webhelpdesk.rpm.gz 
RUN gunzip -dv /webhelpdesk.rpm.gz
RUN yum install -y /webhelpdesk.rpm && rm /webhelpdesk.rpm && yum clean all
RUN cp /usr/local/webhelpdesk/conf/whd.conf.orig /usr/local/webhelpdesk/conf/whd.conf
RUN sed -i 's/^PRIVILEGED_NETWORKS=[[:space:]]*$/PRIVILEGED_NETWORKS=172.17.42.1/g' /usr/local/webhelpdesk/conf/whd.conf  
ADD run.sh /run.sh  
ADD supervisord.conf /home/docker/whd/supervisord.conf  
RUN yum install -y python-setuptools  
RUN easy_install supervisor  
RUN yum clean all  
EXPOSE 8081  
CMD ["/run.sh"]

We’re starting with CentOS 6, and building from there.

ADD http://downloads.solarwinds.com/solarwinds/Release/WebHelpDesk/12.2.0/webhelpdesk-12.2.0-1.x86_64.rpm.gz /webhelpdesk.rpm.gz 
RUN gunzip -dv /webhelpdesk.rpm.gz
RUN yum install -y /webhelpdesk.rpm && rm /webhelpdesk.rpm && yum clean all

First, ADD pulls down the compressed RPM (why is it compressed? I don’t know – according to support they have no plans to change that at this time), and gunzip decompresses it. Then we use yum to install the RPM.

RUN cp /usr/local/webhelpdesk/conf/whd.conf.orig /usr/local/webhelpdesk/conf/whd.conf
Copy the default configuration into place.

RUN sed -i 's/^PRIVILEGED_NETWORKS=[[:space:]]*$/PRIVILEGED_NETWORKS=172.17.42.1/g' /usr/local/webhelpdesk/conf/whd.conf
This adds the Docker IP range to the list of privileged networks, thus giving any Docker IP access to update the database settings.
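
Once the image is built (next section), you can confirm the substitution took by running a throwaway container that overrides the normal startup command and just prints the resulting line:

docker run --rm macadmins/whd grep PRIVILEGED_NETWORKS /usr/local/webhelpdesk/conf/whd.conf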

ADD run.sh /run.sh
ADD supervisord.conf /home/docker/whd/supervisord.conf
RUN yum install -y python-setuptools
RUN easy_install supervisor

We’re going to use supervisord to control and manage WebHelpDesk’s Tomcat-based web server. Supervisord will be in charge of starting the process and watching it to make sure it runs, which means that it will be the container’s primary process. The run.sh script is what will trigger the supervisord kickoff. We need to install supervisord using easy_install, which requires the Python setuptools.

EXPOSE 8081
By default, WebHelpDesk runs (unsecured) on port 8081. It can be configured to use port 8443 with SSL, but that’s a project for a later date.

CMD ["/run.sh"]
The CMD directive tells Docker to kick off the “run.sh” script on startup.

Build the Container:

You can find the repo for this project here. You can build the project yourself:
docker build -t name/whd .
Or you can pull the automated build from the Docker registry:
docker pull macadmins/whd

Running WebHelpDesk in a Docker container will be covered in the next post.

Building Munki with Puppet for SSL Client Certificates

Note: this is based on the README for the Munki-SSL docker container.

In a previous post, we ran a Docker container serving Munki repo content via Nginx. That works fine, but it only serves insecure HTTP content. It’s generally in everyone’s best interest to use a secure connection between the Munki web server and its clients, and the Munki documentation describes two ways to do that in detail: either basic authentication or SSL client certificates.

Using SSL client certificates is not a trivial matter, as it means setting up your own CA, or using a third-party CA, to generate your client certificates – a fair amount of work on both the server and client ends.

Thankfully, Sam Keeley has already written a great article about using Puppet, a configuration management tool, as the cornerstone for client-server certificate-based communication: https://www.afp548.com/2014/06/02/securing-a-munki-deployment-with-puppet-ssl-certificates/. I’ll be using this article as the basis for our configuration.

The general idea is that Puppet has its own CA, and it installs client certificates on each client you register to it, to guarantee secure communication between the Puppet master server and the clients. We can use the Puppet client certs to allow the Munki clients to communicate with the Munki webserver through certificate-based SSL.

We can simplify this process even further with the open-source Puppet built into a Docker container. See the build guide for a Docker Puppetmaster here.

Assuming you’ve got a running Puppetmaster container, let’s proceed to modify our Munki container to accommodate Puppet.

This is the process that was used to construct the macadmins/munki-puppet container, which is based on a branch of the original Munki container by groob.

Creating a Munki container that incorporates Puppet:

We have an existing Munki image, so we can use that as a base and just install Puppet on it. Here’s the Dockerfile:

FROM nmcspadden/munki
MAINTAINER Nick McSpadden nmcspadden@gmail.com
ENV PUPPET_VERSION 3.7.3
RUN apt-get update
RUN apt-get install -y ca-certificates
ADD https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb /puppetlabs-release-wheezy.deb
RUN dpkg -i /puppetlabs-release-wheezy.deb
RUN apt-get update
RUN apt-get install -y puppet=$PUPPET_VERSION-1puppetlabs1
ADD csr_attributes.yaml /etc/puppet/csr_attributes.yaml

The great thing about Dockerfiles is that they can be inherited from. Using the FROM directive, we take everything accomplished in the Munki container, and simply build on top of it.

ENV PUPPET_VERSION 3.7.3
Here, the puppet version is explicitly set so that we can get the same behavior every time we build the Docker image.

RUN apt-get update
RUN apt-get install -y ca-certificates
ADD https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb /puppetlabs-release-wheezy.deb
RUN dpkg -i /puppetlabs-release-wheezy.deb
RUN apt-get update
RUN apt-get install -y puppet=$PUPPET_VERSION-1puppetlabs1

In order to install the updated version of Puppet, we need to add the correct Puppetlabs apt repo and GPG key. The Munki container is based on Debian wheezy, so we acquire the appropriate repo package by ADDing it, and then use dpkg to install it. We’ll also grab the updated ca-certificates package to guarantee that the certificate used by https://apt.puppetlabs.com/ is trusted.

Once we’ve installed the correct Puppetlabs repo and GPG key, we can install Puppet from that repo.

ADD csr_attributes.yaml /etc/puppet/csr_attributes.yaml
The CSR attributes file is described on Puppet’s website. It’s used to add extra information from a client to the CSR (Certificate Signing Request) generated on the client when the puppet agent runs for the first time and tries to get a certificate from the Puppetmaster. It can be used to write scripts that sign certificates based on certain kinds of information or policies.

In our Puppetmaster container, CSR requests get automatically signed, so this isn’t important yet. In our next iteration, we’ll see how we can put extra information from the client to use in policy-based request signing.

Building the Container:

You can either build the container by git cloning the repo:
docker build -t name/munki-puppet .
or you can pull the automated build container:
docker pull macadmins/munki-puppet

Once you’ve got the container built, let’s run it.

Building a Puppetmaster with Docker

This is based on the README I wrote for the macadmins/puppetmaster image.

Puppet is an industrial-strength cross-platform configuration management engine. Though you’ll find lots of existing Puppetmaster images on the Docker registry, this one will serve as the baseline for other expanded uses of Puppet – such as using it with Munki and SSL client certificates.

This is a walkthrough for building a Puppetmaster Docker container.

Building Puppetmaster into a Docker container:

Let’s start with our Dockerfile:

FROM centos:centos6

MAINTAINER nmcspadden@gmail.com

ENV PUPPET_VERSION 3.7.3

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y yum-utils && yum-config-manager --enable centosplus >& /dev/null
RUN yum install -y puppet-$PUPPET_VERSION
RUN yum install -y puppet-server-$PUPPET_VERSION
RUN yum clean all
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/opt/puppet"]
RUN cp -rf /etc/puppet/* /opt/puppet/
VOLUME ["/opt/varpuppet/lib/puppet"]
RUN cp -rf /var/lib/puppet/* /opt/varpuppet/lib/puppet/
EXPOSE 8140
ENTRYPOINT [ "/usr/bin/puppet", "master", "--no-daemonize", "--verbose" ]

FROM centos:centos6
This is based on a CentOS 6 container, for no other reason than that I’m already familiar with CentOS 6. I could just as easily have used Ubuntu or Debian or any of the other base image variants. I chose CentOS 6 over 7 due to major changes in 7 (such as the inclusion of systemd) that I’m not quite familiar with.

ENV PUPPET_VERSION 3.7.3
This makes it easy for me to enforce a specific version of Puppet in my Dockerfile. As of this writing, open-source Puppet is at 3.7.3, so that’s what we’ll install. It also means that future builds of this Dockerfile won’t suddenly change or update Puppet without specific admin intervention, and that we can easily update the Puppet version by making only one change in the Dockerfile.

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

As per Puppet’s installation instructions, we need to add the GPG key for Puppetlab’s yum repo, and then install the yum repo to get the updated Puppet from.

RUN yum install -y yum-utils && yum-config-manager --enable centosplus >& /dev/null
RUN yum install -y puppet-$PUPPET_VERSION
RUN yum install -y puppet-server-$PUPPET_VERSION
RUN yum clean all

Install the yum utils, install puppet, install the puppet server, clean up after ourselves.

ADD puppet.conf /etc/puppet/puppet.conf
Add in the Puppet configuration file. That will be explored later.

VOLUME ["/opt/puppet"]
RUN cp -rf /etc/puppet/* /opt/puppet/
VOLUME ["/opt/varpuppet/lib/puppet"]
RUN cp -rf /var/lib/puppet/* /opt/varpuppet/lib/puppet/

Here, we expose two volumes for sharing – /opt/puppet/ and /opt/varpuppet/lib/puppet/. These are important, as we’ll configure Puppet to use these for configuration and dynamic data in puppet.conf. That way, we can share this out to data-only containers so we don’t lose anything if we ever remove the Puppetmaster container.

In addition to creating those two directories, we’re also copying the contents from the default directories for Puppet (/etc/puppet/ and /var/lib/puppet/) into our new alternate directories, so they’ll be pre-populated.

EXPOSE 8140
Puppet runs on port 8140, so we need that port available to the outside world.

ENTRYPOINT [ "/usr/bin/puppet", "master", "--no-daemonize", "--verbose" ]
The entrypoint starts Puppet in master mode, with verbose logging, as the primary process in the container. This is what allows us to run the container in detached mode, and easily check the logs.
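
For example, once the container is running under the name puppetmaster (as shown below), you can follow Puppet’s output with:

docker logs -f puppetmaster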

The Puppet Configuration file:

[agent]
    certname        = puppetmaster
    pluginsync      = true

[master]
    certname        = puppet
    confdir         = /opt/puppet
    vardir	    = /opt/varpuppet/lib/puppet
    basemodulepath  = $confdir/site-modules:$confdir/modules:/usr/share/puppet/modules
    factpath        = $confdir/facts:/var/lib/puppet/lib/facter:/var/lib/puppet/facts
    autosign        = true
    hiera_config    = $confdir/hiera.yaml
    rest_authconfig = $confdir/auth.conf
    ssldir          = $vardir/ssl
    csr_attributes  = $confdir/csr_attributes.yaml

The important things about the Puppet.conf to notice:

  1. confdir and vardir are set to custom directories located in /opt/. We shared these volumes earlier with the VOLUME directives in the Dockerfile, so that means this data will exist in a shareable form that can be linked to other Docker containers.

  2. autosign is true. That means all client certificates will be automatically signed on request. This makes a fine default for testing, but for production use, we’ll want to change this.

  3. csr_attributes is set to a file called “csr_attributes.yaml” which exists in /opt/puppet/. This isn’t necessary for this particular demo, but it’ll play a part in the next iteration of our Puppet docker container.

You can build this container yourself (if you git clone the project) using:
docker build -t name/puppetmaster .
or you can pull the automated build from the Docker registry:
docker pull macadmins/puppetmaster

To use this container:

As always, we want a data-only container to keep all of Puppet’s configuration and dynamic data in. This is especially important as we need to preserve the Puppet certificates so that they’re not lost if the Puppetmaster container is removed:

docker run -d --name puppet-data --entrypoint /bin/echo macadmins/puppetmaster Data-only container for puppetmaster

Now run the container:
docker run -d --name puppetmaster -h puppet -p 8140:8140 --volumes-from puppet-data macadmins/puppetmaster
Here, we’ve set the hostname to “puppet.” The puppet agent will always try to reach the puppet master via DNS at “puppet”, so we’ll need to make that happen in DNS. As explained earlier, we’re mapping the container’s port 8140 to port 8140 on the Docker host.

There’s a critical step that needs to happen next:
docker exec puppetmaster cp -Rf /etc/puppet /opt/
We need to populate the Puppet configuration directory with all the of the content in /etc/puppet/. Because of the order in which puppet.conf is read, /etc/puppet is populated with the default Puppet setup before it discovers that there’s a new confdir directive. Thus, /opt/puppet, while being used as the confdir for all Puppet configurations, does not start out with content in it. We need to fix that manually.

Puppet has started, let’s see a list of certs:
docker exec puppetmaster puppet cert list -all
Only one certificate exists so far – the puppetmaster itself.

Puppetizing a client:

I did all this on an OS X 10.10.1 VM, but this will work on any client.

  1. Install Puppet, Hiera, and Facter onto a client.

  2. Add the IP of your Docker host to /etc/hosts (or configure DNS so that your Docker host is reachable at “puppet”). For example, if your Docker host IP is 10.0.0.1:
    10.0.0.1 puppet

  3. Test puppet on client running as root:
    # puppet agent --test
    You should see the certificate request being generated and autosigned.

  4. Verify cert signing on puppetmaster docker container:
    docker exec puppetmaster puppet cert list -all
    You should see the certificate for the client’s hostname in the list.

  5. On the client, run:
    # puppet agent --test again to verify that the cert exists and was confirmed.

Conclusion

We’ve got a working Puppet master in a Docker container! This will make a baseline Docker image we can expand upon.

Building Munki with Docker

Munki is an incredible tool for Mac software deployment, and the setup process is fairly straightforward – configure a web server, create your repo, run the tools to populate it with software, and configure clients.

It’s the “configure a web server” aspect that may give some pause, as setting up and configuring Apache or Nginx has a bit of a learning curve, and many OS X admins may not necessarily have access to or control over network infrastructure or server infrastructure to do this easily, especially in education.

Docker simplifies this process quite a bit, because you can create a simple webserver with only a few commands that will give you exactly what you need to start serving Munki content – and you can easily transport this Docker webserver from host to host, as it’s designed to be portable and self-contained.

In this example, we’ll use my fork of the Macadmins/munki Docker image as the basis for the build.

Start by cloning the repo:
git clone https://github.com/macadmins/docker-munki.git

The Dockerfile

Now, let’s take a look at the Dockerfile:

FROM nginx
RUN mkdir -p /munki_repo
RUN mkdir -p /etc/nginx/sites-enabled/
ADD nginx.conf /etc/nginx/nginx.conf
ADD munki-repo.conf /etc/nginx/sites-enabled/
VOLUME /munki_repo
EXPOSE 80

Let’s look at this line by line:
FROM nginx
This means that we’re using the official Nginx Docker image. It’s done all the hard work for us – it has Nginx already installed. All we need to do is add our pieces.

RUN mkdir -p /munki_repo
RUN mkdir -p /etc/nginx/sites-enabled/
RUN just runs a command – and in this case, we’re just creating two directories. We’ll use /munki_repo to serve the actual Munki repo itself, and we’ll be adding configuration data into /etc/nginx/sites-enabled/.

ADD nginx.conf /etc/nginx/nginx.conf
ADD munki-repo.conf /etc/nginx/sites-enabled/
ADD copies files into the Docker image. We provided these two .conf files ahead of time, as they’re going to tell Nginx how to serve our content.

Let’s look at nginx.conf:

worker_processes 1;
http {
    include /etc/nginx/sites-enabled/*;
}
events {
  worker_connections 768;
}

This is a fairly straightforward configuration file that just tells Nginx to also include any configuration files stored in /etc/nginx/sites-enabled, which we created earlier with the RUN command.

Look at munki-repo.conf:

server {
  listen 80;
  server_name munki;
  location /repo/ {
    alias /munki_repo/;
    autoindex off;
  }
}

This configuration file tells Nginx to listen on port 80, the default web port, and sets the expected server_name to “munki”. We’re using the location /repo/, which means that content will be served from http://munki/repo (the default location that Munki expects), but /repo/ is actually an alias to the path /munki_repo – which we created earlier in a RUN command with mkdir. Lastly, we’re turning off folder indexing, so visitors can’t list the contents of our directories.
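
For reference, http://munki/repo is also what clients will use. On OS X, a client can be pointed at it explicitly with something like the following (this matches Munki’s default SoftwareRepoURL, so it’s only strictly necessary if you serve the repo at a different hostname or port):

sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "http://munki/repo"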

VOLUME /munki_repo
This uses a Docker technique to “expose” a volume. More details about Docker volumes can be found here and here. The short explanation, without going into too much detail, is that this volume, /munki_repo, can be linked to other Docker containers, and the data inside can be accessed easily.

EXPOSE 80
The EXPOSE directive opens a port to the outside world. It means that this container will serve content on port 80, and thus can be accessed by its container ID, IP address, or DNS name at port 80 – the default web port. Since this is a web server, using port 80 is logical.

Building the Image

Now we can build this image. First navigate into the directory with the Dockerfile on your host:
cd docker-munki

Run the build command. Feel free to change the name to anything you want:
docker build -t "nmcspadden/munki" .

When it completes the build, you’ll have a new image called “nmcspadden/munki”. Use docker images to see it – you’ll see that it’s marked with the tag “latest” to indicate that it’s the most up to date version.

Running the image is covered in the next post.