Running Munki with Chef SSL Client Certificates in Docker

Previously, I wrote about building a Docker container for Munki with Chef installed. Having built that container, it’s now time to put it to use.

Assuming you’ve got a working Chef server set up, we can run our Munki-with-Chef container and register it.

Preparing the Server:

First, we need to set up the CA we’re going to use with Munki, via Chef. This also assumes you have a Chef repo in ~/chef-repo/ that is set up according to the Chef documentation.

I’ve described the general process in a previous blog post here, but I’ve changed enough of it that I’m going to repeat a lot of it here.

  1. On the server/workstation, download the cookbook:
    knife cookbook site install x509
  2. Now delete it and clone my version:
    git clone --branch development https://github.com/nmcspadden/chef-cookbook-ssl.git x509
  3. Clone the MunkiSSL cookbook from Github:
    git clone https://github.com/nmcspadden/chef-cookbook-munkiSSL munkiSSL
  4. Upload all the cookbooks to the server:
    knife cookbook upload -a
  5. Create the ‘certificates’ data bag:
    knife data bag create certificates
  6. Create the CA (I’m storing it in my home directory for this example):
    chef-ssl makeca --dn '/CN=ChefCA' --ca-path /home/nmcspadden/chefCA
    Pick any passphrase you want.

Running the Container:

In this blog post, I’m going to call this container “munki2” so as not to interfere with my existing Munki container.

Prepare a data-only container to keep our data in:
docker run -d --name munki2-data --entrypoint /bin/echo nmcspadden/munki-chef Data-only container for munki2

Run the container:
docker run -d --name munki2 --volumes-from munki2-data -p 443:443 -h munki2.domain.com nmcspadden/munki-chef

This run command maps port 443 for SSL connections and uses the munki2-data data container to access the repo. I’ve also set the hostname manually, using the -h option.

If you are just testing this out and don’t have your Chef server entered into your DNS, you can fix that inside the container using the --add-host option like so:
docker run -d --name munki2 --volumes-from munki2-data -p 443:443 -h munki2.domain.com --add-host chef.domain.com:10.0.0.1 nmcspadden/munki-chef

The first step after running the container is to check in with Chef:
docker exec munki2 /usr/bin/chef-client --force-logger --runlist "recipe[x509::munki2_server]"

Here, we’re using the [x509::munki2_server] recipe to generate a private key and send a CSR to the Chef CA.

On the Chef server or workstation, you’ll need to sign the CSR:
chef-ssl autosign --ca-name="ChefCA" --ca-path=/home/nmcspadden/chefCA

Back on the Docker host, run the x509::munki2_server recipe again to receive the signed certificate:
docker exec munki2 /usr/bin/chef-client --force-logger --runlist "recipe[x509::munki2_server]"

You can verify the certificate’s existence on the Chef server / workstation:
knife search certificates "host:munki2.domain.com" -a dn

Now that the certificates are present, it’s time to add in the new Nginx config file to tell the webserver to use client certificates:
cat munki-repo-ssl.conf | docker exec -i munki2 sh -c 'cat > /etc/nginx/sites-enabled/munki-repo.conf'

The Nginx configuration looks like this:


server {
  listen 443;
  ssl on;
  ssl_certificate /etc/ssl/munki2.domain.com.crt;
  ssl_certificate_key /etc/ssl/munki2.domain.com.key;
  ssl_client_certificate /etc/ssl/munki2_ca.crt;
  ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
  ssl_prefer_server_ciphers on;
  ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
  ssl_verify_client on;
  server_name munki;
  location /repo/ {
    alias /munki_repo/;
    autoindex off;
  }
}

The three important file paths that must be correct are ssl_certificate, ssl_certificate_key, and ssl_client_certificate. If any of these paths are wrong or can’t be found, Nginx will not start and your Docker container will immediately halt.

For reference, the ssl_protocols and ssl_ciphers are configured for perfect forward secrecy.

Otherwise, the configuration for Nginx for the Munki repo remains the same as the non-SSL version – we’re serving the file path /munki_repo as https://munki2/repo/.
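
Because a bad certificate path will keep Nginx from starting at all, it’s worth validating the new configuration from inside the still-running container before restarting. Nginx’s built-in config test does this without reloading anything:
docker exec munki2 nginx -t

If it reports an error, fix the paths in munki-repo-ssl.conf and push the file into the container again before moving on.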

Now restart the container to reload the Nginx configuration:

docker stop munki2 && docker start munki2

We now have a working Munki server on port 443; next, the repo needs to be populated.
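
One quick way to seed it – a sketch that assumes your existing repo lives at /path/to/local/munki_repo on the Docker host and that the image shares the repo at /munki_repo (the path the Nginx config above serves) – is to copy it in through a throwaway container:
docker run --rm --volumes-from munki2-data -v /path/to/local/munki_repo:/source:ro debian cp -R /source/. /munki_repo/

Anything copied into the volume this way survives the munki2 container being stopped, removed, or rebuilt, since the data lives in munki2-data.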

Configure the Clients to use Munki with SSL certificates:

If you are testing this out, you probably don’t have munki2 or your Chef server in DNS. You’ll need to add them to the /etc/hosts file on your clients first:
10.0.0.1 munki2 munki2.domain.com
10.0.0.2 chef chef.domain.com

Detailed instructions on configuring Munki with SSL certificates can be found on the official wiki, but I’m going to recreate the steps here.

  1. On the client, run the [x509::munki2_client] recipe to generate a CSR:
    sudo chef-client --runlist "recipe[x509::munki2_client]"
  2. On the Chef Server or workstation, use chef-ssl to sign the CSR:
    chef-ssl autosign --ca-name="ChefCA" --ca-path=/home/nmcspadden/chefCA
  3. On the OS X client, run the recipe again to receive the signed certificate:
    sudo chef-client --runlist "recipe[x509::munki2_client]"
  4. With the client set up with its certificate, now it’s time to configure Munki. Run the [munkiSSL::munki] recipe:
    sudo chef-client --runlist "recipe[munkiSSL::munki]"
    This recipe copies the client certificates from /etc/ssl/ into /Library/Managed Installs/certs/ where Munki can use them.
  5. Change the ManagedInstalls.plist defaults:
    1. sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "https://munki2/repo"
    2. sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoCACertificate "/Library/Managed Installs/certs/ca.pem"
    3. sudo defaults write /Library/Preferences/ManagedInstalls ClientCertificatePath "/Library/Managed Installs/certs/clientcert.pem"
    4. sudo defaults write /Library/Preferences/ManagedInstalls ClientKeyPath "/Library/Managed Installs/certs/clientkey.pem"
    5. sudo defaults write /Library/Preferences/ManagedInstalls UseClientCertificate -bool TRUE
  6. Finally, test out the client:
    sudo /usr/local/munki/managedsoftwareupdate -vvv --checkonly
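
Step 6’s -vvv output will usually tell you what’s wrong; for later background runs, the same information lands in Munki’s log, which you can inspect with:
sudo tail -n 50 /Library/Managed\ Installs/Logs/ManagedSoftwareUpdate.log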

MacBrained.org: Introducing Docker to Mac Management

Recently, I presented at the MacBrained.org February 2015 Meet-up at Facebook.

The topic was (and this should not come as a huge shock to anyone):
Introducing Docker to Mac Management.

A full recap of the event can be found on MacBrained.org.

I’d like to point out that most of the presentation was demoing actual Docker usage, and I didn’t cover a good chunk of the slides – so there’ll be some material in the slides I didn’t talk about.

The Keynote presentation, zip file of presentation, and PDF can be found on GitHub:
https://github.com/nmcspadden/Presentations/tree/master/MacBrained%20Feb%202015

Slide deck, thanks to MacBrained.org for posting (originally posted on SlideShare).

A recording of the event, thanks to MacBrained.org for posting the links (originally posted on Ustream).

Mike Arpaia’s talk on osquery was fascinating, and I’m already thinking of ways to incorporate it…

Incorporate Sal and JSS Data Into WebHelpDesk Inventory, using Docker

In previous posts, I covered running WebHelpDesk and its customized Postgres database in Docker.

WebHelpDesk, among its other features, makes a great inventory aggregator thanks to its discovery connections, which can easily pull inventory data from any flat database. Sal is a reporting engine for Munki that collects inventory data about OS X Munki clients, and JAMF Casper (referred to as Casper or “JSS” from here on out), acting as our iOS MDM, stores inventory data about iOS clients.

We can set up scripts to pull data from Sal, using Sal-WHDImport, and from Casper, using JSSImport. This makes for a great triangle: inventory from both sources gets aggregated into WebHelpDesk, and with Docker it’s relatively trivial to set up.

To save some time, I’ve incorporated the Sal-WHDImport script into Sal itself in a Dockerfile, available as the Sal-WHD container. We’ll be using this container below.

I’ve done the same thing with the JSSImport script, creating the JSSImport container.

Preparing Data Files:

Sal requires some modification in order to talk to WebHelpDesk. We’re going to use a plugin Graham Gilbert wrote called WHDImport to sync the Sal data into a single flat database for WebHelpDesk to pull from.

First, we’ll need to modify settings.py. On the Docker host:

  1. mkdir -p /usr/local/sal_data/settings/
  2. curl -o /usr/local/sal_data/settings/settings.py https://raw.githubusercontent.com/macadmins/sal/master/settings.py

Make the following changes to settings.py:
Add 'whdimport', (with the comma) to the end of the list of INSTALLED_APPS.
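
A quick sanity check that the edit took (the exact line number and surrounding formatting of INSTALLED_APPS will vary with the version of settings.py you downloaded):
grep -n "whdimport" /usr/local/sal_data/settings/settings.py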

Next, we’ll clone a copy of MacModelShelf:
git clone https://github.com/nmcspadden/MacModelShelf.git /usr/local/sal_data/macmodelshelf

MacModelShelf was originally developed by Per Olofsson, but this version is my fork that uses a JSON database, which seems to improve cross-platform compatibility. The purpose of cloning a local copy is to preserve the JSON database, which is automatically populated with model lookups. By keeping a local copy, we can safely spin WebHelpDesk containers up and down without losing any of our lookup data (which may save milliseconds in future lookups).

Run the Sal DB and Setup Scripts:

First, we create a data-only container for Sal’s Postgres database, and then run the Postgres database. We can specify all the variables at runtime using the -e arguments. The only thing you’ll need to change below is the password.

  1. docker run --name "sal-db-data" -d --entrypoint /bin/echo grahamgilbert/postgres Data-only container for postgres-sal
  2. docker run --name "postgres-sal" -d --volumes-from sal-db-data -e DB_NAME=sal -e DB_USER=saldbadmin -e DB_PASS=password --restart="always" grahamgilbert/postgres

Run the JSS Import DB:

We do the same thing with the JSS Import container’s database. Again, change the password only. Note that we’re using a slightly different Postgres container for this – the macadmins/postgres instead of grahamgilbert/postgres.

  1. docker run --name "jssi-db-data" -d --entrypoint /bin/echo macadmins/postgres Data-only container for jssimport-db
  2. docker run --name "jssimport-db" -d --volumes-from jssi-db-data -e DB_NAME=jssimport -e DB_USER=jssdbadmin -e DB_PASS=password --restart="always" macadmins/postgres

Run the WHD DB:

There’s a theme here – change the password for WebHelpDesk’s Postgres database.

  1. docker run -d --name whd-db-data --entrypoint /bin/echo macadmins/postgres Data-only container for postgres-whd
  2. docker run -d --name postgres-whd --volumes-from whd-db-data -e DB_NAME=whd -e DB_USER=whddbadmin -e DB_PASS=password --restart="always" macadmins/postgres

Run Temporary Sal to Prepare Initial Data Migration:

Load a temporary container just for the purpose of setting up Sal’s Django backend to incorporate the WHDImport addition.

Note that we’re using --rm with this docker run command, because this is intended only to be a transient container for the purpose of setting up the database. It will remove itself when complete, but the changes to the database will be permanent.

docker run --name "sal-loaddata" --link postgres-sal:db -e ADMIN_PASS=password -e DB_NAME=sal -e DB_USER=saldbadmin -e DB_PASS=password -it --rm -v /usr/local/sal_data/settings/settings.py:/home/docker/sal/sal/settings.py macadmins/salwhd /bin/bash

This opens a Bash shell. From that Bash shell:

  1. cd /home/docker/sal
  2. python manage.py syncdb --noinput
  3. python manage.py migrate --noinput
  4. echo "TRUNCATE django_content_type CASCADE;" | python manage.py dbshell | xargs
  5. python manage.py schemamigration whdimport --auto
  6. python manage.py migrate whdimport
  7. exit
  8. After exiting, the temporary “sal-loaddata” container is removed.

Run Sal and Sync the Database:

Load up the Sal container and run “syncmachines” to get started. Change the passwords here to match what you used previously:

  1. docker run -d --name sal -p 80:8000 --link postgres-sal:db -e ADMIN_PASS=password -e DB_NAME=sal -e DB_USER=saldbadmin -e DB_PASS=password -v /usr/local/sal_data/settings/settings.py:/home/docker/sal/sal/settings.py --restart="always" macadmins/salwhd
  2. docker exec sal python /home/docker/sal/manage.py syncmachines

Run JSSImport and Sync the Database:

Run the JSSImport container, which will pull the device list from Casper and sync it into the jssimport database.

If you haven’t already, set up an API-only user account in the JSS, and use those credentials below. Change the URL to match your Casper instance.

docker run --rm --name jssi --link jssimport-db:db -e DB_NAME=jssimport -e DB_USER=jssdbadmin -e DB_PASS=password -e JSS_USER=user -e JSS_PASS=password -e JSS_URL=https://casper.domain.com:8443 --restart="always" macadmins/jssimport

Although I haven’t tested this particular permutation, you could theoretically build a JSS Docker instance, link it to the jssimport container (--link jss:jss), and point the import at the link alias with something like -e JSS_URL=https://jss:8443.

Run WHD with its data-only container:

Now run WebHelpDesk with its linked databases.

  1. docker run -d --name whd-data --entrypoint /bin/echo macadmins/whd Data-only container for whd
  2. docker run -d -p 8081:8081 --link postgres-sal:saldb --link postgres-whd:db --link jssimport-db:jdb --name "whd" --volumes-from whd-data --restart="always" macadmins/whd

WebHelpDesk now has direct access to three linked databases – its own Postgres database, as db; the Sal database, known as saldb; and the JSS Import database, known as jdb. This will make it trivially easy to pull the data it needs.

Configure WHD Through Browser:

  1. Open your web browser on the Docker host: http://localhost:8081
  2. Set up using Custom SQL Database:
    1. Database type: postgreSQL (External)
    2. Host: db
    3. Port: 5432
    4. Database Name: whd
    5. Username: whddbadmin
    6. Password: password
  3. Skip email customization
  4. Set up an administrative account and password
  5. Skip the ticket customization

Setup Discovery Connections:

In WebHelpDesk, go to Setup > Assets > Discovery Connections. Make your two connections for Sal and the JSS.

  1. Set up discovery connection “Sal”:
    1. Connection Name: “Sal” (whatever you want)
    2. Discovery Tool: Database Table or View
    3. Database Type: PostgreSQL – uncheck Use Embedded Database
    4. Host: saldb
    5. Port: 5432
    6. Database Name: sal
    7. Username: saldbadmin
    8. Password: password
    9. Schema: Public
    10. Table or View: whdimport_whdmachine
    11. Sync Column: serial
  2. Set up discovery connection “Casper”:
    1. Connection Name: “Casper” (whatever you want)
    2. Discovery Tool: Database Table or View
    3. Database Type: PostgreSQL – uncheck Use Embedded Database
    4. Host: jdb
    5. Port: 5432
    6. Database Name: jssimport
    7. Username: jssdbadmin
    8. Password: password
    9. Schema: Public
    10. Table or View: casperimport
    11. Sync Column: serial

Now, you have a single web service that handles all inventory collection.

From here, if you wanted to schedule this for automation, you’d only need to run these two tasks regularly:

  1. docker exec sal python /home/docker/sal/manage.py syncmachines (since the sal container is daemonized and runs persistently).
  2. docker run --rm --name jssi --link jssimport-db:db -e DB_NAME=jssimport -e DB_USER=jssdbadmin -e DB_PASS=password -e JSS_USER=user -e JSS_PASS=password -e JSS_URL=https://casper.domain.com:8443 macadmins/jssimport (since this container is a fire-and-forget container that self-deletes on completion).

You could set up a crontab to run those two tasks nightly, and then set up WebHelpDesk’s internal syncs to its discovery connections to occur just an hour or so afterwards.
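
For example, a cron file along these lines would do it. This is a sketch only – the schedule, the docker binary path, and the /usr/local/bin/run-jssimport.sh wrapper (a small script holding the docker run command from step 2 above) are all assumptions to adapt to your environment:
# /etc/cron.d/inventory-sync
# 1:30 AM: refresh Sal's flat table for WebHelpDesk
30 1 * * * root /usr/bin/docker exec sal python /home/docker/sal/manage.py syncmachines
# 1:45 AM: pull the device list from Casper into the jssimport database
45 1 * * * root /usr/local/bin/run-jssimport.sh

WebHelpDesk’s own discovery connection syncs would then be scheduled an hour or so later from its web interface.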

Once your inventory data is aggregated, you could use other tools like my WHD-CLI script to access WebHelpDesk via a Python interpreter, allowing for more scriptability. This is also available in a Docker container.

Using WHD-CLI, you have instant scriptable access to your inventory system, which could be used for lots of neat things, including a way to guarantee that a Puppetmaster only signs approved devices. Lots to explore!

Optimizing Dockerfiles for Smaller Sizes

After spending lots of time blogging about my Docker images, I took some hard looks at my Dockerfiles and noticed a recurring pattern.

I have a lot of Github projects for various scripts and tools, and my general strategy for making Docker images out of those tools / scripts / services is to build a Dockerfile that clones the git repo into the container on build. This is fantastic for guaranteeing I always have the most up to date scripts in my containers – but it means I have to install git in every container.

Git has a lot of dependencies and brings a lot along for the ride, including a full Perl installation. That’s a lot of extra beef to throw into a container, and it doesn’t really make any sense to do so if the primary process of the container doesn’t rely on Git.

Rather than installing Git, we can use ADD directives in the Dockerfile to add a file from a remote URL. Since these projects are hosted on Github, we can download a tarball or zipfile of the repo and install from there – thus avoiding the use of git clone completely.

After learning about this, I went back and examined some previous Docker containers and found several candidates for optimization.

In all of the following examples, these changes have now been merged into the master branches and the Docker images have been updated.

JSSImport

Docker-JSSImport contains a Dockerfile that installs git to clone a Github repository.

Take a look at the Dockerfile as it existed at the time. Here’s a cut-down version of the important parts:

FROM ubuntu:14.04
<snip>
RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y python-setuptools
RUN apt-get install -y python-psycopg2
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN easy_install pip

RUN pip install python-jss

RUN git clone https://github.com/nmcspadden/JSSImport $APP_DIR

As you can see here, we install git from apt and then use git to clone the JSSImport script into a specific location. There’s no expectation that git will ever be used again in the lifetime of this container, since this container’s specific purpose is to run a Python script based on JSSImport, and then stop.

So git takes up space without providing any benefit.

Instead, we can convert this to use an ADD directive. Here’s the updated Dockerfile:

FROM debian
<snip>
RUN apt-get update && apt-get install -y python-setuptools python-psycopg2 && apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ADD https://github.com/sheagcraig/python-jss/tarball/master /usr/local/python-jss/master.tar.gz
RUN tar -zxvf /usr/local/python-jss/master.tar.gz --strip-components=1 -C /usr/local/python-jss && rm /usr/local/python-jss/master.tar.gz
WORKDIR /usr/local/python-jss
RUN python /usr/local/python-jss/setup.py install

ADD https://github.com/nmcspadden/JSSImport/tarball/master $APP_DIR/master.tar.gz
RUN tar -zxvf /home/jssi/master.tar.gz --strip-components=1 -C /home/jssi/ && rm /home/jssi/master.tar.gz

Rather than installing git at all, we use the ADD directive to download a tarball (a .tar.gz archive) of the entire repo to $APP_DIR/master.tar.gz. Then we use the tar command to decompress and extract the contents of that repo, and remove the tarball afterward.

This accomplishes the same thing as git clone, in that we always get an updated copy of the repo whenever this image is built, but we didn’t have to install git and all its dependencies.

In addition, we’ve removed pip from the install as well; instead we install python-jss using setuptools’ setup.py install method. Since pip brings a lot of friends along with it, this is a helpful savings.

By far, the biggest and most significant change is rebasing it on Debian instead of Ubuntu. The base Debian image is almost 100 MB smaller than the base Ubuntu image, and there’s nothing in the Ubuntu image that this image needs that Debian doesn’t have. Just by switching to Debian, we eliminate even more storage, with no loss in functionality.

So how does this help us out? Take a look at the Docker image sizes before and after. “macadmins/jssimport” was the before and “nmcspadden/jssimport” is the after:

[root@docker docker-jssimport]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nmcspadden/jssimport     latest              1aa70b93ee78        34 minutes ago      141.5 MB
macadmins/jssimport      latest              5fb1da38fa7a        4 days ago          288.4 MB

We shaved off nearly 147 megabytes by removing git and pip, and switching to Debian.

Munki-Puppet

The Munki-Puppet docker image is another example. Rather than install git, I installed wget to download a package to install. While wget is slimmer than git in terms of space taken up, it’s still unnecessary, since wget isn’t used at any point in the container’s script execution.

Here’s a snippet of what it looked like before:

RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y ca-certificates
RUN wget https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
RUN dpkg -i puppetlabs-release-wheezy.deb

If we remove wget completely in favor of ADD, we get the updated Dockerfile:

RUN apt-get update
RUN apt-get install -y ca-certificates
ADD https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb /puppetlabs-release-wheezy.deb
RUN dpkg -i /puppetlabs-release-wheezy.deb

What are the size savings?

REPOSITORY                       TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nmcspadden/munki-puppet   latest              0fd8d5026f84        6 minutes ago       186.5 MB
macadmins/munki-puppet    latest              270bece67420        3 days ago          191.4 MB

Only about 5 MB of difference this time. Not huge, but every byte counts, right?

Puppetmaster – WHDCLI

The Puppetmaster-WHDCLI project is another candidate for optimization, this one also using git in the Dockerfile.

Here’s the before:

RUN yum install -y git
RUN yum install -y python-setuptools
RUN yum clean all
RUN git clone git://github.com/kennethreitz/requests.git /home/requests
WORKDIR /home/requests
RUN python /home/requests/setup.py install
RUN git clone https://github.com/nmcspadden/WHD-CLI.git /home/whdcli
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install

Here’s the updated Dockerfile using ADD. Note that the default centos6 Docker image doesn’t actually come with tar, so I had to install tar manually instead of git. Tar is much smaller than git, so I still gain by doing this:

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz
WORKDIR /home/requests
RUN python /home/requests/setup.py install
ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install

Size savings:

REPOSITORY                       TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nmcspadden/puppetmaster-whdcli   latest              96eba2e80759        35 minutes ago      334.2 MB
macadmins/puppetmaster-whdcli    latest              4c9f1d4a3791        24 hours ago        554 MB

That’s a significant difference! We saved 220 MB from doing this.

WebHelpDesk

WebHelpDesk has a bit of an unfortunate Dockerfile, because SolarWinds doesn’t offer the RHEL RPM for download uncompressed. If the RPM were available on the internet as a plain .rpm file, we could install it directly from the URL and save time (and space!). Since it’s only available as an .rpm.gz, we do in fact have to download and decompress it before installing. That’s a regrettable use of space in a Docker image, but unless they change that, we don’t really have any other choice.

Previously, I used curl to download the rpm.gz file and then decompressed and installed it. Here’s the before version:

RUN curl -o webhelpdesk-12.2.0-1.x86_64.rpm.gz http://downloads.solarwinds.com/solarwinds/Release/WebHelpDesk/12.2.0/webhelpdesk-12.2.0-1.x86_64.rpm.gz
RUN gunzip webhelpdesk-12.2.0-1.x86_64.rpm.gz
RUN yum --enablerepo=base clean metadata
RUN yum install -y nano
RUN yum install -y webhelpdesk-12.2.0-1.x86_64.rpm
RUN rm webhelpdesk-12.2.0-1.x86_64.rpm

Here’s the updated Dockerfile that doesn’t rely on curl:

ADD http://downloads.solarwinds.com/solarwinds/Release/WebHelpDesk/12.2.0/webhelpdesk-12.2.0-1.x86_64.rpm.gz /webhelpdesk.rpm.gz 
RUN gunzip -dv /webhelpdesk.rpm.gz
RUN yum install -y /webhelpdesk.rpm && rm /webhelpdesk.rpm && yum clean all

Using ADD is much cleaner and saves a few steps, although I did end up combining the yum install and rm commands onto one line – no need to have separate layers for those instructions.

Size savings:

REPOSITORY                      TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
nmcspadden/whd                  latest              23109b9ef528        3 minutes ago       993.2 MB
macadmins/whd                   latest              849a80f1b702        4 days ago           1.038 GB

Not a huge win, but roughly 45 MB is still something.

Conclusions

The general theme here is to avoid installing packages or tools in the Dockerfile just for the build if the running container is never going to use them. If we need to obtain files remotely, the ADD directive does that for us. Installing git, curl, wget, or some other remote download tool is a waste of space and time in a Docker build.

Update:
Calem Hunter provided a fantastic link about optimizing Dockerfiles even further by chaining together commands as much as possible:
http://www.centurylinklabs.com/optimizing-docker-images/

Another round of optimizations coming soon…

Using Puppet with WebHelpDesk to Sign Certs In, Yes, You Guessed It, Docker

In a previous post, I showed how to use Munki with Puppet SSL Client certificates in a Docker image.

In that example, the Puppetmaster image is set to automatically sign all certificate requests. Good for testing, but not a good idea for production use.

Instead, we should look into Puppet policy-based signing to sign requests only based on some credentials or criteria we control. This means that random nodes can’t come along and authenticate to the Puppet master, and it also means that the Puppet admin won’t have to manually sign every node’s certificate request. Manually signing works great for testing, but it quickly spirals out of control when you’re talking about dozens, or hundreds (or thousands) of machines.

Puppet’s policy-based autosigning allows us to execute a script. The exit code of that script determines whether a certificate is signed or not (exit code 0 means we should sign). So we need to write a script that will check something about the client that lets us determine it’s “ours” or “safe,” and sign accordingly – or reject.

Well, we have a really easy way to do that – why not look up the client in inventory? We have WebHelpDesk, with its customized Postgres database, which can track inventory for us. If we’re using WebHelpDesk for inventory (as I am), then an autosign script that checks the WHD inventory for ownership would be an effective way to screen cert requests.

One of WebHelpDesk’s best features, in my opinion, is its REST API, which allows us to make requests from WebHelpDesk’s backend in a more automated fashion than via the web interface. Using the REST API, we can develop scripts that will manage information for us – such as the one I wrote, WHD-CLI.

I’ve even made a separate Docker container for it (which is admittedly better documented than the original project), although we’re not actually going to use the container separately for this purpose (as there’s no way to get Puppet to use an autosign script that isn’t installed locally, so having it exist in a separate Docker container isn’t going to help us).

So, we have WebHelpDesk, which has inventory for our machines. We have a script, WHDCLI, which allows us to query WebHelpDesk for information about devices. We have the Puppetmaster container, which is running Puppet. Let’s combine them!

Building Puppetmaster with WHD-CLI installed:

The repo for this project is here. Start with the Dockerfile:

FROM macadmins/puppetmaster

MAINTAINER nmcspadden@gmail.com

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz
WORKDIR /home/requests
RUN python /home/requests/setup.py install
ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install
ADD puppet.conf /etc/puppet/puppet.conf
ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
ADD check_csr.py /etc/puppet/check_csr.py
RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

FROM macadmins/puppetmaster
Since we have a nice Puppet master container already, we can use that as a baseline to add our WHD-CLI scripts onto.

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz

Use ADD to download the Requests project. Requests is an awesome Python library for handling HTTP/S requests and connections, much more robust and much more usable than urllib2 or urllib3. Unfortunately, it’s not a standard library, so we’ll need to download a copy of the module in tarball form, then extract and install it ourselves.

WORKDIR /home/requests
The WORKDIR directive changes the working directory to /home/requests for the instructions that follow. This is equivalent to doing cd /home/requests.

RUN python /home/requests/setup.py install
Now we use the Python setuptools to install Requests so it’s available system-wide, in the default Python path.

ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install

The same thing happens here with WHD-CLI – download a tarball of the repo, extract it, change the working directory, and install the package.

ADD puppet.conf /etc/puppet/puppet.conf
In the Puppetmaster image, we already have a Puppet configuration file – but as I documented previously, it’s set to automatically sign all cert requests. Since we’re changing the behavior of the Puppet master, we need to change the configuration file to match our goals.

Here’s what the new puppet.conf looks like:

[agent]  
    certname        = puppetmaster  
    pluginsync      = true  
  
[master]  
    certname        = puppet  
    confdir	    = /opt/puppet  
    vardir	    = /opt/varpuppet/lib/puppet/  
    basemodulepath  = $confdir/site-modules:$confdir/modules:/usr/share/puppet/modules  
    factpath        = $confdir/facts:/var/lib/puppet/lib/facter:/var/lib/puppet/facts  
    autosign        = $confdir/check_csr.py  
    hiera_config    = $confdir/hiera.yaml  
    rest_authconfig = $confdir/auth.conf  
    ssldir          = $vardir/ssl  
    csr_attributes  = $confdir/csr_attributes.yaml  

The major change here is that the autosign directive is no longer set to “true.” Now it’s set to $confdir/check_csr.py, a Python script that will be used to determine whether or not a certificate request gets signed. Note also the csr_attributes = $confdir/csr_attributes.yaml directive – that’ll come into play in the script as well.

ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
Add in a default WHD-CLI configuration plist. This will be used by WHD-CLI to get API access to WebHelpDesk.

ADD check_csr.py /etc/puppet/check_csr.py
Here’s the actual script that will be run whenever a certificate request is received on the Puppet master. An in-depth look at it comes later.

RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

As we’ll see later in-depth, the script will log its results to a logfile in /var/log/check_csr.out. To prevent possible permissions and access issues, it’s best to create that file first, and make sure it has permissions where the Puppet master can read and write to it.

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

These last two commands are copies of those from the original Puppetmaster image. Since we’re adding in new stuff to /etc/puppet, it’s important for us to make sure all the appropriate files end up in the right place.

As usual, you can either build this image yourself from the source:
docker build -t name/puppetmaster-whdcli .
Or you can pull from the Docker registry:
docker pull macadmins/puppetmaster-whdcli

Crafting Custom CSR Attributes:

The goal of an autosign script is to take information from the client machines (the Puppet nodes) and determine if we can sign it based on some criteria. In this use case, we want to check if the client nodes are devices we actually own, or know about in some way. We have WebHelpDesk as an asset tracking system, that contains information about our assets (such as serial number, MAC address, etc.), and we already have a script that allows us to query WHD for such information.

So our autosigning script, check_csr.py, needs to do all of these things. According to Puppet documentation, the autosigning script needs to return 0 for a successful signing request, and non-zero for a rejection. A logical first choice would be to ask the client for its serial number, and then look up the serial number to see if the machine exists in inventory, and exit 0 if it does – otherwise reject the request.

The first question is, how do we get information from the client? This is where the csr_attributes.yaml file comes into play. See the Puppet documentation on it for full details.

In a nutshell, the csr_attributes.yaml file allows us to specify information from the node that goes into the CSR (certificate signing request), which can then be extracted by the autosigning script and parsed for relevance.

Specifically, we can use the CSR attributes to pull two facts: the serial number, and whether the machine is physical, virtual, or a Docker container.

This is the csr_attributes.yaml file that will be installed on clients:

---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: mySerialNumber  
  1.3.6.1.4.1.34380.1.2.1.2: facter_virtual  

The two extension_request prefixes are special Puppet OIDs that allow us to add attributes to the CSR – essentially they’re labels for what kind of data can be put into the CSR.

Here’s an example of what it looks like in a VMWare Fusion VM, after installation:

sh-3.2# cat /etc/puppet/csr_attributes.yaml   
---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: VMYNypomQeS5  
  1.3.6.1.4.1.34380.1.2.1.2: vmware  

The serial number has been replaced with what the VM reports, and the “virtual” fact is replaced by the word “vmware”, indicating that Facter recognizes this is a virtual machine from VMWare. This will be important in our script.

For convenience, I have a GitHub repo for installing these attributes (built with Whitebox Packages) available here. A release package is available for easy download.
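
If you’d rather not use the package, here’s a rough sketch of generating the same file by hand on an OS X client. Run it as root; it assumes Facter is installed and that Puppet’s confdir is /etc/puppet, and it approximates – but is not – the package’s actual postinstall script:
serial=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/ {print $4}')
virtual=$(facter virtual)
cat > /etc/puppet/csr_attributes.yaml << EOF
---
extension_requests:
  1.3.6.1.4.1.34380.1.2.1.1: ${serial}
  1.3.6.1.4.1.34380.1.2.1.2: ${virtual}
EOF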

The Autosigning Script:

The autosign script, when called from the Puppetmaster, is given two things. The hostname of the client requesting a certificate is passed as an argument to the script. Then the contents of the CSR itself are passed to the script via stdin. So our script needs to be able to parse an argument, and then read what it needs from stdin.

The full script can be found on GitHub. Here’s a pared-down version of the script, with many of the logging statements removed for easier blog-ability:

#!/usr/bin/python

import sys
import whdcli
import logging
import subprocess

LOG_FILENAME = '/var/log/check_csr.out'

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info('Start script')

hostname = sys.argv[1]

if hostname == "puppet":
	logger.info("It's the puppetmaster, of course we approve it.")
	sys.exit(0)

certreq = sys.stdin.read()

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

lineList = output.splitlines()

strippedLineList = [line.lstrip() for line in lineList]
strippedLineList2 = [line.rstrip() for line in strippedLineList]

try:
	trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
except:
	logger.info("No serial number in CSR. Rejecting CSR.")
	sys.exit(1)
	
serial_number = strippedLineList2[trusted_attribute1+1]
logger.info("Serial number: %s", serial_number)	  

try:
	trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
except:
	logger.info("No virtual fact in CSR. Rejecting CSR.")
	sys.exit(1)

physical_fact = strippedLineList2[trusted_attribute2+1]

if physical_fact == "virtual" or physical_fact == "vmware":
	logger.info("Virtual machine gets autosigned.")
	sys.exit(0)
elif physical_fact == "docker":
	logger.info("Docker container gets autosigned.")
	sys.exit(0)

# Now we get actual work done
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)
if not w.getAssetBySerial(serial_number):
	logger.info("Serial number not found in inventory.")
	sys.exit(1)

logger.info("Found serial number in inventory. Approving.")
sys.exit(0)

Let’s take a look at some of the notable parts of the script:

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
This sets the basic log level. This script has both INFO and DEBUG logging, so if you’re trying to diagnose a problem or get more information from the process, you could change level=logging.INFO to level=logging.DEBUG. It’s much noisier, so best for testing and probably not ideal for production.

Migrating the logging to standard out so that you can use docker logs is a good candidate for optimization.

hostname = sys.argv[1]
The hostname for the client is the only command line argument passed to the script. In a test OS X default VM, this would be “mac.local”, for example.

certreq = sys.stdin.read()
The actual contents of the CSR get passed in via stdin, so we need to read it and store it in a variable.

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

Here, we make an outside call to openssl. Puppet documentation shows that we can manually parse the CSR for the custom attributes using OpenSSL, so we do just that in a subprocess, passing the contents of certreq in via stdin. In essence, we are running:
/usr/bin/openssl req -noout -text
with the CSR piped in on standard input.
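
You can run the same inspection by hand against a pending request on the Puppet master to see exactly what the script sees – for example, using the requests path from the troubleshooting section below and an example hostname:
docker exec puppetmaster /usr/bin/openssl req -noout -text -in /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem | grep -A 1 "1.3.6.1.4.1.34380.1.2.1"

Each OID appears on its own line with its value on the line below it, which is why the script grabs index + 1 from the stripped line list.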

Once we do some text parsing and line stripping (since the CSR is very noisy about linebreaks), we can pull the first custom attributes, the serial number:
trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
If there’s no line in the CSR containing that data, that means the CSR didn’t have our csr_attributes.yaml installed (and is almost certainly not something we recognize, or at least not in a desired state and should be addressed). Thus, reject.

trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
Our second attribute is the Facter virtual fact. If we don’t find that either, then we still have an incorrect CSR, and thus it gets rejected.

if physical_fact == "virtual" or physical_fact == "vmware":
This was mostly for my own convenience, but I decided it was safe to Puppetize any virtual machine, such as a VMWare Fusion VM (or ESXi, or whatever). As VMs tend to be transient, I didn’t want to spend time approving these certs constantly as I spun test VMs up and down. Thus, they get autosigned.

elif physical_fact == "docker":
If it’s a Docker container getting Puppetized, autosign as well, for mostly the same reasons as above.

Once the CSR is parsed for its contents and some basic sanity checks are put into place, we can now actually talk to WebHelpDesk.
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)

Parse the .plist we passed in to the Puppetmaster image earlier for the API key and URL of WebHelpDesk, and load up the API. Note the False at the end of the WHD() call – that’s to specify that we don’t want verbose logging. If you’re trying to debug behavior and want to see all the details in the log file, specify True here (or drop the extra arguments and just call whdcli.WHD(whd_prefs), since the other three arguments are optional).

if not w.getAssetBySerial(serial_number):
This is the real meat, right here – w.getAssetBySerial() is the function call that checks to see if the serial number exists in WebHelpDesk’s asset inventory. If this serial number isn’t found, the function returns False, and thus we reject the CSR by exiting with status code 1.

Putting It All Together:

So, we’ve got WebHelpDesk in a Docker image, using our customized Postgres. We’ve got our new-and-improved Puppetmaster with WHD-CLI. We’ve got our client configuration install package. We have all the pieces to make it work, let’s assemble it into a nice machine:

  1. First, run the data container for the Postgres database for WHD:
    docker run -d --name whd-db-data --entrypoint /bin/echo macadmins/postgres-whd Data-only container for postgres-whd

  2. Run the Postgres database for WHD:
    docker run -d --name postgres-whd --volumes-from whd-db-data -e DB_NAME=whd -e DB_USER=whddbadmin -e DB_PASS=password macadmins/postgres

  3. Run WebHelpDesk:
    docker run -d -p 8081:8081 --link postgres-whd:db --name whd macadmins/whd

  4. Configure WebHelpDesk via the browser to use the external Postgres database (see the penultimate section on Running WebHelpDesk in Docker for details).

  5. Once WebHelpDesk is set up and you’re logged in, you need to generate an API key. Go to Setup -> Techs -> My Account -> Edit -> API Key: “Generate” -> Save.

  6. Copy and paste the API key into com.github.nmcspadden.whd-cli.plist as the value for the “apikey” key. If you haven’t cloned the repo for this project, you can obtain the file itself:
    curl -O https://raw.githubusercontent.com/macadmins/puppetmaster-whdcli/master/com.github.nmcspadden.whd-cli.plist

  7. Create a data-only container for Puppetmaster-WHDCLI:
    docker run -d --name puppet-data --entrypoint /bin/echo macadmins/puppetmaster-whdcli Data-only container for puppetmaster

  8. Run Puppetmaster-WHDCLI. Note that I’m passing in the absolute path to my whd-cli.plist file, so make sure you alter the path to match what’s on your file system:
    docker run -d --name puppetmaster -h puppet -p 8140:8140 --volumes-from puppet-data --link whd:whd -v /home/nmcspadden/com.github.nmcspadden.whd-cli.plist:/home/whdcli/com.github.nmcspadden.whd-cli.plist macadmins/puppetmaster-whdcli

  9. Complete the Puppetmaster setup:
    docker exec puppetmaster cp -Rf /etc/puppet /opt/

  10. Configure a client:

    1. Install Facter, Hiera, and Puppet on an OS X VM client (or any client, really – but I tested this on a 10.10.1 OS X VM).
    2. Install the CSRAttributes.pkg on the client.
    3. If your Puppetmaster is not available in the client’s DNS, you’ll need to add the IP address of your Docker host to /etc/hosts.
    4. Open a root shell (it’s important to run the Puppet agent as root for this test):
      sudo su
    5. Run the Puppet agent as root:
      # puppet agent --test
    6. The VM should generate a certificate signing request and send it to the Puppet master, which parses the CSR, notices that it’s a virtual machine, autosigns it, and sends the cert back.
  11. You can check the autosign script’s log file on the Puppetmaster to see what it did:
    docker exec puppetmaster tail -n 50 /var/log/check_csr.out

Here’s sample output from a new OS X VM:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: vmware
INFO:__main__:Virtual machine gets autosigned.

Here’s sample output from that same VM, but I manually changed /etc/puppet/csr_attributes.yaml so that the virtual fact is “physical”:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: physical
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): whd
INFO:__main__:Serial number not found in inventory.

Try this on different kinds of clients: Docker containers (a good candidate is the Munki-Puppet container which needs to run Puppet to get SSL certs), physical machines, other platforms. Test it on a machine that is not in WebHelpDesk’s inventory and watch it get rejected from autosigning.

Troubleshooting:

Manually run the script:

If you get a CSR that gets rejected and you’re not sure why, you can manually run the check_csr.py script itself on the rejected (or rather, disapproved) CSR .pem file. Assuming the hostname is “testvm.local”:
Run docker exec -it puppetmaster /bin/bash to open a Bash shell on the container, then:
cat /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem | /opt/puppet/check_csr.py "testvm.local"

Then, you can check the logs to see what the output of the script is. Assuming you’re still in the Bash shell on the container:
tail -n 50 /var/log/check_csr.out

Test WHDCLI:

If you’re running into unexpected failures with the autosigning scripts, or you’re not getting the results you expect, you can try manually running the WHDCLI to see where the problem might be:
docker exec -it puppetmaster /usr/bin/python
Once you’re in the Python interpreter, load up WHD-CLI:

>>> import whdcli
>>> whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
>>> w = whdcli.WHD(whd_prefs)

If you get a traceback here, it’ll tell you the reason why it failed – perhaps a bad URL, bad API key, or some other HTTP authentication or access failure. Embarrassingly, in my first test, I forgot to Save in WebHelpDesk after generating an API key, and if you don’t hit the Save button, that API key disappears and never gets registered to your WHD account.

Assuming that succeeded, try doing a manual serial lookup, replacing it with an actual serial number you’ve entered into WHD:

>>> w.getAssetBySerial("serial")

The response here will tell you what to expect – did it find a serial number? It’ll give you asset details. Didn’t find a match? The response is just False.

Conclusions

Important Note: Although this post makes use of Docker as the basis for all these tools, you can use the WHD-CLI autosign script with a non-Docker Puppet master and WebHelpDesk install to accomplish the same thing. You’d just need to change the WHD URL in the whd-cli.plist file.

One of the best aspects of Docker is that you can take individual pieces, these separate containers, and combine them into amazing creations. Just like LEGO or Minecraft, you take small building blocks – a Postgres database, a basic Nginx server, a Tomcat server – and then you add features. You add parts you need.

Then you take these more complex pieces and link them together. You start seeing information flow between them, and seeing interactions that were previously more difficult to setup in a non-Docker environment.

In this case, we took separate pieces – WebHelpDesk, its database, and Puppetmaster, and we combined them for great effect. Combine this again with Munki-Puppet and now you’ve got a secure Munki SSL environment with your carefully curated Puppet signing policies. There are more pieces we can combine later, too – in future blog posts.

Running WebHelpDesk in Docker

In a previous post, I walked through the process of creating a customized Postgres database container in Docker that accepts remote connections. The main purpose of that container is to use with WebHelpDesk, a Tomcat-based ticketing and asset tracking system.

WebHelpDesk does include an embedded Postgres database that it can use to keep track of its internal data (which is probably how many customers use it, including myself prior to Dockerizing it), but that gives us some problems. First off, we have to make sure we preserve all data in that Postgres database, so that it survives independent of the container. Secondly, the embedded Postgres version does not live in the same place as a “typical” Postgres install, and thus makes special configuration or tuning more awkward. By using an external database, we can exert far more control over the actual database’s running parameters (and make changes far more easily).

All of these are good reasons to use an external database, which is what we’re going to do with the macadmins/postgres image.

In this post, I’m going to run my Dockerized version of WebHelpDesk. This blog post is an extended version of the README for the WebHelpDesk Docker image.

Prepare the Data Container for the DB:

As always, we want to keep our data safe so that it’s independent of the container running the service. Run a data container for our customized Postgres image:
docker run -d --name whd-db-data --entrypoint /bin/echo macadmins/postgres Data-only container for postgres-whd
Now run the actual Postgres database, linking to our data container, passing in the appropriate environment variables (change the password, obviously):
docker run -d --name postgres-whd --volumes-from whd-db-data -e DB_NAME=whd -e DB_USER=whddbadmin -e DB_PASS=password macadmins/postgres
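
Before moving on, you can sanity-check that the database is accepting remote connections. This sketch borrows the stock postgres image purely as a throwaway psql client and assumes the password used above:
docker run --rm --link postgres-whd:db -e PGPASSWORD=password postgres psql -h db -U whddbadmin -d whd -c 'SELECT version();'

If that prints a PostgreSQL version string, WebHelpDesk will be able to reach the same database through its db link.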

Prepare the data container for WHD:

Same deal applies to the WebHelpDesk data container:
docker run -d --name whd-data --entrypoint /bin/echo macadmins/whd Data-only container for whd
And the actual container itself:
docker run -d -p 8081:8081 --link postgres-whd:db --name whd --volumes-from whd-data macadmins/whd
Here, we use -p 8081:8081 to map port 8081 in the container to our localhost:8081.

Configure WHD Through Browser:

The container is now running, so we can access it via the web browser at http://localhost:8081/ on the Docker host.

The first time you launch WebHelpDesk, it goes through its initial setup.

This is where we get our chance to tell it not to use its embedded database, but instead use our linked Postgres database. Use the following parameters:
1. Database type: postgreSQL (External)
2. Host: db
3. Port: 5432
4. Database Name: whd
5. Username: whddbadmin
6. Password: password
Obviously, if you changed any of the -e DB_XXXX environment variable values in the docker run command above, use those values here for the username, password, and database name. You can click the “Test” button to verify that the database connection works.

Note: if you try using a regular Postgres database, such as the default Postgres container, instead of the customized one, you’ll notice that the database connection will always fail.

You can skip email customization, it’s not required for setup.

Set up your preferred administrative account name & password. In the interest of best practices with security, consider using a username that isn’t “admin” for a production system.

Continue setup until you are asked to log in, and then use the credentials you specified above for name & password.

Some considerations

One significant note is that we’re running WebHelpDesk over HTTP, meaning it’s not secured. You’ll almost certainly want to configure WebHelpDesk for SSL before promoting into production use.

Additionally, you’ll probably want to get an actual SSL certificate, and not use a self-signed one.

Note that if you do set up SSL, Tomcat stores the private key for its SSL cert in its keystore, located at WebHelpDesk/conf/keystore.jks. This keystore will need to be preserved, because if the WebHelpDesk container is ever removed, so will that keystore, along with the private key that generated the CSR. If you spin another container up, your SSL certificate will most likely not be valid due to non-matching private keys in the keystore.
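
One simple safeguard is to copy the keystore out of the running container whenever it changes. The exact path depends on where the RPM installs WebHelpDesk inside the image – the /usr/local/webhelpdesk path below is an assumption, so locate it first:
docker exec whd find /usr/local -name keystore.jks
docker cp whd:/usr/local/webhelpdesk/conf/keystore.jks ./keystore.jks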

Configuring WebHelpDesk for SSL in Docker is a good topic for another blog post.

Running Munki with Puppet SSL Client Certificates

Previously, I showed how you can run Munki in a Docker container. Then, I talked about how to build Munki to use Puppet for SSL certificates.

Assuming you’ve got a running Puppetmaster image (which I talked about building previously), let’s run the Munki-Puppet image we just built.

Running the Container:

Run a data-only container to keep our data in:
docker run -d --name munki-data --entrypoint /bin/echo macadmins/munki-puppet Data-only container for munki

Run the Munki container by linking it to the Puppetmaster:
docker run -d --name munki --volumes-from munki-data -p 80:80 -p 443:443 -h munki --link puppetmaster:puppet macadmins/munki-puppet

The notable additions in this docker run command:
-p 443:443
Since we’re adding SSL support to the Nginx webserver, we want to make sure that the container is accessible at port 443, the default SSL port.
--link puppetmaster:puppet
The --link argument allows us to tell the Munki container that it can access any exposed ports from the Puppetmaster container by the DNS entry for “puppet”. Since the Puppet agent always tries to access “puppet” to check in, this means that the Munki container will have no trouble with Puppet.
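
Under the hood, --link writes an entry into the Munki container’s /etc/hosts (along with some environment variables), so you can confirm the alias is in place with:
docker exec munki cat /etc/hosts

You should see a line mapping the Puppetmaster container’s IP address to puppet.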

The first step after running the container is to check in with puppet:
docker exec munki puppet agent --test
Verify that it receives a signed certificate from the Puppetmaster.

Now we’ve got a running Nginx container with our Munki repo, except it’s still only serving content at port 80. We need to tell Nginx to use our SSL configuration.

Using SSL with Munki:

We have an empty Munki repo, so we should populate it first.

Once the repo has some content, we need to add in the Nginx SSL configuration.

You’ll need to edit the provided munki-repo-ssl.conf file so that the name of the .pem certificate files matches what Puppet actually generated. For example, when you ran docker exec munki puppet agent --test above, you probably got output like this:

Info: Creating a new SSL key for munki.sacredsf.org
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for munki.sacredsf.org
Info: Certificate Request fingerprint (SHA256): [snip]
Info: Caching certificate for munki.sacredsf.org
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for ca
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for munki.sacredsf.org
Info: Applying configuration version '1422039029'
Info: Creating state file /var/lib/puppet/state/state.yaml

You can see the name of the certificate from the Puppetmaster:
docker exec puppetmaster puppet cert list -all

+ "munki.sacredsf.org" (SHA256) [snip]
+ "puppet"             (SHA256) [snip] (alt names: "DNS:puppet", "DNS:puppet.sacredsf.org")

To be even more thorough, look in the Munki’s Puppet certs directory:
docker exec munki ls -l /var/lib/puppet/ssl/certs/

total 8
-rw-r--r--. 1 puppet puppet 1984 Jan 23 18:18 ca.pem
-rw-r--r--. 1 puppet puppet 2021 Jan 23 18:52 munki.sacredsf.org.pem

That confirms the name of my cert is “munki.sacredsf.org.pem”, so let’s put that into munki-repo-ssl.conf:

server {
  listen 443;

  ssl on;
  ssl_certificate /var/lib/puppet/ssl/certs/munki.sacredsf.org.pem;
  ssl_certificate_key /var/lib/puppet/ssl/private_keys/munki.sacredsf.org.pem;
  ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
  ssl_crl /var/lib/puppet/ssl/crl.pem;
  ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
  ssl_prefer_server_ciphers on;
  ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
  ssl_verify_client on;
  server_name munki;

  location /repo/ {
    alias /munki_repo/;
    autoindex off;
  }
}

The three important file paths that must be correct are ssl_certificate, ssl_certificate_key, and ssl_client_certificate. If any of these paths are wrong or can’t be found, Nginx will not start and your Docker container will immediately halt.
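
If the container does halt on startup, its logs will usually point at the offending directive:

docker logs munki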

For reference, the ssl_protocols and ssl_ciphers are configured for perfect forward secrecy.

Otherwise, the configuration for Nginx for the Munki repo remains the same as the non-SSL version – we’re serving the file path /munki_repo as https://munki/repo/.

To get this new SSL configuration into the Nginx container, we’ll need to edit the existing configuration. Unfortunately, the base Nginx container is extremely minimal and doesn’t have vi or nano or anything. We could either install a text editor into the container, or just use a shell trick:

cat munki-repo-ssl.conf | docker exec -i munki sh -c 'cat > /etc/nginx/sites-enabled/munki-repo.conf'
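
Before restarting anything, it's worth having Nginx validate the new configuration from inside the container – nginx -t parses the config and reports syntax or file-path errors without touching the running server:

docker exec munki nginx -t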

Since we’ve changed the contents of a configuration file, we’ll need to restart Nginx. Let’s do that gracefully with Docker:
docker stop munki
docker start munki
Stopping the container sends a graceful shutdown signal to Nginx, and starting the container brings it back up with the new configuration.
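
As an alternative to a full stop/start, you could send Nginx a reload signal and leave the container running. This is just a suggestion, and it assumes Nginx runs as PID 1 inside the container, as it does in the stock Nginx image:

docker kill --signal=HUP munki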

Configure the clients to use Munki with SSL:

Detailed instructions on configuring Munki with SSL certificates can be found on the official wiki, but I’m going to recreate the steps here.

All of the following steps should be done on your OS X Munki client.

  1. If you haven’t already, run puppet agent --test as root to get a signed certificate.
  2. Copy the certs into /Library/Managed Installs/:
    1. sudo mkdir -p /Library/Managed\ Installs/certs
    2. sudo chmod 0700 /Library/Managed\ Installs/certs
    3. sudo cp /etc/puppet/ssl/certs/mac.local.pem /Library/Managed\ Installs/certs/clientcert.pem
    4. sudo cp /etc/puppet/ssl/private_keys/mac.local.pem /Library/Managed\ Installs/certs/clientkey.pem
    5. sudo cp /etc/puppet/ssl/certs/ca.pem /Library/Managed\ Installs/certs/ca.pem
  3. Change the ManagedInstalls.plist defaults:
    1. sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "https://munki/repo"
    2. sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoCACertificate "/Library/Managed Installs/certs/ca.pem"
    3. sudo defaults write /Library/Preferences/ManagedInstalls ClientCertificatePath "/Library/Managed Installs/certs/clientcert.pem"
    4. sudo defaults write /Library/Preferences/ManagedInstalls ClientKeyPath "/Library/Managed Installs/certs/clientkey.pem"
    5. sudo defaults write /Library/Preferences/ManagedInstalls UseClientCertificate -bool TRUE
  4. Test out the client:

    sudo /usr/local/munki/managedsoftwareupdate -vvv --checkonly
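
If managedsoftwareupdate reports SSL errors, you can test the client certificate directly against the repo with curl, using the same files copied above (the catalog name "all" is just an example – use whatever exists in your repo):

sudo curl --cacert /Library/Managed\ Installs/certs/ca.pem \
  --cert /Library/Managed\ Installs/certs/clientcert.pem \
  --key /Library/Managed\ Installs/certs/clientkey.pem \
  https://munki/repo/catalogs/all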

Now you’ve got secure Munki communication from clients to server, using Puppet’s client certificates, all in Docker!

Running Munki in Docker

The base of this article was taken from my README.

In the previous post, I built a container that serves static files at http://munki/repo using Nginx.

Now that we have built the Docker image, let’s put it to use.

Data Containers

We’re going to hook up the Munki image to a data-only container. Data-only containers are a way of keeping data portable, rather than tying it to specific locations or configurations on the host OS. This data-only container serves one purpose: to store the contents of the Munki repo and share it with other containers. The benefit is that we can spin up and tear down as many Munki server containers as we want without losing any data (such as the contents of the Munki repo).

Creating a Data Container:

Create a data-only container to host the Munki repo:
docker run -d --name munki-data --entrypoint /bin/echo nmcspadden/munki Data-only container for munki

Let’s deconstruct this command into individual pieces:
docker run
That’s the base command for running a docker image.

-d
This runs the container in “detached” or “daemon” mode, where it runs in the background until we stop it (or it halts execution for some reason).

--name munki-data
Normally, when you run a Docker container, it picks a random human-readable name. Otherwise, you can refer to a container by its numerical ID number. By using --name, we can provide a meaningful name that we can refer to later. This container’s name is “munki-data”.

--entrypoint /bin/echo
The entrypoint for a Docker container is the command that executes when the container starts. This is how Docker containers run in the background – by auto-executing a command. More information about entrypoints can be found in the Docker documentation. This particular entrypoint is /bin/echo – meaning the container will simply echo something out and then halt.

nmcspadden/munki
The name of the image I’m running.

Data-only container for munki
Although it’s a bit hard to see from the syntax, this phrase is actually the argument to /bin/echo. The ultimate goal of this container is to execute this command on startup:
/bin/echo Data-only container for munki

For more info on data containers read Tom Offermann‘s blog post and the official documentation.

Note that after you run this image, the Docker container is stopped – check docker ps vs. docker ps -a. That’s okay – the data-only container doesn’t need to be running to contain data. It doesn’t use any system resources (except for file system space), and can be accessed for data by other containers while not running.
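
If you're curious, you can inspect the stopped container and confirm that it still carries the shared /munki_repo volume (the output format differs between Docker versions – older releases list a "Volumes" section, newer ones a "Mounts" section):

docker inspect munki-data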

Run the Munki container:

Now that we have a data container to store our data in, let’s run the Munki container and start the Nginx webserver.

docker run -d --name munki --volumes-from munki-data -p 80:80 -h munki nmcspadden/munki

Let’s deconstruct the new pieces of this command:

--volumes-from munki-data
The --volumes-from argument tells Docker to use any exposed volumes from another container – specifically, the shared volume /munki_repo from the data container named “munki-data”. This means /munki_repo is actually the same volume in both containers: whatever happens to it in one container is reflected in the other. The Munki container can make changes to and serve content from /munki_repo, and we can get rid of the Munki container without losing any data – it’s all still stored in the data container named “munki-data.”

-p 80:80
This maps port 80 from the container to port 80 on the host. In other words, it means we can access this container by going to http://localhost:80/ on the Docker host.

-h munki
The -h argument tells the Munki container that its hostname is “munki”. If your Docker host has a search domain configured in its DNS settings, it’ll append the search domain onto the hostname. This argument is technically optional, as none of the services we’re using in this example make use of it.

nmcspadden/munki
The name of the image we’re running.

Check with docker ps to make sure the image is running. If you access http://localhost:80/, you should see the default Nginx welcome page.
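
A quick curl from the Docker host also confirms Nginx is answering before involving a browser; expect an HTTP 200 response with a "Server: nginx" header:

curl -I http://localhost/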

Populate the Munki server:

Now we have an empty Munki repo running from Nginx. We should populate this repo with content.

Docker makes this nice and easy, because we can instantly spin up a new container with the functionality we want and link it to our existing containers. Since both the munki and munki-data containers share an exposed volume – /munki_repo/ – we can use a Samba/SMB container to share that directory out to an OS X client, where we can use the Munki tools to follow the Demonstration Setup.

For this task, we’ll use my SMB-Munki container:

  1. docker pull nmcspadden/smb-munki
  2. docker run -d -p 445:445 --volumes-from munki-data --name smb nmcspadden/smb-munki
  3. You will need to change permissions on the mounted share, as it’s currently owned by root. We want a simple SMB configuration where the guest is allowed read/write permissions. You can do this easily using docker exec:
    docker exec smb chown -R nobody:nogroup /munki_repo
    docker exec smb chmod -R ugo+rwx /munki_repo
    Check permissions now with docker exec smb ls -alF /munki_repo to make sure.
  4. From an OS X client, Go -> Connect to Server -> smb://docker_host_IP/ and authenticate as Guest to the Public share, which will mount as /Volumes/public/.
  5. Populate the Munki repo using the usual tools – munkiimport, manifestutil, makecatalogs, etc. Try making a site_default manifest.
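
For example, once you've imported a few packages over the share, you can rebuild the catalogs directly against the mounted volume (a sketch – it assumes the share mounted at /Volumes/public as described in step 4):

/usr/local/munki/makecatalogs /Volumes/public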

Anyone with SMB familiarity will note that this SMB configuration is rather weak on security and allows anyone to log in and make changes. But we’re not hosting a permanent SMB server – it’s better to think of the Docker container as an application that we quit when we’re done. To prevent unwanted access, we can simply stop the SMB container:
docker stop smb
That gracefully exits the SMB service and kicks off all clients. If we wanted to run it again, we simply start it:
docker start smb

Once you’ve got the Demonstration setup (or any setup) completed, you should be able to access a manifest via web browser easily:
http://localhost/repo/manifests/site_default

In the next post, I’ll talk about using Puppet to secure the repo with SSL.