Local-Only Manifests in Munki

A while back, a thread on Munki-Dev floated the idea of local-only manifests. After a long discussion, the final Pull Request was created and merged.

The idea behind local-only manifests is simple: if you specify a LocalOnlyManifest key in the preferences, Munki will look for that manifest name in /Library/Managed Installs/manifests. If it finds it, it’ll look for any managed_installs and managed_uninstalls specified inside, and concatenate those with whatever it gets from the Munki server. It’s an extra place to specify managed installs and uninstalls that is unique to the client.

Essentially, it moves the unique-client logic from the server to the client. As you scale upward in client count, huge numbers of unique server-side manifests induce significant overhead – a manifests directory with 10,000+ unique per-client manifests gets unwieldy. With the uniqueness moved client-side, the server only has to provide the common manifests.

There’s a lot of neat things you can do with this idea, so let’s explore some of them!

Hang Out With The Locals

While the basic idea of the local-only manifest is simple, the implementation has some fun details you can take advantage of.

Local-only manifests do not have any catalogs of their own. Instead, they inherit whatever catalog is provided by the manifest specified in the ClientIdentifier key. Thus, if your main manifest uses the catalog “release”, any items specified in the local-only manifest must also be in the “release” catalog (otherwise they’re treated like any other manifest item that isn’t in a catalog – which is to say that you will receive warnings).

Local-only manifests also don’t have their own conditional items. This is where interaction with third-party tools really begins to shine, but we’ll explore that later.

Because this is a unique manifest, you get the benefits that “real” manifests get. You can specify items to be installed here that are not provided as optional items in the server-side manifest (as long as they’re in the catalog). You can still get the server’s provided list of optional installs, and use the local-only manifest to determine what items become managed installs or removals.

This doesn’t absolve the Munki admin of taking care, though. It’s still possible for an item to be specified as a managed install in one manifest and a managed uninstall in another – and therefore trigger a collision. Local-only manifests are just as vulnerable to that as server-side manifests, and it’s easy for a client to contravene the server-side manifest and end up with undefined (or undesirable) behavior.

It’s my recommendation, therefore, that you split the purposes and logic behind the server-side and local-only manifests into separate functions – optional vs. mandatory.

One Manifest To Rule Them All

Because of the slightly limited nature of local-only manifests, it’s important to think of them as addenda to server-side manifests. The way to mentally separate these functions is to also separate “mine” vs. “yours” – the things I, the Munki admin, want your machine to have vs. the things you, the client, want your machine to have (or not have).

The easiest way to accomplish this is to completely remove managed_installs and managed_uninstalls from your server-side manifest. The server-side manifest thus becomes the self-service list and gatekeeper to all optional software. The Munki admins determine what software is available because they control the optional installs list as well as the catalogs, but the clients now have essentially free customizability without needing any ability to modify the servers.

Because the unique aspects of clients are now done client-side and not server-side, this allows an external management mechanism, like Chef or Puppet, to control what Munki manages on a client, without needing the ability to make changes to the repo. If your repo is in source control (and it should be!), this means that the only commits to the repo’s manifests are done by the Munki admins, and will only involve changes that generally affect the whole fleet.

Whence Comes This Mystical Manifest?

The local-only manifest moves the work from maintaining the manifest relationships on the server to maintaining them on the client. This is really only beneficial if you already have a mechanism in place to manage these files – such as a config management tool (Chef, Puppet, etc.).

Facebook CPE handles this with our cpe_munki cookbook for Chef. In addition to managing the installation and configuration of Munki, we also create a local-only manifest on disk and tell clients to use it. Manifests are just plists, and plists are just structured-data representations of dictionaries/hashes.

Nearly every programming language offers a mechanism for interacting with dictionaries/hashes in relatively easy ways, and Ruby (in both Chef and Puppet) allows for simple abstractions here.

Abstracting Local Manifests Into Simple Variables

I’m going to use pseudo-Ruby via Chef as the base for this, but the same principles will apply to any scripting language or tool.

The Process in pseudocode:


# Our managed installs and uninstalls:
my_list_of_managed_installs = [
  'GoogleChrome',
  'Firefox',
]
my_list_of_managed_uninstalls = [
  'MacKeeper',
]
# Read the file from the Managed Installs manifests directory
local = readInLocalManifestOnDisk('/Library/Managed Installs/manifests/extra_packages')
# Assign our local managed installs
local['managed_installs'] = my_list_of_managed_installs
# Assign our local managed uninstalls
local['managed_uninstalls'] = my_list_of_managed_uninstalls
# Write back to disk
writeLocalManifestToDisk(local, '/Library/Managed Installs/manifests/extra_packages')

The point of the pseudocode above is to show how a complex process – deciding what software is installed or removed on a machine – can be abstracted away and reduced to just two arrays.
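To make that a little more concrete, here’s a minimal sketch of what those two helper functions could look like in Python using plistlib (the helper names and the manifest path mirror the pseudocode; this Python is mine, not cpe_munki’s actual Ruby implementation):

#!/usr/bin/python3
"""Sketch of the read/write helpers from the pseudocode above."""
import os
import plistlib

MANIFEST = '/Library/Managed Installs/manifests/extra_packages'

def read_local_manifest(path):
    """Return the local manifest as a dict, or an empty dict if it's missing."""
    if not os.path.exists(path):
        return {}
    with open(path, 'rb') as f:
        return plistlib.load(f)

def write_local_manifest(manifest, path):
    """Write the manifest dict back to disk as an XML plist."""
    with open(path, 'wb') as f:
        plistlib.dump(manifest, f)

local = read_local_manifest(MANIFEST)
local['managed_installs'] = ['GoogleChrome', 'Firefox']
local['managed_uninstalls'] = ['MacKeeper']
write_local_manifest(local, MANIFEST)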

To add something to be installed on your client, you add to the local managed installs variable. Same for removals and its equivalent variable.

What you now have here is a mechanism by which you can use any kind of condition or trigger as a result of your config management engine to determine what gets installed on individual clients.

Use Some Conditioning, It Makes It All Smooth

Veteran Munki admins are very familiar with conditional items. Conditions can be used to place items in appropriate slots – managed installs/uninstalls, optionals, etc. They’re an extremely powerful aspect of manifests, and allow for amazing and complex logic and customization. You can also provide your own conditions using admin-provided conditionals, which essentially allow you to script any logic you want for this purpose.

Conditions in Munki are critical to success, but NSPredicates can be difficult and unintuitive. Admin-provided conditionals are a convenient way to get around complex NSPredicate logic by scripting what you want, but they require multiple steps:

  1. You have to write the scripting logic.
  2. You have to deploy the conditional scripts to the clients.
  3. You still have to write the predicates into the manifest.

They’re powerful but require some work to utilize.

In the context of a local-only manifest, though, all of the logic for determining what goes in is handled entirely by your management system. There’s technically no client-side evaluation of predicates happening, because that logic is handled by the management engine whenever it runs. This unifies your logic into a single codebase, which makes it easier to maintain, with fewer moving parts overall.

Some Code Examples

This is all implemented in Chef via IT CPE’s cpe_munki implementation, but here I’m going to give some examples of how to take this abstraction and use it.

In Chef, the local-only managed_installs is expressed as a node attribute, which is essentially a persistent variable throughout an entire Chef run. It’s an array of strings – a list of all the Munki item names that will be added to managed installs.

Thus, adding items in Chef is easy as pie:

node.default['cpe_munki']['local']['managed_installs'] << 'GoogleChrome'

Same goes for managed uninstalls:

node.default['cpe_munki']['local']['managed_uninstalls'] << 'MacKeeper'

Additionally, we specify in the Munki preferences that we have a local-only manifest called “extra_packages”:

{
  'DaysBetweenNotifications' => 90,
  'InstallAppleSoftwareUpdates' => true,
  'LocalOnlyManifest' => 'extra_packages',
  'UnattendedAppleUpdates' => true,
}.each do |k, v|
  node.default['cpe_munki']['preferences'][k] = v
end

After a Chef run, you’ll see the file in /Library/Managed Installs/manifests:

$ ls -1 /Library/Managed\ Installs/manifests
 SelfServeManifest
 client_manifest.plist
 extra_packages
 prod

If you look inside that file, you’ll see a plist with your managed installs and removals:

 


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>managed_installs</key>
<array>
<string>AnyConnect</string>
<string>Atom</string>
<string>Firefox</string>
<string>GoogleChrome</string>
<string>It Technical Support</string>
<string>iTerm2</string>
</array>
<key>managed_uninstalls</key>
<array>
<string>Tableau8</string>
</array>
</dict>
</plist>

When managedsoftwareupdate runs, it will concatenate the server-side manifest with the local manifest, as described above. The sample plist above ensures that six items will always be installed by Munki on my machine, and that “Tableau8” will always be uninstalled if present.

With a setup like this, anyone who can submit code to the Chef repo can easily configure their machine for whatever settings they want, and thus users have individual control over their own machines without needing the ability to access any of the server manifests.

Even If You Don’t Have Config Management

You can still benefit from local-only manifests without needing config management. Manifests, including local ones, are just plists, and there are lots of ways to manipulate plists already available.

You could also add items to your local manifest using defaults:

$ sudo defaults write /Library/Managed\ Installs/manifests/extra_packages managed_installs -array-add "GoogleChrome"

Note the issue mentioned above, though: it’s trivial for someone to add an item name that doesn’t exist in the catalog. Should that happen, the Munki client will generate warnings to your reporting engine. One benefit of using external config management is the ability to lint or filter out non-existent items and thus prevent such warnings.
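If you want a rough idea of what that linting could look like, here’s a sketch that filters the local manifest against a catalog Munki has already cached on the client (the catalog name “release” and the cached-catalog path are assumptions – adjust for your environment):

#!/usr/bin/python3
"""Sketch: drop local-manifest items that aren't in the cached catalog."""
import plistlib

CATALOG = '/Library/Managed Installs/catalogs/release'  # assumed catalog name
MANIFEST = '/Library/Managed Installs/manifests/extra_packages'

# A cached catalog is an array of pkginfo dicts; collect the item names.
with open(CATALOG, 'rb') as f:
    known_items = set(item['name'] for item in plistlib.load(f))

with open(MANIFEST, 'rb') as f:
    manifest = plistlib.load(f)

# Keep only items that actually exist in the catalog, to avoid warnings.
for key in ('managed_installs', 'managed_uninstalls'):
    manifest[key] = [i for i in manifest.get(key, []) if i in known_items]

with open(MANIFEST, 'wb') as f:
    plistlib.dump(manifest, f)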

Summary

Ultimately, the benefits here are obvious. Clients have the ability to configure themselves without needing any access to the Munki repo. In addition, your users and customers don’t even need to have an understanding of manifests or how they work in order to get results. The entire interaction they’ll have with Munki will be understanding that items added to managed_installs get installed, and items added to managed_uninstalls get removed.

Stay tuned for a follow-up blog post about how this fits into Facebook’s overall managed Munki strategy, and how source control plays an important role in this process.

Self-Service Adobe CC in Munki

Some Context

The following section is primarily a “state of the world” discussion of current Adobe licensing and deployment methods. If you’d rather skip the wall of text, jump straight to the technical details below.

Among the many common tasks of a Munki admin, dealing with Adobe will be one that consistently generates sighs, groans, and binge drinking. Veteran Munki admins are no stranger to the constant supply of hilarity provided by deploying Adobe packages, and it’s a common topic of discussion.  As of writing time, there are 697 results for “Adobe” on Munki-Dev.

The Munki wiki itself has pages devoted to handling Adobe products all the way back to CS3.  I wrote a significant chunk of the current wiki page on handling Adobe CC, and that was back when the 2015 versions were the first CC products to deal with.

Now, of course, it’s all changed again as Adobe has introduced new “hyperdrive” style packages from Creative Cloud Packager (CCP), which required yet more work from the Munki developers to accommodate. While the actual installer package might be slightly more sane and operate slightly faster, the overall process for generating and deploying them hasn’t changed much.

As you might infer from all of this, packaging, preparing, and deploying Adobe software has been an ongoing struggle, with no signs of lightening up.

Licensing Is My Favorite Thing, Just Like Sausage Made Of Balsa Wood

For the release of the Adobe CC products, Adobe also introduced a new licensing style – “named” as opposed to the previous “serialized.” CCP allowed you to generate packages that would install the products in either Named or Serialized format, but they required completely different work on the backend.

“Serialized” Adobe products are what most admins are used to, and most admins are likely deploying, due to the Byzantine nature of Adobe licensing for enterprises.

From a technical point of view, though, “Serialized” is a simple concept – you install the product itself, and then you install the license as well. The license on the computer is an opaque black box that Adobe manages that determines what software is or isn’t allowed to run, or maybe will expire in 32,767 days. When you install new products, you reapply the license. Simple in concept.

Oh, except for the part where uninstalling a single serialized product would remove the license for all serialized products.

What’s In A Name?

“Named” licenses are also simple in concept, and actually more simple in execution as well. A “named” license product is only available to a user via an Adobe ID, through the Creative Cloud Desktop App (CCDA). This requires a fundamentally different licensing agreement with Adobe than “serialized” licenses, which is why most Munki admins and Apple techs in general don’t have much control over it – we aren’t usually the ones who sign the Dump Trucks Full Of Money™ agreements with vendors. Someone in Upper Management™ usually makes those decisions, and often without any input from the people who have to do the bulk of the work.

If you’re lucky enough to have an ETLA style agreement with Adobe, or Creative Cloud For Teams, you can probably use “named” licenses. The fun part is that you can have license agreements for both “named” and “serialized”, either together, or separate, that may expire or require renewal at different times.

The good news, though, is that “named” licensing doesn’t really require that much extra work. There’s no license package that needs to be installed on the client, and Adobe’s CCDA basically does all the work for determining what software users are allowed to use. From a technical standpoint, this is much easier for both users and IT operators, because there’s just less surface area for things to go wrong.

54u142

With “named” licensing and the CCDA, there aren’t real “releases” anymore. Rather than releasing yearly (or more) product cycles like the old “Creative Suite” 1-6, product changes are released in smaller increments more regularly, and the CCDA keeps things up to date without the admins having to necessarily rebuild packages every time.

Although there’s no official word on this, my suspicion (and this is entirely my personal opinion) is that “serialized” licensing will eventually disappear. We’re already seeing products released only on CCDA via named licensing (Adobe Experience Manager), which to me sounds like a death knell for the old “build serial packages and send them off” system.

So if you read the writing on the wall that way, the future for building serialized packages via CCP seems grim (as if the present use of CCP wasn’t already dystopian enough). I’m frustrated enough with CCP, Adobe packages, and “Adobe setup error 79” that I’m actually looking forward to a named-license only environment.

But of course, we don’t want to lose the functionality we get with Munki. Allowing users to decide what software they get and allowing them to pick things on-demand is one of the most useful features of Munki itself!

Now that I’ve spent 800 words covering the context, let’s talk about implementation.

Craft Your Casus Belli, Claim Your Rightful Domain

The ultimate goal of this process is to set up named licensing, get our users loaded or synced up into it, and provide access to the software entitlements we’ve paid for.

There’s lots of ways to go about this, but as is Facebook custom, we like solving problems by over-engineering the living daylights out of them. So my methodology is to try and set up all the pieces I need for self service by utilizing Adobe’s User Management API. We want this process to be as user-driven as possible, mostly so that I don’t have to do all the work.

The Org-Specific Technical Stuff

If you aren’t already familiar with it, the Adobe Enterprise Dashboard is the central location for managing Adobe named licenses. In order to maximize our integration, we want to use Federated IDs, where accounts are linked to our Active Directory (AD) infra. There’s various pros and cons to this, but if you’ve already got an AD + SAML setup, this is a good use case for it.

Step one in this phase of the process is Claiming Your Domain, where we claim ownership over the domain matching the email addresses we expect our users to authenticate with. This does require submitting a claim to Adobe, and they verify it and provide a TXT record that must be served by your outward-facing DNS (so Adobe can verify that you own the domain you say you do).

Once your domain is claimed and set up, we wanted to utilize our Single Sign On (SSO) capability. Adobe uses Okta to connect to a SAML 2.0-compatible SSO environment, so you and the team that manages your identity settings will need to do some work with Adobe to make that work.

The details of this process are documented in the links above, and are generally specific to your organization, so there’s no need to go into them here.

Learning To Fly (with the API)

Despite my covering it in three paragraphs, the above section took the most work – mostly because so much of it was out of my control. Once you get past the difficult setup phase, the implementation of the User Management API becomes relatively painless – if you’re familiar with Python.

The good news is that the API is very thoroughly documented.

In order to utilize the API, you need a few pieces:

  • A certificate registered in the API
  • The private key for the cert for the API to auth with
  • The domain variables provided by the API certificate tool
  • Three custom Python modules – pyjwt, requests, cryptography
  • Python (2 or 3) – system Python is fine

Certified Genius

First, you’ll need to set up a new Integration in the Adobe I/O portal.

If you don’t have a certificate and its private key already available, you can generate a self-signed one:

$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out certificate_pub.crt

You can then upload this cert into the Adobe I/O portal.

Adobe doesn’t actually verify the cert for anything except confirmation that the private key and public key match, so there’s no technical reason, in terms of the API, why you can’t keep using a self-signed cert. It’s always good practice to use a real certificate, but for initial testing, this works just fine.

Upload the cert to your Integration, and it’ll provide you with the values you’ll need for crafting your config file below.

Once you’ve got a cert and the private key, you can start writing the API script.

SNAAAAAAKE, OH IT’S A SNAAAKE

Adobe’s sample scripts are quite thorough, and they use Python, which works perfectly for Mac admins. The downside, though, is that you’ll need to install three custom modules on any client that is going to use this script to access your API.

There’s a couple of ways to handle this, so it’s up to you to decide which one you want to pursue.

You can do it via pip:

sudo /usr/bin/python -m ensurepip
sudo /usr/bin/python -m pip install --ignore-installed --upgrade requests
sudo /usr/bin/python -m pip install --ignore-installed --upgrade pyjwt
sudo /usr/bin/python -m pip install --ignore-installed --upgrade cryptography

You can also download the source for each of those modules, build them manually, and then copy the built modules into a central location on the client where you can load them:

cd PyJWT-1.4.2
python setup.py build

Whatever method you prefer to use, you need to be able to run the Python interpreter and import each of those modules (specifically jwt and requests) successfully to use the API sample scripts.
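A quick way to sanity-check that (this snippet is just a convenience of mine, not part of Adobe’s samples):

#!/usr/bin/python
"""Sanity-check that the Adobe API script dependencies are importable."""
import sys

try:
    import jwt           # provided by the pyjwt package
    import requests
    import cryptography  # used by pyjwt for RS256 signing
except ImportError as err:
    sys.exit("Missing module: %s" % err)
print("All modules imported successfully.")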

The Config File

Next up is the crafting of your config file:

[server]
host = usermanagement.adobe.io
endpoint = /v2/usermanagement
ims_host = ims-na1.adobelogin.com
ims_endpoint_jwt = /ims/exchange/jwt

[enterprise]
domain = my domain
org_id = my organization id
api_key = my api key/client id
client_secret = my api client secret
tech_acct = my api client technical account
priv_key_filename = my private key filename from above

The values for the [enterprise] section are all provided by the Integration when you upload the cert you created.

For example, for Facebook, it might look something like this:

[enterprise]
domain = facebook
org_id = ABC123@AdobeOrg
api_key = abc123
client_secret = abc-123-456
tech_acct = abc123@techacct.adobe.com
priv_key_filename = private.key

The priv_key_filename must simply be the name (not the path!) of the file that contains your private key that you generated earlier.

Start Your Script

Most of the start of this script is ripped straight from the samples page:


#!/usr/bin/python
"""Adobe API tools."""
import sys
import time
import json
import os
try:
    import jwt
    import requests
except ImportError:
    sys.exit(0)
if sys.version_info[0] == 2:
    from ConfigParser import RawConfigParser
    from urllib import urlencode
    from urllib import quote
if sys.version_info[0] >= 3:
    from configparser import RawConfigParser
    from urllib.parse import urlencode
    # quote is needed later for URL-escaping product configuration names
    from urllib.parse import quote

The good news is that this (theoretically) works in both Python 2 and 3 (NOTE: I have not tested this in Python 3).

The initial part of the script just gets us the setup we need to make calls later. We’ll use jwt to create the JSON Web Token (which itself uses cryptography to sign the token with the private key via the “RS256” algorithm), and requests to make it easy to send GET and POST requests to the API endpoint.

You could write your own GET/POST tools, or use urllib2 or any pure Python method of accomplishing the same thing; requests isn’t technically a requirement. It just dramatically simplifies the process, and Adobe’s sample code uses it, so I decided to stick with their solution for now.

The Config Data

Before we can use the API, we’ll need to set up all the required variables and create the access token, the JSON web token, and the config data read from the file we created earlier. The Adobe sample documentation does this directly in a script, but I wanted to make it a bit more modular (i.e. I use functions).  It’s a little bit cleaner this way.

First, let’s parse the private key and user config:


def get_private_key(priv_key_filename):
    """Retrieve private key from file."""
    priv_key_file = open(priv_key_filename)
    priv_key = priv_key_file.read()
    priv_key_file.close()
    return priv_key


def get_user_config(filename=None):
    """Retrieve config data from file."""
    # read configuration file
    config = RawConfigParser()
    config.read(filename)
    config_dict = {
        # server parameters
        'host': config.get("server", "host"),
        'endpoint': config.get("server", "endpoint"),
        'ims_host': config.get("server", "ims_host"),
        'ims_endpoint_jwt': config.get("server", "ims_endpoint_jwt"),
        # enterprise parameters used to construct JWT
        'domain': config.get("enterprise", "domain"),
        'org_id': config.get("enterprise", "org_id"),
        'api_key': config.get("enterprise", "api_key"),
        'client_secret': config.get("enterprise", "client_secret"),
        'tech_acct': config.get("enterprise", "tech_acct"),
        'priv_key_filename': config.get("enterprise", "priv_key_filename"),
    }
    return config_dict

Next, we’ll need to craft the JSON web token, which needs to be fed the config data we read from the file earlier, and signed with the private key:


def prepare_jwt_token(config_data, priv_key):
    """Construct the JSON Web Token for auth."""
    # set expiry time for JSON Web Token
    expiry_time = int(time.time()) + 60 * 60 * 24
    # create payload
    payload = {
        "exp": expiry_time,
        "iss": config_data['org_id'],
        "sub": config_data['tech_acct'],
        "aud": "https://" + config_data['ims_host'] + "/c/" +
               config_data['api_key'],
        "https://" + config_data['ims_host'] + "/s/" + "ent_user_sdk": True
    }
    # create JSON Web Token
    jwt_token = jwt.encode(payload, priv_key, algorithm='RS256')
    # decode bytes into string
    jwt_token = jwt_token.decode("utf-8")
    return jwt_token

Yes, thank you, I realize “jwt_token” is redundant now that I look at it, but I’m not changing my code, dangit.

With the JWT available, we can craft the access token. This is where requests really comes in handy:


def prepare_access_token(config_data, jwt_token):
    """Generate the access token."""
    # Method parameters
    url = "https://" + config_data['ims_host'] + config_data['ims_endpoint_jwt']
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Cache-Control": "no-cache"
    }
    body_credentials = {
        "client_id": config_data['api_key'],
        "client_secret": config_data['client_secret'],
        "jwt_token": jwt_token
    }
    body = urlencode(body_credentials)
    # send http request
    res = requests.post(url, headers=headers, data=body)
    # evaluate response
    if res.status_code == 200:
        # extract token
        access_token = json.loads(res.text)["access_token"]
        return access_token
    else:
        # print response
        print(res.status_code)
        print(res.headers)
        print(res.text)
        return None

With all of these functions ready, it’s really easy to combine them together in a single convenient generate_config() function, which can be used by other public functions to handle all the messy work. The purpose of this function is to load up the config data and private key from a specific location on disk (rather than having to continually paste all of this into the Python interpreter).


def generate_config(userconfig=None, private_key_filename=None):
    """Return tuple of necessary config data."""
    # Get userconfig data
    if userconfig:
        user_config_path = userconfig
    else:
        # user_config_path = raw_input('Path to config file: ')
        user_config_path = '/opt/facebook/adobeapi_usermanagement.config'
    if not os.path.isfile(str(user_config_path)):
        print('Management config not found!')
        sys.exit(1)
    # Get private key
    if private_key_filename:
        priv_key_path = private_key_filename
    else:
        # priv_key_path = raw_input('Path to private key: ')
        priv_key_path = '/opt/facebook/adobeapi_private.key'
    if not os.path.isfile(str(priv_key_path)):
        print('Private key not found!')
        sys.exit(1)
    priv_key = get_private_key(priv_key_path)
    # Get config data
    config_data = get_user_config(user_config_path)
    # Get the JWT
    jwt_token = prepare_jwt_token(config_data, priv_key)
    # Get the access token
    access_token = prepare_access_token(config_data, jwt_token)
    if not access_token:
        print("Access token failed!")
        sys.exit(1)
    return (config_data, jwt_token, access_token)

Here, we’ve simply stored the private key and config file in /opt/facebook for easy retrieval. Feel free to replace this path with anything you like. The idea is that these two files – the private key and the config file – will be present on all the client systems that will be making these API calls.

Our config functions are all set up and good to go, so now it’s time to write the functions to actually interact with the Adobe API itself.

Let’s Ask the API For Some Data

All of the Adobe API queries use common headers in their requests. To save ourselves some time and avoid retyping the same thing repeatedly, let’s use a convenience function to return the headers we need:


def headers(config_data, access_token):
    """Return the headers needed."""
    headers = {
        "Content-type": "application/json",
        "Accept": "application/json",
        "x-api-key": config_data['api_key'],
        "Authorization": "Bearer " + access_token
    }
    return headers

Now that we have all the config pieces we need, let’s ask for some important pieces of data from the API – the product configuration list, the user list, and data about a specific user.


def _product_list(config_data, access_token):
    """Get the list of product configurations."""
    page = 0
    result = {}
    productlist = []
    while result.get('lastPage', False) is not True:
        url = "https://" + config_data['host'] + config_data['endpoint'] + \
            "/groups/" + config_data['org_id'] + "/" + str(page)
        res = requests.get(url, headers=headers(config_data, access_token))
        if res.status_code == 200:
            # print(res.status_code)
            # print(res.headers)
            # print(res.text)
            result = json.loads(res.text)
            productlist += result.get('groups', [])
        page += 1
    return productlist


def _user_list(config_data, access_token):
    """Get a list of all users."""
    page = 0
    result = {}
    userlist = []
    while result.get('lastPage', False) is not True:
        url = "https://" + config_data['host'] + config_data['endpoint'] + \
            "/users/" + config_data['org_id'] + "/" + str(page)
        res = requests.get(url, headers=headers(config_data, access_token))
        if res.status_code == 200:
            # print(res.status_code)
            # print(res.headers)
            # print(res.text)
            result = json.loads(res.text)
            userlist += result.get('users', [])
        page += 1
    return userlist


def _user_data(config_data, access_token, username):
    """Get the data for a given user."""
    userlist = _user_list(config_data, access_token)
    for user in userlist:
        if user['email'] == username:
            return user
    return {}

In order to control how much data is sent back from these queries (which can result in rather large sets of data), Adobe automatically paginates each request. The two list functions above both start at page 0 and continue to loop until the resulting request contains lastPage = True. Just keep in mind that each individual request will only give you a subset of the data.

With a list of product configurations, a list of all users, and the ability to ask for data on any specific user, we actually have nearly all of the data we’ll ever need. Rather than combining these pieces ourselves, we can also query some more specifics.

Here’s how to get a list of all users who currently have a specific product configuration entitlement:


def _users_of_product(config_data, product_config_name, access_token):
    """Get a list of users of a specific configuration."""
    page = 0
    result = {}
    userlist = []
    while result.get('lastPage', False) is not True:
        url = "https://" + config_data['host'] + config_data['endpoint'] + \
            "/users/" + config_data['org_id'] + "/" + str(page) + "/" + \
            quote(product_config_name)
        res = requests.get(url, headers=headers(config_data, access_token))
        if res.status_code == 200:
            # print(res.status_code)
            # print(res.headers)
            # print(res.text)
            result = json.loads(res.text)
            userlist += result.get('users', [])
        page += 1
    return userlist

With that data, it’s also easy to get a list of all products a given user has:


def _products_per_user(config_data, access_token, username):
    """Return a list of products assigned to user."""
    user_info = _user_data(config_data, access_token, username)
    return user_info.get('groups', [])

Enough Asking, It’s Time For Some Action!

With the above code, we’ve got the ability to ask for just about all the available data that we might care about. Now it’s time to start making some requests to the API that will allow us to make changes.

Hello, Goodbye, Mr. User

The obvious first choice here is the ability to create and remove a user. When I say “create a user”, I really mean “add a federated ID to our domain.” This is different from creating an Adobe ID (see the links far above for Adobe’s explanation of the difference between account types). Adobe does provide documentation for creating both types of accounts.


def _add_federated_user(
    config_data, access_token, email, country, firstname, lastname
):
    """Add user to domain."""
    add_dict = {
        'user': email,
        'do': [
            {
                'createFederatedID': {
                    'email': email,
                    'country': country,
                    'firstname': firstname,
                    'lastname': lastname,
                }
            }
        ]
    }
    body = json.dumps([add_dict])
    url = "https://" + config_data['host'] + config_data['endpoint'] + \
        "/action/" + config_data['org_id']
    res = requests.post(
        url,
        headers=headers(config_data, access_token),
        data=body
    )
    if res.status_code != 200:
        print(res.status_code)
        print(res.headers)
        print(res.text)
    else:
        results = json.loads(res.text)
        if results.get('notCompleted') == 1:
            print("Not completed!")
            print(results.get('errors'))
            return False
        if results.get('completed') == 1:
            print("Completed!")
            return True


def _remove_user_from_org(config_data, access_token, user):
    """Remove user from organization."""
    add_dict = {
        'user': user,
        'do': [
            {
                'removeFromOrg': {}
            }
        ]
    }
    body = json.dumps([add_dict])
    url = "https://" + config_data['host'] + config_data['endpoint'] + \
        "/action/" + config_data['org_id']
    res = requests.post(
        url,
        headers=headers(config_data, access_token),
        data=body
    )
    if res.status_code != 200:
        print(res.status_code)
        print(res.headers)
        print(res.text)
    else:
        results = json.loads(res.text)
        if results.get('notCompleted') == 1:
            print("Not completed!")
            print(results.get('errors'))
            return False
        if results.get('completed') == 1:
            print("Completed!")
            return True

You Get An Entitlement, YOU Get An Entitlement!

The next obvious choice is adding and removing product configurations to and from users:


def _add_product_to_user(config_data, products, user, access_token):
    """Add product config to user."""
    add_dict = {
        'user': user,
        'do': [
            {
                'add': {
                    'product': products
                }
            }
        ]
    }
    body = json.dumps([add_dict])
    url = "https://" + config_data['host'] + config_data['endpoint'] + \
        "/action/" + config_data['org_id']
    res = requests.post(
        url,
        headers=headers(config_data, access_token),
        data=body
    )
    if res.status_code != 200:
        print(res.status_code)
        print(res.headers)
        print(res.text)
    else:
        results = json.loads(res.text)
        if results.get('notCompleted') == 1:
            print("Not completed!")
            print(results.get('errors'))
            return False
        if results.get('completed') == 1:
            print("Completed!")
            return True


def _remove_product_from_user(config_data, products, user, access_token):
    """Remove products from user."""
    add_dict = {
        'user': user,
        'do': [
            {
                'remove': {
                    'product': products
                }
            }
        ]
    }
    body = json.dumps([add_dict])
    url = "https://" + config_data['host'] + config_data['endpoint'] + \
        "/action/" + config_data['org_id']
    res = requests.post(
        url,
        headers=headers(config_data, access_token),
        data=body
    )
    if res.status_code != 200:
        print(res.status_code)
        print(res.headers)
        print(res.text)
    else:
        results = json.loads(res.text)
        if results.get('notCompleted') == 1:
            print("Not completed!")
            print(results.get('errors'))
            return False
        if results.get('completed') == 1:
            print("Completed!")
            return True

If you’ve been looking carefully, you’ll note that all of these functions start with _, indicating that they’re intended to be private module functions. Although Python doesn’t really enforce this, the reason is that I wrote this module to have internal data functions and external/public convenience functions.

The public functions are all meant to be completely independent. The necessary work of generating the config data (the access token, JWT, etc.) should be abstracted away from the public use of these tools, and therefore we need internal functions to do all this work for us, and external public functions that others can call without needing to understand what they do.

We’ve covered all the private module functions, so now let’s get into the convenient public functions.

I’m Doing It For The Publicity

The public functions here should represent common queries that someone might want to use this module for.

Let’s start by providing a convenient list of Adobe product configurations:


def get_product_list():
    """Get list of products."""
    (config_data, jwt_token, access_token) = generate_config()
    productlist = _product_list(config_data, access_token)
    products = []
    for product in productlist:
        products.append(product['groupName'])
    return products

Take a look at this function, because you’ll see this same general strategy in all the rest of the public functions. We generate the config on the first line – by reading from the files on disk, and crafting the pieces we need on-demand. The config tuple is then used to feed the internal functions (in this case, _product_list() ). The end result is we get a nice Python list of all the product configurations, without any other unnecessary data.

We can do the same thing with users:


def get_user_list():
    """Get list of user emails."""
    (config_data, jwt_token, access_token) = generate_config()
    userlist = _user_list(config_data, access_token)
    names = []
    for user in userlist:
        names.append(user['email'])
    return names

Note that these two functions are essentially identical.

Straightforward request: does a user exist in our domain? Does this user already have a federated ID?


def user_exists(user):
    """Does the user exist already as a federated ID?"""
    (config_data, jwt_token, access_token) = generate_config()
    result = _user_data(
        config_data,
        access_token,
        user,
    )
    if result.get('type') == 'federatedID':
        return True
    return False

Note that the above function can be slightly misleading. It only returns True if the user’s type is “federated ID”. This doesn’t technically answer the question “does this user exist at all?”, but specifically “does this federated ID exist?”
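If you do want the broader question answered, a small variant (my own addition here, not part of the module as written above) could simply check whether any account data came back:

def user_exists_any_type(user):
    """Does the user exist in the org with any account type?"""
    (config_data, jwt_token, access_token) = generate_config()
    result = _user_data(
        config_data,
        access_token,
        user,
    )
    # _user_data returns an empty dict when the user isn't found at all
    return bool(result)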

Another useful query: does the user have a specific product entitlement?


def does_user_have_product(target_user, product):
    """Return True/False if a user has the specified product."""
    (config_data, jwt_token, access_token) = generate_config()
    membership = _products_per_user(config_data, access_token, target_user)
    return product in membership

While we’re on the topic of user management, here are public functions for adding and removing users:


def add_user(email, firstname, lastname, country='US'):
    """Add federated user account."""
    (config_data, jwt_token, access_token) = generate_config()
    result = _add_federated_user(
        config_data,
        access_token,
        email,
        country,
        firstname,
        lastname,
    )
    return result


def remove_user(email):
    """Remove user account."""
    (config_data, jwt_token, access_token) = generate_config()
    result = _remove_user_from_org(
        config_data,
        access_token,
        email,
    )
    return result

Finally, we get the last pieces we want – public functions to add and remove product entitlements to users:


def add_products(desired_products, target_user):
    """Add products to specific user."""
    (config_data, jwt_token, access_token) = generate_config()
    productlist = _product_list(config_data, access_token)
    userlist = _user_list(config_data, access_token)
    names = []
    for user in userlist:
        names.append(user['email'])
    products = []
    for product in productlist:
        products.append(product['groupName'])
    if target_user not in names:
        print("Didn't find %s in userlist" % target_user)
        return False
    for product in desired_products:
        if product not in products:
            print("Didn't find %s in product list" % product)
            return False
    result = _add_product_to_user(
        config_data,
        desired_products,
        target_user,
        access_token,
    )
    return result


def remove_products(removed_products, target_user):
    """Remove products from specific user."""
    (config_data, jwt_token, access_token) = generate_config()
    productlist = _product_list(config_data, access_token)
    userlist = _user_list(config_data, access_token)
    names = []
    for user in userlist:
        names.append(user['email'])
    products = []
    for product in productlist:
        products.append(product['groupName'])
    if target_user not in names:
        print("Didn't find %s in userlist" % target_user)
        return False
    for product in removed_products:
        if product not in products:
            print("Didn't find %s in product list" % product)
            return False
    result = _remove_product_from_user(
        config_data,
        removed_products,
        target_user,
        access_token,
    )
    return result

This module, all together, creates the adobe_tools Python module.

So… What Do I Do With This?

We have a good start here, but this is just the code to interact with the API. The ultimate goal is a user-driven self-service interaction with the API so that users can add themselves and get whatever products they want.

In order for Munki to make use of this, this module, along with the usermanagement.config and private.key files above, needs to be installed on your clients. There are a few different ways to make that happen, but shipping custom Python modules is outside the scope of this post. Suffice it to say, let’s assume that you get to the point where opening up the Python interpreter and typing import adobe_tools works.

We’re going to use Munki to make that happen, but we’ll need a little bit more code first.

Adding A User And Their Product On-Demand

Before we get into the Munki portion, let’s solve the first problem: easily adding a product to a user. We have all the building blocks in the module above, but now we need to put it together into a cohesive script.

This is the “add_adobe.py” script:


#!/usr/bin/python
"""Add Adobe products to user on-demand."""
import sys
# If you need to make sure this is always in your path, use:
# sys.path.append('/path/to/your/lib')
# Example:
# sys.path.append('/opt/facebook/lib')
import adobe_tools

target_product = sys.argv[1]


def getconsoleuser():
    """Get the current console user."""
    from SystemConfiguration import SCDynamicStoreCopyConsoleUser
    cfuser = SCDynamicStoreCopyConsoleUser(None, None, None)
    return cfuser[0]


me = getconsoleuser()
email = "%s@domain.com" % me
# I'm cheating a bit here, just go with it
firstname = me
lastname = me
country = 'US'


def log(message):
    """Log with tag."""
    print(
        'CPE-add_adobe',
        str(message)
    )


# Do I exist as a user?
if not adobe_tools.user_exists(email):
    log("Creating account for %s" % email)
    # Add the user
    success = adobe_tools.add_user(email, firstname, lastname, country)
    if not success:
        log("Failed to create account for %s" % email)
        sys.exit(1)

# Does the user already have the product?
log("Checking to see if %s already has %s" % (email, target_product))
already_have = adobe_tools.does_user_have_product(email, target_product)
if already_have:
    log("User %s already has product %s" % (email, target_product))
    sys.exit(0)

# Add desired product
log("Adding %s entitlement to %s" % (target_product, email))
result = adobe_tools.add_products([target_product], email)
if not result:
    log("Failed to add product %s to %s" % (target_product, email))
    sys.exit(1)

log("Done.")

You run this script and pass it a product configuration name. It detects the currently logged-in user, and if that user doesn’t already have a federated ID, it creates one. Then it checks to see if the user already has that product entitlement, and if not, it adds that product to the user.

There’s a bit of handwaving done there, and some assumptions made – especially in regard to the logged-in user and the email account. If you already have an existing mechanism for obtaining this data (such as code for doing LDAP queries, or some other endpoint/database you can query for this info), you can easily add that in.

This script needs to go somewhere accessible on your clients, so put it anywhere you think makes sense – /usr/local/bin, or /usr/local/libexec, or /opt/yourcompany/bin or anything like that. That’s up to you.

Feeding the Munki

At this point, we’ve got four items on the clients that we need:

  • /opt/facebook/lib/adobe_tools.py
  • /opt/facebook/bin/add_adobe.py
  • /opt/facebook/usermanagement.config
  • /opt/facebook/private.key

We’ve made the simple assumption that /opt/facebook/lib is in the Python path (as shown in the script above, a simple sys.path.append() can ensure that).

The only part left is providing the actual Munki items for users to interact with via Managed Software Center.app.

Although it isn’t covered in depth on the wiki, we can use Munki “nopkg” type items to simply run scripts without installing any packages. We’re going to combine this with using OnDemand style items so that users can click the “Install” button to get results done, but there’s no persistent state being checked. This essentially means we run the script every time the user clicks the button, which is why it’s important to be idempotent.

With everything on the client, our pkginfo is quite simple:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>OnDemand</key>
<true/>
<key>autoremove</key>
<false/>
<key>catalogs</key>
<array>
<string>testing</string>
</array>
<key>category</key>
<string>Adobe</string>
<key>display_name</key>
<string>Add Adobe Photoshop CC To My Panel</string>
<key>icon_name</key>
<string>AdobePhotoshopCC2015.png</string>
<key>installer_type</key>
<string>nopkg</string>
<key>minimum_os_version</key>
<string>10.11.0</string>
<key>name</key>
<string>AdobeCCAPI_Photoshop</string>
<key>postinstall_script</key>
<string>#!/bin/sh
/usr/bin/python /opt/facebook/bin/add_adobe.py "Default Photoshop CC – 0 GB Configuration"
</string>
<key>requires</key>
<array>
<string>AdobeCreativeCloudDesktopApp</string>
</array>
<key>unattended_install</key>
<true/>
<key>version</key>
<string>1.0</string>
</dict>
</plist>

Note that Adobe Creative Cloud Desktop App is listed as a requirement. That’s not entirely true, but I think it makes a bit more sense for the user that they get all the pieces they need to actually use the software after clicking the Install button.

I’ve also added in the icon for Photoshop CC, although that’s purely cosmetic.

Add this pkginfo to your repo, run makecatalogs, and try it out!  Logs look something like this:

CPE-add_adobe[85246]: Checking to see if nmcspadden@fb.com already has Default Photoshop CC - 0 GB Configuration
CPE-add_adobe[85250]: Adding Default Photoshop CC - 0 GB Configuration entitlement to nmcspadden@fb.com
CPE-add_adobe[85263]: Done.

After that, log into Adobe CCDA and the software will be listed there for installation.

Now Add Them All!

Add one of these pkginfos for each of your product configurations that you want users to select. The end result looks kind of nice:

[Screenshot: the new Adobe self-service items in Managed Software Center]

After clicking all of the buttons, CCDA looks very satisfied:

[Screenshot: the Creative Cloud Desktop App after the entitlements have been added]

Self-service Adobe CCDA app selection, using Munki and the Adobe User Management API. No more packaging, no more CCP!

 

Some Caveats and Criticisms

Despite the niftiness of this approach, there are some issues to be aware of.

The API Key Is A Megaphone

The main problem with this approach is that the API private key has no granular control over what it can and can’t do. The only thing you can’t do with the API private key is make a given user a “System Administrator” on the Enterprise dashboard. But you can add and remove user accounts, add and remove product entitlements for users, and make users product admins of whatever they want.

In most cases, this isn’t a huge deal, but there’s some potential for mischief here. If every single client machine has the private key and necessary config data to make requests to the API, any single client can do something like “remove all users from the domain.” What happens to your data stored in Creative Cloud if your federated ID is removed? I imagine we’d probably prefer not to find out the nuances of having your account removed while using it.

There are some different ideas to address this, though. Instead of storing the key and usermanagement config file on the disk persistently, we could potentially query an endpoint hosted internally for them and use them for the duration of the script. In this theoretical scenario, you could control access to that endpoint, perhaps requiring users to authenticate ahead of time, or logging / controlling access to it.
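Purely as a thought experiment, a sketch of that might look like the following (the internal URL is made up, and in reality you’d want authentication and access logging in front of it):

#!/usr/bin/python
"""Sketch: fetch API credentials at runtime instead of storing them on disk."""
import os
import tempfile
import requests
import adobe_tools

# Hypothetical internal endpoint that hands out the key and config
BASE_URL = 'https://internal.example.com/adobe-api'

tmpdir = tempfile.mkdtemp()
key_path = os.path.join(tmpdir, 'private.key')
config_path = os.path.join(tmpdir, 'usermanagement.config')

for url, path in ((BASE_URL + '/private.key', key_path),
                  (BASE_URL + '/usermanagement.config', config_path)):
    resp = requests.get(url)
    resp.raise_for_status()
    with open(path, 'w') as f:
        f.write(resp.text)

try:
    # generate_config() accepts explicit paths, so point it at the temp copies
    config = adobe_tools.generate_config(config_path, key_path)
finally:
    # Don't leave the credentials lying around after the run
    os.remove(key_path)
    os.remove(config_path)
    os.rmdir(tmpdir)

The public convenience functions above call generate_config() with no arguments, so in practice they’d also need a way to accept these temporary paths, but the idea is the same.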

Throttling Requests

One thing I didn’t mention above at all is that the number of requests in a given time frame needs to be throttled. Adobe has great documentation on this, including some exponential back-off code samples. We didn’t implement any of this in this initial proof of concept, but if you’re going to roll this out to a large production environment, you’ll almost certainly need to handle the return value indicating “too many requests.”
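As a rough sketch of the idea (my own, not Adobe’s sample back-off code), a retry wrapper around the GET calls might look like this – HTTP 429 is the standard “too many requests” status:

import time
import requests

def get_with_backoff(url, headers, max_retries=5):
    """GET a URL, backing off exponentially when the API throttles us."""
    for attempt in range(max_retries):
        res = requests.get(url, headers=headers)
        if res.status_code != 429:
            return res
        # Honor Retry-After if the API sends it, otherwise back off exponentially
        wait = int(res.headers.get('Retry-After', 2 ** attempt))
        time.sleep(wait)
    return res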

Munki State-Checks

If we wanted to take this further, we could actually turn off OnDemand for these Munki items. Using an installcheck_script, we could query whether or not a given product has been added for a given user; that would change the state of “Add Adobe Photoshop CC To My Panel” to installed, and the button in Munki would then correspond to adding or removing the entitlement from your account.
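A sketch of what that installcheck_script might look like (in Munki, an installcheck_script exiting 0 means “needs install” and non-zero means “already installed”; the email derivation here is the same handwaving as in add_adobe.py):

#!/usr/bin/python
"""Sketch installcheck_script: exit 0 if the entitlement still needs adding."""
import sys
from SystemConfiguration import SCDynamicStoreCopyConsoleUser
import adobe_tools

TARGET_PRODUCT = 'Default Photoshop CC - 0 GB Configuration'

user = SCDynamicStoreCopyConsoleUser(None, None, None)[0]
email = '%s@domain.com' % user

# Exit 0: Munki treats the item as not installed and offers the Install button.
# Exit 1: Munki treats the entitlement as already present.
if adobe_tools.does_user_have_product(email, TARGET_PRODUCT):
    sys.exit(1)
sys.exit(0)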

Generally, what I suspect is that most users will probably never particularly want to remove a product entitlement from themselves, since it doesn’t actually correspond to what’s installed or not. So changing Munki to reflect state probably doesn’t accomplish too much.

No Way To Trigger Installs

The one major feature I really wish existed is a way to trigger CCDA to install a product. All we can do is add or remove entitlements on user accounts; we can’t actually install the software for the user (through CCDA).

You could build a Named license package through CCP and actually distribute that directly in your Munki repo, but then you’re essentially back to the same point you were before: you still need to add the entitlement to the user, you still need to package each release / new version of the product, and you still need close to 60 GB (or more!) to store all of the CC packages. About the only thing you’re doing differently compared to serialized licenses is that you don’t have to worry about the serialization package anymore.

You can trigger updates using Remote Update Manager, but that doesn’t provide a mechanism to “Install Photoshop from CCDA.” So no matter what we do, we still rely on the user to log in to CCDA and press the button.

Bandwidth vs. Network

Because this method relies on the user installing from CCDA, the Adobe software is deployed from the internet. That means internet bandwidth is used for these installs, not local network bandwidth. For orgs with smaller internet pipes, this could be a significant cost or time sink.

As I mentioned above, if bandwidth is an issue, you could package up the named licenses with CCP and distribute them via Munki. That would allow you to use your local network bandwidth rather than internet pipes.

 

Final Summary

Well, it works.

Getting Started With CPE Chef

NOTE: This post does NOT include any information about setting up a Chef server. There is quite a bit of documentation on Chef’s own site, as well as blog posts around the internet (including my own older ones), for setting up a Chef server and getting that infrastructure started. This article can be done entirely in Chef Local Mode (which obviously does not require a Chef server), or with an existing Chef infrastructure.

Introduction

Facebook has recently open-sourced a number of its Mac-specific Chef cookbooks. These are the actual tools we use to manage certain features, using Chef’s config management model. In this blog post, I’m going to discuss how to use them, how to benefit from them, and what features they offer.

Target Audience

The target for this blog post is a Mac admin with a budding interest in config management. I will endeavor to explain things in a way that does not require a deep understanding of Chef, so please don’t run away screaming if you aren’t already a user of some config management system (like Chef, Puppet, etc.).  The goal here is to show what kind of benefits we get from using a system like this that aren’t really offered by other tools.

I’m new to Chef, what do I need to know?

Unsurprisingly, there are lots of results for a Google search of “Getting started with Chef”. I’ll generally point people to the official “basic idea” documentation on Chef’s website.

For this article, let me give you a brief rundown of Chef (which I may eventually spin into a new blog post).

Chef is a config management system structured as a set of operations that need to happen, which may or may not trigger based on certain other conditions you’ve specified. Ultimately, each cookbook contains one or more recipes – which tell Chef what operations to do – bolstered by helper code (libraries, resources, etc.).

The API Model

At Facebook, we try to design our cookbooks using an “API model.” That model is based on the idea that you have a set of variables (in Chef, they’re called attributes) that have basic sane defaults or values, and those variables can be overridden.

Each “API” cookbook will generally not do much on its own (or at least shouldn’t do anything harmful) unless the default values in the attributes are set to something useful.

Thus, the overall idea behind Facebook Chef is that you have a series of cookbooks that each do basic management operations – such as install profiles, launch daemons, manage a specific setting, etc. – based on what other cookbooks have put into those attributes.

The basic Chef model

The basic understanding of Chef you’ll need for this blog post is about Chef’s phases.  Chef has, essentially, two primary phases, compile time and run time:

  1. Compile time – first, Chef goes through all the cookbooks and loads up all the attributes it will use (think of these as “variables” that exist throughout the Chef run).
  2. Compile time part two – Chef builds a list of all the resources (think of them as “actions” that use these attributes for data) it will need to execute, in order.
  3. Run time (a.k.a. convergence) – Chef goes through the list of resources and executes all of them in order.

Facebook’s API model, as described above, is based on the idea that most interaction with these cookbooks will be entirely based on overriding the attributes with values you want. These values are gathered at compile time, and then consumed at run time. By using this philosophy, we can make some cool implementations of dynamic management routines.

I recommend reading through the Quick Start guide on Facebook’s Github repo to get a basic idea of how to use it.

Getting Your Feet Wet

The basic structure of CPE Chef

The first place we start, using Facebook CPE Chef, is the cpe_init cookbook. This is the jumping-off point for everything else that happens. As documented in the Quick Start guide, we’ll be using cpe_init as the cookbook that triggers all other cookbooks (which is provided by the quickstart.json file).

If you take a peek in cpe_init::mac_os_x_init.rb, you’ll see the overall cookbook run list that will actually happen – these are all the cookbooks that will run. On lines 18-22, the first item in the run list is cpe_init::company_init.rb.

company_init is where all the natural “overrides” are going to take place, where you can customize what you want to have happen on your client machines. As described in the “API model” section above, we’re going to use this recipe to set the values of the attributes to useful data, which will then be consumed by the API cookbooks during run time.

For this blog post, this will generally be the only file you’ll need or want to edit to see results.

Start with a clean slate

Let’s start with something simple. For now, take the default company_init and remove everything after line 21. You’ll need to keep lines 18-20 in order for the cpe_launchd and cpe_profiles cookbooks to function, though, and we’re going to be using them. Go ahead and replace the three occurrences of “MYCOMPANY” with whatever you want:

node.default['organization'] = 'pretendco'
node.default['cpe_launchd']['prefix'] = 'com.pretendco.chef'
node.default['cpe_profiles']['prefix'] = 'com.pretendco.chef'

QUICK CHEF TIP: In Chef parlance, node refers to the machine itself during a Chef run. node is a dictionary/hash of key/value pairs containing data about the node that lasts throughout the entire Chef run. Attributes from cookbooks are stored as keys in this node object, and can be accessed the way any dictionary/hash value is normally accessed – node[key]. Attributes are normally set in the attributes::default.rb part of a cookbook. To change the value of an attribute during a recipe, you’ll need to use node.default[key]. Trying to change a value without using node.default will result in a Chef compile error.

Let’s start with a simple example – setting a profile that controls the screensaver behavior.

Using cpe_screensaver to dynamically create a ScreenSaver profile

Controlling the ScreenSaver is relatively easy for Mac Admins – most of the relevant settings we’d want to manage can be done with a configuration profile that manages the com.apple.screensaver preference domain. Profiles are easy to install with most Mac management tools (MDM, Munki, etc.), so this is a simple win for Mac admins.

With Chef, we have a nice little toy called cpe_profiles, which allows us to dynamically specify what profiles we want installed, which are also dynamically created each time Chef runs. But we’ll get to the value of dynamic configuration soon.

The cpe_screensaver cookbook essentially does one thing – it generates a profile (in Ruby hash form) to manage the settings specified in its attributes, which is then fed to the cpe_profiles cookbook. cpe_profiles creates and installs all the profiles it was given at the end of the run.

In a bit more detail, cpe_screensaver sets up the namespace for the attributes we can override. You can see these in the cpe_screensaver::attributes file. It contains these three attributes:

default['cpe_screensaver']['idleTime'] = 600
default['cpe_screensaver']['askForPassword'] = 1
default['cpe_screensaver']['askForPasswordDelay'] = 0

QUICK CHEF TIP: The attributes file declares its attributes (and appropriate namespace) using the default[key] syntax. This both declares the existence of, and sets the default value for a node attribute, which can then be accessed during recipes with node[key], and modified during recipes with node.default[key].

For the screensaver, these three attributes correspond to keys we see in com.apple.screensaver. The idleTime attribute determines how much idle time (in seconds) must pass before the screensaver activates; the askForPassword attribute is a boolean determining whether or not unlocking the screensaver requires a password; and the askForPasswordDelay is how much time must pass (in seconds) after the screensaver locks before prompting for a password.

By default, we are mandating a 10-minute idle time before the screensaver locks, with a password required immediately after locking.

Let’s alter these values and then do our first Chef-zero run. In your company_init.rb file, we can override these attributes:

node.default['cpe_screensaver']['idleTime'] = 60
node.default['cpe_screensaver']['askForPassword'] = 0
node.default['cpe_screensaver']['askForPasswordDelay'] = 0

Save the changes, and run Chef-zero:

cd /Users/Shared/IT-CPE/chef
sudo chef-client -z -j quickstart.json

This will initiate a “local-only” Chef run (also known as a “Chef zero” run, where it creates its own local Chef server on demand and runs Chef against it).

Some relevant snippets of Chef output:

Recipe: cpe_screensaver::default
 * ruby_block[screensaver_profile] action run
 - execute the ruby block screensaver_profile

<snip>

Recipe: cpe_profiles::default
 * cpe_profiles[Managing all of Configuration Profiles] action run
 Recipe: <Dynamically Defined Resource>
 * osx_profile[com.pretendco.chef.screensaver] action install
 - install profile com.pretendco.chef.screensaver

In the (admittedly verbose) Chef output, you’ll see the section where cpe_profiles applies the “com.pretendco.chef.screensaver”. You can also verify this in System Preferences -> Profiles and see the Screen Saver settings being managed.

Success!

How does it work?

The interaction between your company_init changes, cpe_screensaver, and cpe_profiles is the core concept behind our API model.

To understand how we got to the point of a profile being installed, let’s go through the route that Chef took:

Compile Time

  1. Assemble recipes – cpe_init was called (thanks to the quickstart.json), which gave Chef a list of recipes to run. Among these recipes, company_init is going to be run first (as it is first in the run list). cpe_screensaver is added to the list, and finally cpe_profiles comes last. (This order is very important).
  2. Attributes – since Chef has a list of recipes it wants to run, it now goes through all the attributes files and creates the namespaces for each of the attributes. This is where cpe_screensaver‘s attributes are created and set to default values (which are specified in the cpe_screensaver::attributes file). At the same time, cpe_profiles also creates its namespace and attribute for node['cpe_profiles'].
  3. Assemble resources – now that all the attributes have been created with their default values, Chef identifies all the resources that are going to be run. This is also where all non-resource code gets processed, including attribute overrides (anything with node.default for example). This is the point where the node attributes for cpe_screensaver are changed by cpe_init::company_init.
    The first resource (relevant to our example) that is going to be run is that of cpe_screensaver, whose default recipe contains a ruby_block on line 16.
    cpe_profiles is last in the runlist, but it contains two resources that are going to be executed: the cpe_profiles:run default action and the cpe_profiles:clean_up action. (These are custom resources with custom actions, defined in the “cpe_profiles/resources” folder).

At the end of compile time, the resource run list will look like this:

  • cpe_screensaver::ruby_block
  • cpe_profiles::run
  • cpe_profiles::clean_up

Run Time

  1. Run the cpe_screensaver ruby_block – the resource run list is executed in order, and first in the list is this block.
    This ruby_block essentially does one thing – it creates a Ruby hash that will be used to create a mobileconfig plist file, and then assigns this mobileconfig plist to the cpe_profiles node attribute. In the profile payload, it sets the preference keys for the screensaver to the value of whatever is currently in the equivalent node attributes. Since those were just assigned in the company_init recipe, this profile will be created with the values we want.
  2. Run the cpe_profiles::run action – this action iterates through each object (mobileconfig plist) in the cpe_profiles node attribute (node['cpe_profiles']['com.pretendco.chef.screensaver']), writes that plist to disk as a .mobileconfig file, and then installs the profile (using /usr/bin/profiles). This part of the run is where the profile is actually installed.
  3. Run the cpe_profiles::clean_up action – in this example, it won’t do anything, but this will remove any profiles matching the prefix that are currently installed but not listed in the node attribute.

This is what makes the API model powerful – the interaction of multiple cookbooks together creates the desired state on the machine. By itself, cpe_profiles doesn’t do anything to the node. By itself, cpe_screensaver doesn’t do anything to the node. Similarly, by itself, cpe_init::company_init doesn’t do anything either.

Yet, similar in concept to a “model-view-controller” design pattern (used throughout Apple development), it’s a chain reaction of inputs and outputs. The model is set up by the attributes of all the cookbooks, whose data is then filled in by the company_init recipe. cpe_screensaver takes on the role of the controller in this analogy, in that it takes the data from company_init and turns it into useful data that it feeds to cpe_profiles. Then, the cpe_profiles recipe actually interacts with the node and installs the profiles (making it the closest thing to the “view”, which is where the user sees the interaction happen).

Awesome! Where do we go from here?

Hopefully this covered the basic underlying concept behind the API model used by CPE Chef. What we did here is dynamically generate a ScreenSaver profile simply by overriding three attribute variables. With this kind of framework in place, we can do a lot of really cool things.

Part two coming soon!

A Grim Tableau

One of the perks of working at a huge enterprise tech company is that I get to play with expensive enterprise software. In a shining example of naive optimism, I walked into the doors of Facebook expecting relationships with great software vendors, who listen to feedback, work with companies to develop deployment methods, and do cool things to make it easy to use their software that I couldn’t even have imagined.

The horrible bitter truth is that enterprise vendors are just as terrible at large-scale deployment as educational software vendors, except they cost more and somehow listen less.

One such vendor here is Tableau, a data visualization and dashboard engine. The data scientists here love it, and many of its users tell me the software is great. It’s expensive software – $2000 a seat for the Professional version that connects to their Tableau Server product. I’ll trust them that the software does what they want and has many important features, but it’s not something I use personally. Since our users want it, however, we have to deploy it.

And that’s why I’m sad. Because Tableau doesn’t really make this easy.

Enough Editorializing

As of writing time, the version of Tableau Desktop we are deploying is 9.3.0.

We deploy Tableau Desktop to connect with Tableau Server. I’ve been told by other users that using Tableau Desktop without Server is much simpler, as users merely have to put in the license number and It Just Works™. This blog post will talk about the methods we use of deploying and licensing the Tableau Desktop software for Professional use with Server.


Installing Tableau

The Tableau Desktop installer itself can be publicly downloaded (and AutoPkg recipes exist). It’s a simple drag-and-drop app, which makes installation easy.

If you are using Tableau Desktop with Tableau Server, the versions are important. The client and server versions must be in lockstep. Although I’m not on the team that maintains the Tableau Servers, the indication I get (and I could be wrong, so please correct me if so) is that backwards compatibility is problematic. Forward compatibility does not work – Tableau Desktop 9.1.8, for example, can’t be used with Tableau Server 9.3.0.

When a new version of Tableau comes out, we have to upgrade the server clusters, and then upgrade the clients. Until all the servers are upgraded, we often require two separate versions of Tableau to be maintained on clients simultaneously.

Our most recent upgrade of Tableau 9.1.8 to 9.3.0 involved this exact upgrade process. Since it’s just a drag-and-drop app, we move the default install location of Tableau into a subfolder in Applications. Rather than:

/Applications/Tableau.app

We place it in:

/Applications/Tableau9.1/Tableau.app
/Applications/Tableau9.3/Tableau.app

This allows easier use of simultaneous applications, and doesn’t pose any problem.

As we use Munki to deploy Tableau, it’s easy to install the Tableau dependencies / drivers for connecting to different types of data sources, using the update_for relationship for things like the PostgreSQL libraries, Simba SQL Server ODBC drivers, Oracle libraries, Vertica drivers, etc. Most of these come in simple package format, and are therefore easy to install. We have not noticed any problems running higher versions of the drivers with lower versions of the software – i.e. the latest Oracle Library package for 9.3 works with Tableau 9.1.8.

Since most of these packages are Oracle related, you get the usual crap that you’d expect. For example, the Oracle MySQL ODBC driver is hilariously broken. It does not work. At all. The package itself is broken: it installs a payload in one location, and then runs a postinstall script that assumes the files were installed somewhere else. It will never succeed. The package is literally the same contents as the tar file, except packaged into /usr/local/bin/. It’s a complete train wreck, and it’s pretty much par for the course for what you’d expect from Oracle these days.

Licensing Tableau

Tableau’s licensing involves two things: a local-only install of FLEXnet Licensing Agent, and the License Number, which can be activated via the command line. Nearly all of the work for licensing Tableau can be scripted, which is the good part.

The first thing that needs to happen is the installation of the FLEXnet Licensing package, which is contained inside Tableau.app:

/usr/sbin/installer -pkg /Applications/Tableau9.3/Tableau.app/Contents/Installers/Tableau\ FLEXNet.pkg -target /

Licensing is done by executing a command line binary inside Tableau.app called custactutil.

You can check for existing licenses using the -view switch:

/Applications/Tableau9.3/Tableau.app/Contents/Frameworks/FlexNet/custactutil -view

To license the software using your license number:
/Applications/Tableau9.3/Tableau.app/Contents/Frameworks/FlexNet/custactutil -activate XXXX-XXXX-XXXX-XXXX-XXXX

The Struggle is Real

I want to provide some context as to the issues with Tableau licensing.

Tableau licensing depends on the FLEXnet Licensing Agent to store its licensing data, which it then validates with Tableau directly. It does not have a heartbeat check, which means it does not validate that it is still licensed after its initial licensing. When you license it, it uses up one of your counts of seats that you’ve purchased from Tableau.

The main problem, though, is that Tableau generates a computer-specific hash to store your license against. So your license is tied to a specific machine, but that hash is not readable nor reproducible against any hardware-specific value that humans can use. In other words, even though you have a unique hash for each license, there’s no easy way to tell which computer that hash actually represents. There’s no tie to the serial number, MAC address, system UUID, etc.

Uninstalling Tableau / Recovering Licenses

The second problem, related to the first, is that the only way to get your license back is to use the -return flag:

/Applications/Tableau9.3/Tableau.app/Contents/Frameworks/FlexNet/custactutil -return <license_number>

What happens to a machine that uses up a Tableau license and then gets hit by a meteor? It’s still using that license. Forever. Until you tell Tableau to release your license, it’s being used up. For $2000.

So what happens if a user installs Tableau, registers it, and then their laptop explodes? Well, the Tableau licensing team has no way to match that license to a specific laptop. All they see is a license hash being used up, and no identifiable information. $2000.

This makes it incredibly difficult to figure out which licenses actually are in use, and which are phantoms that are gone. Since the license is there forever until you remove it, this makes keeping track of who has what a Herculean task.  It also means you are potentially paying for licenses that are not being used, and it’s nearly impossible to figure out who is real and who isn’t.

One way to mitigate this issue is to provide some identifying information in the Registration form that is submitted the first time Tableau is launched.

Registering Tableau

With the software installed and licensed, there’s one more step. When a user first launches Tableau, they are asked to register the software and fill out the usual fields:

(screenshot of the Tableau registration form)

This is an irritating unskippable step, BUT there is a way to save some time here.

The registration data is stored in a plist in the user’s Preferences folder:
~/Library/Preferences/com.tableau.Registration.plist

The required fields can be pre-filled by creating this plist yourself, with each field name prefixed by “Data.”, as in these keys:

 <key>Data.city</key>
 <string>Menlo Park</string>
 <key>Data.company</key>
 <string>Facebook</string>
 <key>Data.country</key>
 <string>US</string>
 <key>Data.department</key>
 <string>Engineering/Development</string>
 <key>Data.email</key>
 <string>email@domain.com</string>
 <key>Data.first_name</key>
 <string>Nick</string>
 <key>Data.industry</key>
 <string>Software &amp; Technology</string>
 <key>Data.last_name</key>
 <string>McSpadden</string>
 <key>Data.phone</key>
 <string>415-555-1234</string>
 <key>Data.state</key>
 <string>CA</string>
 <key>Data.title</key>
 <string>Engineer</string>
 <key>Data.zip</key>
 <string>94025</string>

If those keys are present before Tableau’s first launch, the form fields come pre-filled when you launch Tableau.

This saves the user the time of filling out the form by hand. All the user has to do is hit the “Register” button.

Once Registration has succeeded, Tableau writes a few more keys to this plist – all of which are hashed and unpredictable.

The Cool Part

In order to help solve the licensing problem mentioned before, we can put some identifying information into the registration fields. We can easily hijack, say, the “company” field as it’s pretty obvious what company these belong to. What if we put the username AND serial number in there?

<key>Data.company</key>
 <string>Facebook:nmcspadden:VMcpetest123</string>

Now we have a match-up of a license hash to its registration data, and that registration data gives us something useful – the user that registered it, and which machine they installed on. Thus, as long as we have useful inventory data, we can easily match up whether or not a license is still in use if someone’s machine is reported lost/stolen/damaged, etc.

The Post-Install Script

We can do all of this, and the licensing, in a Munki postinstall_script for Tableau itself:


#!/usr/bin/python
"""License Tableau."""
import os
import sys
import re
import subprocess
import pwd
import FoundationPlist


def run_subp(command, input=None):
    """
    Run a subprocess.
    Command must be an array of strings, allows optional input.
    Returns results in a dictionary.
    """
    # Validate that command is not a string
    if isinstance(command, basestring):
        # Not an array!
        raise TypeError('Command must be an array')
    proc = subprocess.Popen(command,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    (out, err) = proc.communicate(input)
    result_dict = {
        "stdout": out,
        "stderr": err,
        "status": proc.returncode,
        "success": True if proc.returncode == 0 else False
    }
    return result_dict


def getconsoleuser():
    '''Uses Apple's SystemConfiguration framework to get the current
    console user'''
    from SystemConfiguration import SCDynamicStoreCopyConsoleUser
    cfuser = SCDynamicStoreCopyConsoleUser(None, None, None)
    return cfuser[0]


tableau_dir = '/Applications/Tableau9.3/Tableau.app/Contents'
tableau_binary = "%s/MacOS/Tableau" % tableau_dir
cust_binary = "%s/Frameworks/FlexNet/custactutil" % tableau_dir
current_license = 'XXXX-XXXX-XXXX-XXXX-XXXX'

# Add in the registration data
registration = dict()
# Get the system serial number. For simplicity, this is abstracted out.
# This could be easily done by using subprocess to run:
# `system_profiler SPHardwareDataType`
# and searching for 'Serial Number'
serial = get_serial()
username = getconsoleuser()
# For simplicity, these values are hardcoded.
# You will need to have some way of looking up this information
# from your own directory source.
registration['Data.email'] = "email@domain.com"
registration['Data.first_name'] = "Nick"
registration['Data.last_name'] = "McSpadden"
registration['Data.company'] = 'Facebook:%s:%s' % (serial, username)
registration['Data.city'] = "Menlo Park"
registration['Data.country'] = "US"
registration['Data.department'] = "Engineering/Development"
registration['Data.industry'] = "Software & Technology"
registration['Data.phone'] = "650-555-1234"
registration['Data.state'] = "CA"
registration['Data.title'] = "Engineer"
registration['Data.zip'] = "94025"

# For simplicity, assume home directory in /Users
home_dir = os.path.join('/Users', username)
FoundationPlist.writePlist(
    registration,
    '%s/Library/Preferences/com.tableau.Registration.plist' % home_dir
)
os.chmod(
    '%s/Library/Preferences/com.tableau.Registration.plist' % home_dir,
    0644
)
os.chown(
    '%s/Library/Preferences/com.tableau.Registration.plist' % home_dir,
    pwd.getpwnam(username).pw_uid,
    -1
)

info_plist = os.path.join(tableau_dir, 'Info.plist')
version = FoundationPlist.readPlist(info_plist)['CFBundleShortVersionString']

# Install the licensing agent
# install_pkg() is a convenience function to call subprocess with
# /usr/sbin/installer
# Not provided in this post.
install_pkg(
    "\"%s/Installers/Tableau\ FLEXNet.pkg\"" % tableau_dir, untrusted=True
)

# Execute the binary to get current licenses (if any)
cust_output = run_subp([cust_binary, '-view'])['stdout']
if current_license in cust_output:
    print "Already licensed, exiting."
    print (
        'Tableau-Success',
        (
            'Machine is already licensed. Custactutil Stdout: %s (Username: %s, '
            'Serial: %s, Version: %s)' % (cust_output, username, serial, version)
        )
    )
    sys.exit(0)

# Activate Tableau and log failures
apply_license_cmd = [tableau_binary, '-activate', current_license]
shell_out = run_subp(apply_license_cmd)
if not shell_out['success']:
    print >> sys.stderr, (
        'Tableau-Fail',
        (
            'Applying license failed with error code: %s (Username: %s, Serial: %s, '
            'Version: %s)' % (shell_out['status'], username, serial, version)
        )
    )
else:
    # Check for fulfillment id and log results
    custactutil_stdout = run_subp([cust_binary, '-view'])['stdout']
    fulfillment_id = re.search(
        'Fulfillment ID: (FID[a-z0-9_]*)',
        custactutil_stdout
    )
    if fulfillment_id:
        print (
            'Tableau-Success',
            (
                'License activated and fulfillment id applied. %s (Username: %s, '
                'Serial: %s, Version: %s)' % (
                    fulfillment_id.group(0), username, serial, version
                )
            )
        )
    else:
        print >> sys.stderr, (
            'Tableau-Fail',
            (
                'License activated but no fulfillment id. Custactutil Stdout: %s '
                '(Username: %s, Serial: %s, Version: %s)' % (
                    custactutil_stdout, username, serial, version
                )
            )
        )
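
The postinstall script above relies on a get_serial() helper that isn’t shown. As a rough sketch of what it could look like (following the system_profiler approach hinted at in the comments above, not necessarily the implementation we actually use), something like this would do the job:

def get_serial():
    '''Returns the hardware serial number by running system_profiler and
    searching its output for the Serial Number line.'''
    hw_info = run_subp(['/usr/sbin/system_profiler', 'SPHardwareDataType'])
    for line in hw_info['stdout'].splitlines():
        if 'Serial Number' in line:
            return line.split(':')[-1].strip()
    return ''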

Some Good News

The better news is that as of Tableau 9.3, by our request, there’s now a way to pre-register the user so they don’t have to do anything here and never see this screen (and thus never have an opportunity to change these fields, and remove or alter the identifying information we’ve pre-populated).

Registration can be done by passing the -register flag to the main binary:

/Applications/Tableau9.3/Tableau.app/Contents/MacOS/Tableau -register

There are some caveats here, though. This is not a silent register. It must be done from a logged-in user, and it must be done in the user context. It can’t be done by root, which means it can’t be done by Munki’s postinstall_script. It doesn’t really help much at all, sadly. Triggering this command actually launches Tableau briefly (it makes a call to open and copies something to the clipboard). It does pretty much everything we don’t want silent flags to do.

It can be done with a LaunchAgent, though, which runs completely in the user’s context.

Here’s the outline of what we need to accomplish:

  • Tableau must be installed (obviously)
  • The Registration plist should be filled out
  • A script that calls the -register switch
  • A LaunchAgent that runs that script
  • Something to install the Launch Agent, and then load it in the current logged-in user context
  • Clean up the LaunchAgent once successfully registered

The Registration Script, and LaunchAgent

The registration script and associated LaunchAgent are relatively easy to do.

The registration script in Python:


#!/usr/bin/python
"""Register Tableau with a pre-filled Registration plist."""
import os
import sys
import subprocess
# You'll need to get this into your path if you don't have it
import FoundationPlist

reg_plist = os.path.join(
    os.path.expanduser('~'), 'Library', 'Preferences',
    'com.tableau.Registration.plist'
)
if (
    not os.path.exists(reg_plist) or
    not os.path.exists('/Applications/Tableau9.3')
):
    print "DOES NOT EXIST: %s" % reg_plist
    sys.exit(1)

thePlist = FoundationPlist.readPlist(reg_plist)
keys = thePlist.keys()
if len(keys) > 12:
    # Something other than the Data keys is present, so it's registered
    sys.exit(0)

cmd = [
    '/Applications/Tableau9.3/Tableau.app/Contents/MacOS/Tableau',
    '-register'
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = proc.communicate()
print out
if err:
    print err

Assuming we place this script in, let’s say, /usr/local/libexec/tableau_register.py, here’s a LaunchAgent you could use to invoke it:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.facebook.tableauregister</string>
<key>LimitLoadToSessionType</key>
<array>
<string>Aqua</string>
</array>
<key>ProgramArguments</key>
<array>
<string>/usr/local/libexec/tableau_register.py</string>
</array>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>

The LaunchAgent obviously goes in /Library/LaunchAgents/com.facebook.tableauregister.plist.

If you’re playing along at home, be sure to test the registration script itself, and then the associated LaunchAgent.

Loading the LaunchAgent as the logged in user

With the registration script and associated LaunchAgent ready to go, we now need to make sure it gets installed and loaded as the user.

Installing the two files is easy – we can simply package them up:


mkdir -p /tmp/tableauregister/Library/LaunchAgents
mkdir -p /tmp/tableauregister/usr/local/libexec
cp tableau_register.py /tmp/tableauregister/usr/local/libexec/
cp com.facebook.tableauregister.plist /tmp/tableauregister/Library/LaunchAgents/
chmod 644 /tmp/tableauregister/Library/LaunchAgents/com.facebook.tableauregister.plist
chmod 755 /tmp/tableauregister/usr/local/libexec/tableau_register.py
pkgbuild --root /tmp/tableauregister --identifier "com.facebook.tableau.register" --version 1.0 tableauregister.pkg

Import the tableauregister.pkg into Munki and mark it as an update_for for Tableau.

Now comes the careful question of how we load this for the logged in user. Thanks to the wonderful people of the Macadmins Slack, I learned about launchctl bootstrap (which exists in 10.10+ only). bootstrap allows you to load a launchd item in the context you specify – including the GUI user.

Our postinstall script needs to:

  1. Determine the UID of the logged in user
  2. Run launchctl bootstrap in the context of that user
  3. Wait for Tableau to register (which can take up to ~15 seconds)
  4. Verify Tableau has registered by looking at the plist
  5. Unload the LaunchAgent (if possible)
  6. Remove the LaunchAgent

Something like this should do:


#!/usr/bin/python
"""Load the Tableau registration launchd."""
import os
import time
import sys
import platform
import pwd
import subprocess
# You'll need to get this into your path if you don't have it
import FoundationPlist


def run_subp(command, input=None):
    """
    Run a subprocess.
    Command must be an array of strings, allows optional input.
    Returns results in a dictionary.
    """
    # Validate that command is not a string
    if isinstance(command, basestring):
        # Not an array!
        raise TypeError('Command must be an array')
    proc = subprocess.Popen(command,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    (out, err) = proc.communicate(input)
    result_dict = {
        "stdout": out,
        "stderr": err,
        "status": proc.returncode,
        "success": True if proc.returncode == 0 else False
    }
    return result_dict


def getconsoleuser():
    '''Uses Apple's SystemConfiguration framework to get the current
    console user'''
    from SystemConfiguration import SCDynamicStoreCopyConsoleUser
    cfuser = SCDynamicStoreCopyConsoleUser(None, None, None)
    return cfuser[0]


uid = pwd.getpwnam(getconsoleuser()).pw_uid
launcha = '/Library/LaunchAgents/com.facebook.tableauregister.plist'
cmd = [
    '/bin/launchctl', 'bootstrap',
    'gui/%s' % uid,
    launcha
]
# Bootstrap the registration launch agent
result = run_subp(cmd)
if not result['success']:
    print >> sys.stderr, ('CPE-TableauRegister: Failed to load launch agent.')
    sys.exit(1)

# Wait 15 seconds for Tableau to register
time.sleep(15)

# For simplicity, I'm making an assumption about the home directory
reg_path = os.path.join(
    '/Users', getconsoleuser(),
    'Library', 'Preferences',
    'com.tableau.Registration.plist'
)
iterations = 0
while True:
    if iterations > 10:
        # We waited almost a minute and it's still not registered
        print >> sys.stderr, ('CPE-TableauRegister: Unregistered after 10 tries.')
        sys.exit(1)
    reg_plist = FoundationPlist.readPlist(reg_path)
    if len(reg_plist.keys()) > 12:
        # More than 12 keys means it's registered
        break
    time.sleep(5)
    iterations += 1

# Once registered, we can remove the launch agent
# On 10.11, we can use 'launchctl bootout' to unload the launch agent first
currentOS = int(platform.mac_ver()[0].split('.')[1])
if currentOS >= 11:
    unload_cmd = [
        '/bin/launchctl', 'bootout',
        'gui/%s' % uid,
        launcha
    ]
    result = run_subp(unload_cmd)
    if not result['success']:
        print >> sys.stderr, ('CPE-TableauRegister: Failed to unload launch agent.')
os.remove(launcha)

Caveats

Note that launchctl bootout only exists on 10.11, not 10.10. For Yosemite users, simply deleting the LaunchAgent will have to suffice. There’s no huge risk here, as it will disappear the next time the user logs out / reboots.

This process does make certain assumptions, though. For one thing, it assumes that there’s only one user who cares about Tableau. Generally speaking, it’s uncommon for us that multiple users will sign into the same machine, much less have multiple users with different software needs on the same machine, so that’s not really a worry for me.

Tableau themselves make this assumption. If one user installs and registers Tableau, it’s registered and installed for all user accounts on that machine. Whoever gets there first “wins.” Tableau considers this a “device” license, thankfully, not a per-user license. In a lab environment where devices aren’t attached to particular users, this may be a win because the admin need only register it to their own department / administrative account / whatever.

Another simple assumption made here is that the user’s home directory is in /Users. I did this for simplicity in the script, but if this isn’t true in your environment, you’ll need to either hard-code the usual path for your clients’ home directories in, or find a way to determine it at runtime.
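
If you’d rather determine the home directory at runtime, the local account record already knows it. A minimal sketch using the pwd module (which the scripts above already import); dscl would work just as well:

import pwd

def get_home_dir(username):
    '''Returns the home directory recorded for the local user account.'''
    return pwd.getpwnam(username).pw_dir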

Lastly, this all assumes this is happening while a user is logged in. This works out okay if you make Tableau an optional install only, which means users have to intentionally click it in Managed Software Center in order to install. If you plan to make Tableau a managed install in Munki, you’ll need to add some extra code to make sure this doesn’t happen while there’s no user logged in. If that’s the case, you might want to consider moving some of the postinstall script for Tableau into the registration script invoked by the LaunchAgent.
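
For example, a guard like this at the top of the postinstall script would bail out cleanly when nobody is at the console (a sketch reusing the same SystemConfiguration call as the getconsoleuser() helper above; the _mbsetupuser check covers Setup Assistant):

import sys
from SystemConfiguration import SCDynamicStoreCopyConsoleUser

username = SCDynamicStoreCopyConsoleUser(None, None, None)[0]
if username in (None, u'loginwindow', u'_mbsetupuser'):
    # No real user at the console - exit quietly and let Munki try again later.
    sys.exit(0)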

Putting It Together

The overall process will go like this:

  1. Install Tableau Desktop 9.3.
  2. Postinstall action for Tableau Desktop 9.3: pre-populate the Registration plist, install FLEXnet, and license Tableau.
  3. Update for Tableau Desktop 9.3: install all associated Tableau drivers.
  4. Update for Tableau Desktop 9.3: install the LaunchAgent and registration script.
  5. Postinstall action for Tableau Registration: use launchctl bootstrap to load the LaunchAgent into the logged-in user’s context.
    1. Loading the LaunchAgent triggers Tableau to pre-register the contents of the Registration plist.
    2. Unload / remove the LaunchAgent.

Thus, when the user launches Tableau for the first time, it’s licensed and registered. Tableau now has a match between the license hash and a specific user / machine for easy accounting later, and the user has nothing in between installing and productivity.

What A Load of Crap

It’s frankly bananas that we have to do this.

I understand software development is hard, and enterprise software is hard, but for $2000 a copy, I kind of expect some sort of common sense when it comes to mass deployment and licensing.

Licensing that gets lost unless you uninstall it? No obvious human-readable match-up between hardware and the license number generated by hashing? Charging us year after year for licenses we can’t easily tell are being used, because there’s no heartbeat check in their implementation of FLEXNet?

Why do I have to write a script to license this software myself? Why do I have to write a separate script and a LaunchAgent to run it, because your attempt at silent registration was only ever tested in one single environment, where a logged in user manually types it into the Terminal?

Nothing about this makes sense, from a deployment perspective. It’s “silent” in the sense that I’ve worked around all the parts of it that aren’t silent and automated, by fixing the major gaps in Tableau’s implementation of automated licensing.  That still doesn’t fix the problem of matching up license counts to reality, for those who installed Tableau before we implemented the registration process. Tableau has been of no help trying to resolve these issues, and why would they? We pay them The Big Bucks™ for these licenses we may not be using. We used them at one point, though, so pay up!

This is sadly par for the course for the big enterprise software companies, who don’t seem to care much about how hard they make things for admins. Users love the product and demand it, so management coughs up the money, and the admins who have to spend considerable time and energy figuring out how to make that happen are the ones left to suffer. And nobody particularly cares.

Isn’t enterprise great?

Troubleshooting an Obscure DeployStudio Runtime Error

DeployStudio is an old hat classic ’round these parts, and many Mac admins are familiar with its foibles and idiosyncrasies. For those of you who haven’t moved on to Imagr yet, this sad story about troubleshooting DeployStudio may encourage you to hop onto the gravy train and off the failboat.

The story starts with a simple premise: my NetBoot clients would start up DeployStudio Runtime, but then would throw a repository access error when trying to mount the DS repository (which I have configured to be served via SMB):
(screenshot of the DeployStudio repository access error)

DeployStudio doesn’t like when this happens. It also doesn’t give you very useful information about what happened, because the repository access error triggers the “timeout until restart” countdown. If your timeout is set to an unforgiving number, i.e. 0 or 1 seconds, this results in an instant reboot without you being able to troubleshoot the environment at all.

There’s nothing really useful in the log about why it failed, or how, either. Not very helpful, there, DeployStudio.

I’m troubleshooting this remotely, so I don’t have physical access to these machines. I’m doing all this relayed through messages to the local field technicians.

What we know at this point: DS Runtime can’t mount the SMB repo share.

Step 1: Verify DS’s Repo share

Simplest thing: check the server to make sure SMB is running and that DS knows about it. That’s simple enough to do in the System Preferences’ DeployStudio pane, which will show the status of the service and the address of the DS repository it’s offering.

Just for kicks, let’s try restarting the DS service.

Did that work? Nope.

Step 2: Verify SMB sharing

Okay, DS thinks it’s fine. Maybe SMB itself is freaking out?
sudo serveradmin stop smb
sudo serveradmin start smb

Let’s try mounting the share directly on another client:
mkdir -p /Volumes/DS/
mount -t smbfs //username@serverIP/DeployStudio /Volumes/DS
ls /Volumes/DS/

Works fine. Well, gee, that’s both good news and disconcerting news, because if the share works fine on other clients, why are these DS clients not mounting it?

???

So at this point, we know the SMB share works on some clients, fails on other clients, but is otherwise configured correctly on the server. We approach Hour 3 of All Aboard The Fail Boat.

Okay, just in case, let’s try rebuilding the NBI using DS Assistant. Did that fix it? Nope.

Ping test from broken client to server. No packet loss. Connection looks solid.

Telnet test from broken client to server on SMB port. It connects. No firewall, no network ACLs, no change in VLAN, no weird stuff.

Packet capture. Spanning tree set up between ports to carefully monitor traffic. Why are 60% of these clients failing to mount the share, but 40% still working?

Tear your hair out in frustration. Move on to hour 4.

A Glimmer of Hope

Time to get ugly. We need more data to determine what’s happening, and part of that is figuring out the difference between successful SMB authentications and failed ones. To see that, we need log data.

Hat tip to Rich Trouton for helpfully pointing me to this link:
http://web.stanford.edu/group/macosxsig/blog/2011/08/enable-logging-with-107-smbx-w.html

SMB logging sounds good. On 10.10, the above link is an easy solution – just unload the SMB launchd, edit the plist to add in the -debug and -stdout options, reload on the launchd, and watch the system log.

On 10.11, it’s a bit more work – your best bet would be to disable Apple’s launchd for SMB, make a copy of it with a different identifier, and load that (hat tip to @elios in MacAdmins Slack for this).

Once we’ve got logging enabled, let’s look very carefully at a success vs. a failure.
Success:


Feb 1 14:50:58 server.facebook.com digest-service[36275]: digest-request: init request
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request: init return domain: FACEBOOK server: F5KP60PFF9VN indomain was: <NULL>
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request: uid=0
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request netr: failed user=FACEBOOK\username DC status code c000006d
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request: netr failed with -1073741715 proto=ntlmv2
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request: od failed with 2 proto=ntlmv2
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request: user=FACEBOOK\username
Feb 1 14:50:59 server.facebook.com digest-service[36275]: digest-request kdc: ok user=F5KP60PFF9VN\username proto=ntlmv2 flags: NEG_KEYEX, ENC_128, NEG_VERSION, NEG_TARGET_INFO, NEG_NTLM2, NEG_ALWAYS_SIGN, NEG_NTLM, NEG_SIGN, NEG_TARGET, NEG_UNICODE

Failure:


Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request: uid=0
Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request netr: failed user=FACEBOOK\username DC status code c000006d
Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request: netr failed with -1073741715 proto=ntlmv2
Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request: od failed with 2 proto=ntlmv2
Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request: user=FACEBOOK\username
Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request: kdc failed with -1561745592 proto=ntlmv2
Feb 1 14:52:18 server.facebook.com digest-service[36275]: digest-request: guest failed with -1561745590 proto=ntlmv2

This seems to be the key indicator of success:
kdc: ok user=F5KP60PFF9VN\username proto=ntlmv2
Compare that to the failure log:
kdc failed with -1561745592 proto=ntlmv2

Hmm, what the heck error code is that?

Googling got me to one specific hint, which is what gave the solution away:

Linux cifs mount with ntlmssp against an Mac OS X (Yosemite
10.10.5) share fails in case the clocks differ more than +/-2h:

The clock!

Well, That Was Obvious In Hindsight

I needed to verify the clock on one of the affected machines. Sure enough, the technician confirmed that the date was December 31, 1969. Definitely a bit more than 2 hours difference to the server.

In my defense, I’d like to remind you that I was troubleshooting this remotely and therefore couldn’t have noticed this without someone telling me and yes I’m rationalizing my failures stop looking at me like that I’m hideous don’t even look at me

The real question, then, is why this was happening. DeployStudio NBIs, when built via DeployStudio Assistant or Per Olofsson’s excellent AutoDSNBI, use an NTP server to sync up the date and time to prevent precisely this problem. What went wrong here?

The next silly thing: it turns out we had changed our NTP server, and I simply failed to notice. The old NTP server didn’t resolve anymore, which is why any client that happened to have a dead clock battery (and was therefore set back to the default time) failed to sync back up.

So the 60% fail rate we were seeing was essentially random luck against a pile of old machines, some of which had been powered off for so long that the clock battery ran out and the system time was reset.

Rebuilding the NBIs with the correct NTP server fixed the problem immediately.

The lesson from all of this?

Check the damn clock.

Adding JNLP files to Java Deployment Rulesets

Several Java updates back, Oracle introduced a feature to Java called Deployment Rulesets, which allowed enterprise deployment managers to whitelist specific sites to be able to run Java applets without providing warnings or errors to the end users.

There’s lots of good documentation about the general process, so I won’t cover it here. Check out the existing write-ups if this is new to you.

The Issue

I got a request from a user to add a certain site to the Deployment Ruleset, so I did the usual thing:

  <rule>
    <id location="*.domain.com" />
    <action permission="run" />
  </rule>

Except it didn’t work.

This site, rather than running the Java applet via the web, instead downloads a JNLP file. This JNLP file is essentially a bookmark that then downloads other .jar files into the Oracle cache, and then runs them locally, with the same Deployment rules.

The Deployment rules for JNLP files are a bit stricter than those for normal Java web apps. A simple URL rule isn’t sufficient to make it work.

After scouring around for some details on this, I did find a helpful post in Oracle’s community detailing how to use the certificate hash to approve all jar files from that domain instead. That way, as long as the same cert was used (which is generally the case), users would have permission to launch jar files that were downloaded and signed with that cert.

So the next obvious question is: how do we find the cert? Luckily, Oracle documents that too:
Get the Certificate Hash

Problem is, I didn’t know what jar file it was talking about. I only had a .jnlp file to work with.

The Cache

Thanks go to Michael Lynn for the hat tip on this one, otherwise I’d still be flabbergasted. When the .jnlp file is loaded by Java Web Start, it downloads all the jar files it needs into the Oracle cache.

Thanks to Oracle’s documentation, that’s located here:
~/Library/Application Support/Oracle/Java/Deployment/cache

Unfortunately, the cache isn’t very helpful. Inside the cache was a directory named 6.0, and inside there was a bunch of directories numbered 1-50. Inside each of those directories were pairs of files, named with random numbers, one with no extension, one with an .idx extension.

The hat tip from Michael Lynn is that those files without extensions actually are the .jar files, just unlabelled. If you’re lucky, you may be able to sort them by modification or creation time, to see which ones you actually want to work with. If you’re unlucky, there’s a way to figure out more precisely what file to look for:

  1. Open this file in a text editor:
    /Library/Application\ Support/Oracle/Java/Deployment/deployment.properties
  2. Add/change the following settings:
    deployment.trace.level=all
    deployment.javapi.lifecycle.exception=true
    deployment.trace=true
    deployment.log=true
  3. Run the .jnlp file, which will proceed to download the .jar files it needs (or validate them inside the cache folder).
  4. When you encounter the Deployment Rule Set violation exception, look in the logs folder:
    ~/Library/Application\ Support/Oracle/Java/Deployment/log/
  5. The last modified log will contain a ton of data, but somewhere in there will be the security message indicating a violation. It will look something like this (despite being a .log file, it’s actually XML):
<record>
  <date>2016-01-14T20:42:44</date>
  <millis>1452832964163</millis>
  <sequence>1036</sequence>
  <logger>com.sun.deploy</logger>
  <level>FINE</level>
  <class>com.sun.deploy.trace.LoggerTraceListener</class>
  <method>print</method>
  <thread>11</thread>
  <message>security: JUT Record:
    javaws application denied [Java applets for this domain have been blocked. Contact Help Desk for questions.]
    http://domain.com/JavaClient/:  app_model=*a whole lot of garbage*
</message>
</record>

This message is the “final” failure message indicating that the URL (in this example, "http://domain.com/JavaClient/:") failed, and produced the message specified by your Ruleset.xml default response (in this example, “Java applets for this domain have been blocked…”).

From here, you need to scroll farther back in the records to see the exact file that triggered this reaction:

<record>
  <date>2016-01-14T20:42:43</date>
  <millis>1452832963045</millis>
  <sequence>1029</sequence>
  <logger>com.sun.deploy</logger>
  <level>FINE</level>
  <class>com.sun.deploy.trace.LoggerTraceListener</class>
  <method>print</method>
  <thread>11</thread>
  <message>security: Validating cached jar url=http://domain.com/JavaClient/pcclient.jar ffile=/Users/nmcspadden/Library/Application Support/Oracle/Java/Deployment/cache/6.0/58/586c64fa-40bd2f37 com.sun.deploy.cache.CachedJarFile@660395a5
</message>
</record>

Bingo!

Finally, we have the path of the exact file – which, in our example above, is actually “pcclient.jar”, downloaded into the cache.

Once you’ve identified one of the cached jar files to work with, you can actually extract the certificate hash:
keytool -printcert -jarfile filename | more

This will get you output that looks like this:

keytool -printcert -jarfile ~/Library/Application\ Support/Oracle/Java/Deployment/cache/6.0/2/1b2c3982-2e369813
Signer #1:

Signature:

Owner: CN=..., O=..., L=..., ST=..., C=...
Issuer: CN=CA, OU=ou, O=Symantec Corporation, C=US
Serial number: <serial>
Valid from: Tue Feb 10 16:00:00 PST 2015 until: Wed Apr 11 16:59:59 PDT 2018
Certificate fingerprints:
MD5: <md5hash>
SHA1: <sha1hash>
SHA256: 89:14:B8:4A:F8:B3:2A:0D:3B:A1:49:28:D9:B1:6F:D6:CE:E4:2A:42:62:EB:C4:71:A1:E8:22:AE:84:8C:38:F1
Signature algorithm name: SHA256withRSA
Version: 3

That SHA256 hash is what you’re looking for, just without colons.
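
If you don’t feel like deleting the colons by hand, stripping them is a one-liner (shown in Python here purely for convenience):

fingerprint = '89:14:B8:4A:F8:B3:2A:0D:3B:A1:49:28:D9:B1:6F:D6:CE:E4:2A:42:62:EB:C4:71:A1:E8:22:AE:84:8C:38:F1'
print fingerprint.replace(':', '')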

You can now add that hash directly to your Java ruleset with the certificate attribute:

  <rule>
    <id>
      <certificate hash="8914B84AF8B32A0D3BA14928D9B16FD6CEE42A4262EBC471A1E822AE848C38F1" />
    </id>
    <action permission="run" />
  </rule>  

Test your JNLP and see if that works!

Introducing Facebook’s AutoPkg Script

AutoPkg Wrapper Scripts

There are myriad AutoPkg wrapper scripts/tools available out there.

They all serve the same basic goal – run AutoPkg with a selection of recipes, and trigger some sort of notification / email / alert when an import succeeds, and when a recipe fails. This way, admins can know when something important has happened and make any appropriate changes to their deployment mechanism to incorporate new software.

Everything Goes In Git

Facebook is, unsurprisingly, big on software development. As such, Facebook has a strong need for source control in all things, so that code and changes can always be identified, reviewed, tested, and if necessary, reverted. Source control is an extremely powerful tool for managing differential changes among flat text files – which is essentially what AutoPkg is.

Munki also benefits strongly, as all of Munki configuration is based solely on flat XML-based files. Pkginfo files, catalogs, and manifests all benefit from source control, as any changes made to the Munki repo will involve differential changes in (typically) small batches of lines relative to the overall sizes of the catalogs.
Obvious note: binary packages / files have a more awkward relationship with git and source control in general. Although it’s out of the scope of this blog post, I recommend reading up on Allister Banks’ article on git-fat on AFP548 and how to incorporate large binary files into a git repo.

Git + Munki

At Facebook, the entire Munki repo exists in git. When modifications are made or new packages are imported, someone on the Client Platform Engineering team makes the changes, and then puts up a differential commit for team review. Another member of the team must then review the changes, and approve. This way, nothing gets into the Munki repo that at least two people haven’t looked at. Since it’s all based on git, merging changes from separate engineers is relatively straightforward, and issuing reverts on individual packages can be done in a flash.

AutoPkg + Munki

AutoPkg itself already has a great relationship with git – generally all recipes are repos on GitHub, most within the AutoPkg GitHub organization, and adding a new repo generally amounts to a git clone.

My initial attempts to incorporate AutoPkg repos into a separate git repo were a bit awkward. “Git repo within a git repo” is a rather nasty rabbit hole to go down, and once you get into git submodules you can see the fabric of reality tearing and the nightmares at the edge of existence beginning to leak in. Although submodules are a really neat tactic, regulating the updating of a git repo within a git repo and successfully keeping this going on several end point machines quickly became too much work for too little benefit.

We really want to make sure that AutoPkg recipes we’re running are being properly source controlled. We need to be 100% certain that when we run a recipe, we know exactly what URL it’s pulling packages from and what’s happening to that package before it gets into our repo. We need to be able to track changes in recipes so that we can be alerted if a URL changes, or if more files are suddenly copied in, or any other unexpected developments occur. This step is easily done by rsyncing the various recipe repos into git, but this has the obvious downside of adding a ton of stuff to the repo that we may not ever use.

The Goal

The size and shape of the problem is clear:

  • We want to put only recipes that we care about into our repo.
  • We want to automate the updating of the recipes we care about.
  • We want code review for changes to the Munki repo, so each package should be a separate git commit.
  • We want to be alerted when an AutoPkg recipe successfully imports something into Munki.
  • We want to be alerted if a recipe fails for any reason (typically due to a bad URL).
  • We really don’t want to do any of this by hand.

autopkg_runner.py

Facebook’s Client Platform Engineering team has authored a Python script that performs these tasks: autopkg_runner.py.

The Setup

In order to make use of this script, AutoPkg needs to be configured slightly differently than usual.

The RECIPE_REPO_DIR key should be the path to where all the AutoPkg git repos are stored (when added via autopkg repo-add).

The RECIPE_SEARCH_DIRS preference key should be reconfigured. Normally, it’s an array of all the git repos that are added with autopkg repo-add (in addition to other built-in search paths). In this context, the RECIPE_SEARCH_DIRS key is going to be used to contain only two items – ‘.’ (the local directory), and a path to a directory inside your git repo that all recipes will be copied to (with rsync, specifically). As described earlier, this allows any changes in recipes to be incorporated into git differentials and put up for code review.

Although not necessary for operation, I also recommend that RECIPE_OVERRIDE_DIRS be inside a git repo as well, so that overrides can also be tracked with source control.

The entire Munki repo should also be within a git repo, obviously, in order to make use of source control for managing Munki imports.

Notifications

In the public form of this script, the create_task() function is empty. This can be populated with any kind of notification system you want – such as sending an email, generating an OS X notification to Notification Center (such as Terminal Notifier or Yo), filing a ticket with your ticketing / helpdesk system, etc.

If run as is, no notifications of any kind will be generated. You’ll have to write some code to perform this task (or track me down in Slack or at a conference and badger me into doing it).
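
As a minimal sketch of what create_task() could look like – assuming a plain email notification, with the addresses, SMTP host, and the function signature here being placeholders rather than the real internal implementation:

import smtplib
from email.mime.text import MIMEText

def create_task(subject, body):
    '''Sends a plain-text notification email about an AutoPkg run.'''
    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = 'autopkg@example.com'
    msg['To'] = 'cpe-team@example.com'
    smtp = smtplib.SMTP('smtp.example.com')
    smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
    smtp.quit()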

What It Does

The script has a list of recipes to execute inside (at line 33). These recipes are parsed for a list of parents, and all parent recipes necessary for executing these are then copied into the RECIPE_REPO_DIR from the AutoPkg preferences plist. This section is where you’ll want to put in the recipes that you want to run.

Each recipe in the list is then run in sequence, and catalogs are made each time. This allows each recipe to create a full working git commit that can be added to the Munki git repo without requiring any other intervention (obviously into a testing catalog only, unless you shout “YOLO” first).

Each recipe saves a report plist. This plist is parsed after each autopkg run to determine if any Munki imports were made, or if any recipes failed. The function create_task() is called to send the actual notification.

If any Munki imports were made, the script will automatically change directory to the Munki repo, and create a git feature branch for that update – named after the item and the version that was imported. The changes that were made (the package, the pkginfo, and the changes to the catalogs) are put into a git commit. Finally, the current branch is switched back to the Master branch, so that each commit is standalone and not dependent on other commits to land in sequence.
NOTE: the commits are NOT automatically pushed to git. Manual intervention is still necessary to push the commit to a git repo, as Facebook has a different internal workflow for doing this. An enterprising Python coder could easily add that functionality in, if so desired.

Execution & Automation

At this point, executing the script is simple. However, in most contexts, some automation may be desired. A straightforward launch daemon to run this script nightly could be used:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.facebook.CPE.autopkg</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/autopkg_runner.py</string>
</array>
<key>StartCalendarInterval</key>
<array>
<dict>
<key>Hour</key>
<integer>0</integer>
<key>Minute</key>
<integer>0</integer>
</dict>
</array>
<key>StandardOutPath</key>
<string>/var/log/autopkg.log</string>
<key>StandardErrorPath</key>
<string>/var/log/autopkg_err.log</string>
</dict>
</plist>

Some Caveats on Automation

Automation is great, and I’m a big fan of it. However, with any automated system, it’s important to fully understand the implications of each workflow.

With this particular workflow, there’s a specific issue that might arise based on timing. Since each item imported into Munki via AutoPkg is a separate feature branch, that means that the catalog technically hasn’t changed when you run the .munki recipe against the Master branch. If you run this recipe twice in a row, AutoPkg will try to re-import the packages again, because the Master branch hasn’t incorporated your changes yet.

In other words, you probably won’t want to run this script until your git commits are pushed into Master. This could be a potential timing issue if you are running this script on a constant time schedule and don’t get an opportunity to push the changes into master before the next iteration.

I Feel Powerful Today, Give Me More

If you are seeking even more automation (and feel up to doing some Python), you could add in a git push to make these changes happen right away. If you are only adding items to a testing catalog with limited and known distribution, this may be a reasonably safe way to keep track of all Munki changes in source control without requiring human intervention.

Such a change would be easy to implement, since there’s already a helper function to run git commands – git_run(). Here’s some sample code that could incorporate a git push, which involves making some minor changes to the end of create_commit():


def create_commit(imported_item):
    '''Creates a new feature branch, commits the changes,
    switches back to master'''
    # print "Changing location to %s" % autopkglib.get_pref('MUNKI_REPO')
    os.chdir(autopkglib.get_pref('MUNKI_REPO'))
    # Now, we need to create a feature branch
    print "Creating feature branch."
    branch = '%s-%s' % (str(imported_item['name']),
                        str(imported_item["version"]))
    print change_feature_branch(branch)
    # Now add all items to git staging
    print "Adding items..."
    gitaddcmd = ['add', '--all']
    gitaddcmd.append(autopkglib.get_pref("MUNKI_REPO"))
    print git_run(gitaddcmd)
    # Create the commit
    print "Creating commit..."
    gitcommitcmd = ['commit', '-m']
    message = "Updating %s to version %s" % (str(imported_item['name']),
                                             str(imported_item["version"]))
    gitcommitcmd.append(message)
    print git_run(gitcommitcmd)
    # Switch back to master
    print change_feature_branch('master')
    # Merge into master first
    gitmergecmd = ['merge', branch]
    print git_run(gitmergecmd)
    # Now push to remote master
    gitpushcmd = ['push', 'origin', 'master']
    print git_run(gitpushcmd)

Conclusions

Ultimately, the goal here is to remove manual work from a repetitive process, without giving up any control or the ability to isolate changes. Incorporating Munki and AutoPkg into source control is a very strong way of adding safety, sanity, and accountability to your Mac infrastructure. Although this blog post bases everything around git, you could adapt a similar workflow to Mercurial, SVN, etc.

The full take-away from this is to be mindful of the state of your data at all times. With source control, it's easier to manage multiple people working on your repo, and it's (relatively) easy to fix a mistake before it becomes a catastrophe. Source control has the added benefit of acting as an ersatz backup: it becomes much easier to reconstitute your repo in case of disaster, because you have a record of what the state of the repo was at any given point in its history.

Generating PBKDF2 Password Hashes In Python, Not Ruby

Chef offers a great many useful features, including the ability to manage and create user accounts. The password for a local user account can be specified either in clear text or as a password hash.

According to the documentation linked above, generating an appropriate password hash for 10.8+ requires the use of a specific Ruby function:

OpenSSL::PKCS5::pbkdf2_hmac(
  password,
  salt,
  iterations,
  128,
  OpenSSL::Digest::SHA512.new
)

However, when trying to generate such a hash using this tool on 10.10.5, I discovered a problem:

irb(main):026:0> OpenSSL::PKCS5::pbkdf2_hmac(
irb(main):027:1* password,
irb(main):028:1* salt,
irb(main):029:1* iterations,
irb(main):030:1* 128,
irb(main):031:1* OpenSSL::Digest::SHA512.new
irb(main):032:1> )
NotImplementedError: pbkdf2_hmac() function is unimplemented on this machine
from (irb):26:in `pbkdf2_hmac'
from (irb):26
from /usr/bin/irb:12:in `<main>'

Well, that's not very nice.

The issue is that the version of OpenSSL on OS X for the last several years is still 0.9.8zg. That version simply doesn't have an implementation of the pbkdf2_hmac() that Ruby wants to use. However, Python does, thanks to hashlib.

To recreate the same process in Python that the Chef documentation recommends for generating a 10.8+ password hash, use the following steps:

import hashlib
import binascii
import os

password = b'password'
salt = os.urandom(32)               # 32 random bytes
chef_salt = binascii.hexlify(salt)  # Chef wants the salt as hex
iterations = 25000

dk = hashlib.pbkdf2_hmac('sha512', password, salt, iterations, 128)
chef_password_hash = binascii.hexlify(dk)  # Chef wants the hash as hex, too

Let's break down what happened there. First, we set the password to our password string. In Python 2, the b prefix on the string literal doesn't really do anything (a plain str is already bytes); in Python 3 it matters, because pbkdf2_hmac() expects bytes.

The salt is a random 32-byte string. In Python, this comes out in binary form:

>>> salt = os.urandom(32)
>>> salt
'M\xde\xf6\x9fp\xd7$\x128\x9a\xc2!\xad\x1a\xe6\x9bE\xf8N\n\xd0\x18\xf6Ez\xf5@\xe0\xd1\r\xe6a'

Chef, however, requires this in a hexadecimal form:

>>> binascii.hexlify(salt)
'4ddef69f70d72412389ac221ad1ae69b45f84e0ad018f6457af540e0d10de661'

We use 25,000 iterations as a nice arbitrary number; anything above 10,000 is a sensible minimum. In my context this is a local user account on a service machine, so I'm not overly worried about hardening it further.

Once we have all the variables, we can use the actual pbkdf2_hmac() function. In the example above, we’re using the SHA-512 digest, with a derived key length of 128 as the Chef documentation suggests. Once again, the result of that command is binary data, and Chef requires a hexadecimal representation, so we turn to our trusty friend binascii.hexlify again.

This allows us to create the Chef user block we need:

user 'myuser' do
  gid 'staff'
  home '/Users/myuser'
  shell '/bin/bash'
  password 'e6a8a452c0a9edb7f80703657b91fae74191d3b83982687ca00b83741ad775410178542ffc176abe6db9dc46053bc7ed36c91c1f43f82ba1dedc12de929f81cca868e223a25f3f16728e9f92c02e4421e9f73d73edb5e23e5d0cf1784243e8c79307ee5e61b411c9f116c450af8112e519fa15cfb50f5e7a8c1e6a78fb7cbc0e'
  salt 'eb30e9c1946f086b4cd84679c1ee81235edea080b28b1ce4d39341794fad1ccd'
  iterations 25_000
  supports manage_home: true
  action :create
end

I’m told this same technique can also generate password hashes to be used with Puppet as well, although I haven’t tested it personally.

Using iOSConsole to Query Information About iOS Devices

When Apple discontinued iPhone Configuration Utility, we lost the ability to access certain kinds of information easily from iOS devices that had not yet completed the Setup Assistant. With iPCU, we could attach a device to a computer and then get information such as WiFi MAC Address and Bluetooth MAC Address (which are not written on the box, nor on the outside of the device). Anyone who requires any kind of MAC-address authentication for wireless (like me) will need this information before being able to activate iOS devices on our network.

Unfortunately, Apple Configurator doesn’t give this kind of information for devices that have not yet been Prepared. Xcode’s Devices window also doesn’t provide this information for attached devices. iPCU was the only tool that provided this info right off the bat.

Thankfully, Sam Marshall has provided the tools to get this information: a command-line utility called iOSConsole, built on his SDMMobileDevice framework. This does require compiling the project in Xcode, so there are a few steps that must be done first.

Major, major credit to Mike Lynn for his instructions on how to build this project – I wouldn’t have been able to do this without his help.

Obtain the Project Files

Update: Sam Marshall has provided a pre-built version of iOSConsole that is already signed, so you can simply download the release and skip to the “Using iOSConsole” section below. Thanks, Sam!

  1. Go to https://github.com/samdmarshall/SDMMobileDevice and download the Master as a zip.
  2. Go to https://github.com/samdmarshall/Core-Lib/tree/62b93fa94fbfde421ea8bb1513f5e935191e755d, which is a specific commit in time, and download the Master as a zip.
  3. Extract both archives.
  4. Place Core-Lib-62b93fa94fbfde421ea8bb1513f5e935191e755d/Core into SDMMobileDevice-master/Core/. The end result should look like SDMMobileDevice-master/Core/Core/.

Alternatively, you can do it via git:
git clone https://github.com/samdmarshall/SDMMobileDevice.git
cd SDMMobileDevice;
git submodule update --init --recursive

Build the Project

  1. Open SDMMobileDevice.xcworkspace.
  2. Click on the scheme (to the right of the grey Stop square button) that currently says “reveal_loader” and change it to “iOSConsole”.
  3. Click on “iOSConsole” in the left sidebar.
  4. Go to the Product menu -> Scheme -> Edit Scheme.
  5. Change Build Configuration to “Release” (it defaults to Debug).
  6. With “iOSConsole” highlighted in the left sidebar, you should see the Build Settings menu.
  7. Scroll down to the Code Signing section and make sure “Don’t code sign” is selected.
  8. Go to the Product menu -> Build for -> Running.
  9. The build should succeed after compiling.
  10. Click the disclosure triangle underneath “iOSConsole” in the left sidebar.
  11. Click the disclosure triangle underneath “Products” under “iOSConsole”.
  12. Right-click “iOSConsole” under “Products” and choose “Show in Finder”.
  13. This will open up the “Release” folder containing the compiled iOSConsole binary.
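
If you'd rather not click through Xcode, the same Release build can most likely be produced from the command line with xcodebuild. This is an untested sketch based on the workspace, scheme, and settings from the steps above:

xcodebuild -workspace SDMMobileDevice.xcworkspace \
  -scheme iOSConsole \
  -configuration Release \
  CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO \
  build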

Using iOSConsole

In Terminal, cd into that Release folder (type cd followed by a space, then drag the folder from the Finder into the Terminal window and press Return).

./iOSConsole --help to see available commands:

-h [service|query] : list available services or queries
-l,--list : list attached devices
-d,--device [UDID] : specify a device
-s,--attach [service] : attach to [service]
-q,--query <domain>=<key> : query value for <key> in <domain>, specify 'null' for global domain
-a,--apps : display installed apps
-i,--info : display info of a device
-r,--run [bundle id] : run an application with specified [bundle id]
-p,--diag [sleep|reboot|shutdown] : perform diag power operations on a device
-x,--develop : setup device for development
-t,--install [.app path] : install specificed .app to a device
-c,--profile [.mobileconfig path] : install specificed .mobileconfig to a device

./iOSConsole -h query to see available information domains and keys to query

./iOSConsole --list (or ./iOSConsole -l) to list all attached iOS devices and get device identifiers for each device. You’ll need these device identifiers to specify which devices you want to query information from, using the -d argument.

./iOSConsole -l
Currently connected devices: (1)
1) d194149c6d840dcbabdd638d5aa5cee3e4eeb0c7 : iPad (iPad Air) (USB)

To get Ethernet Address (aka WiFi MAC address):
./iOSConsole -d d194149c6d840dcbabdd638d5aa5cee3e4eeb0c7 -q null=EthernetAddress
Bluetooth address:
./iOSConsole -d d194149c6d840dcbabdd638d5aa5cee3e4eeb0c7 -q null=BluetoothAddress

Note that most values are unavailable until after activation takes place.

If you want to get all possible information from the device (this is quite a lot):
./iOSConsole -d d194149c6d840dcbabdd638d5aa5cee3e4eeb0c7 -q NULL=NULL

This tool allows you to get some hardware information from devices without having to complete the setup process first.

Suppressing Adobe CC 2015 Splash Screens

Adobe has released new versions of the CC apps, now called the “2015” versions. With the new Adobe CC apps comes new behavior.

Many of the new CC 2015 apps have new splash screens. Some of them use a new welcome screen called “Hello”, which is actually an interactive web page that requires a network connection to function. This has resulted in some problems for some users and under bad network conditions.

Even if it works fine, it’s an extra step for users who just want to get started. In some cases, I’d prefer to suppress this welcome screen if possible. Here’s how you can do it for the new CC 2015 products:

Centrally managed

Photoshop, Illustrator, InDesign:

These apps are helpfully documented by Adobe. The fix involves downloading a file called “prevent_project_hello_launching.jsx” and placing it into the startup scripts folders for the Adobe software.

Here’s a script to build a package to do this, once you download the file and extract it from the .zip:


#!/bin/bash
temppath="$(mktemp -d -t AdobePSIDIL)"
/bin/mkdir -p "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Adobe Photoshop/"
/bin/mkdir -p "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Adobe InDesign/"
/bin/mkdir -p "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Illustrator 2015/"
/bin/cp prevent_project_hello_launching.jsx "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Adobe Photoshop/"
/bin/cp prevent_project_hello_launching.jsx "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Adobe InDesign/"
/bin/cp prevent_project_hello_launching.jsx "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Illustrator 2015/"
/usr/bin/pkgbuild --root "$temppath" --identifier "org.sacredsf.adobe.psidil.welcome" --version 1.0 AdobePSIDIL-Welcome.pkg

After Effects:

After Effects uses the same “Startup Scripts CC” folder as the ones above, but requires a slightly different script.
Place this content into /Library/Application Support/Adobe/Startup Scripts CC/Adobe After Effects/suppress_welcome.jsx:

app.preferences.savePrefAsBool("General Section", "Show Welcome Screen", false) ;

Alternatively, you can also just run this script to build a package to do this (you do not need to save the above file):


#!/bin/bash
temppath="$(mktemp -d -t AdobeAfterEffects)"
/bin/mkdir -p "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Adobe After Effects"
echo "app.preferences.savePrefAsBool(\"General Section\", \"Show Welcome Screen\", false) ;" > "$temppath/Library/Application Support/Adobe/Startup Scripts CC/Adobe After Effects/after_effects.jsx"
/usr/bin/pkgbuild --root "$temppath" --identifier "org.sacredsf.adobe.aftereffects.welcome" --version 1.0 AdobeAfterEffects-Welcome.pkg

Lightroom 6

Lightroom 6, thankfully, uses the built-in preference system. All the settings are stored in ~/Library/Preferences/com.adobe.Lightroom6.plist, and can be changed using defaults.

Lightroom throws a barrage of messages, notices, and prompts at the user on first launch, but they all correspond to preference keys that can be changed or managed. Here are the preference keys:

firstLaunchHasRun30 = 1
noAutomaticallyCheckUpdates = 1
noSplashScreenOnStartup = 1
"com.adobe.ag.library_Showed_Walkthroughs" = 1
"Showed_Sync_Walkthrough" = 1
HighBeamParticipationNoticeHasShowed = 1
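
If you want to test these on a single account before building anything centralized, the values appear to be simple boolean/integer flags, so (assuming the types hold on your version) you can set them per user with defaults:

defaults write com.adobe.Lightroom6 firstLaunchHasRun30 -int 1
defaults write com.adobe.Lightroom6 noAutomaticallyCheckUpdates -int 1
defaults write com.adobe.Lightroom6 noSplashScreenOnStartup -int 1
defaults write com.adobe.Lightroom6 "com.adobe.ag.library_Showed_Walkthroughs" -int 1
defaults write com.adobe.Lightroom6 "Showed_Sync_Walkthrough" -int 1
defaults write com.adobe.Lightroom6 HighBeamParticipationNoticeHasShowed -int 1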

Here’s a profile that will configure those settings as well.

Per-user Preferences

Dreamweaver

Unfortunately, Dreamweaver doesn’t seem to have any nice mechanism for centrally managing its preferences via startup scripts or the like. The only way I’ve discovered to manage this is by copying a pre-fabbed “Adobe Dreamweaver CC 2015 Prefs” file into the preferences folder, which is located at:
~/Library/Preferences/Adobe Dreamweaver CC 2015 Prefs.

To apply this setting to all users, we need to deploy it to a central location (such as /Library/Preferences/Adobe/Adobe Dreamweaver CC 2015 Prefs, which is simply the non-user-specific equivalent of the preferences path), and then copy it into each user’s Library at login. We can use Outset to accomplish this easily.

These are the settings you need to provide in the Prefs file that will turn off the hello page on startup (including first run). Save this file as “Adobe Dreamweaver CC 2015 Prefs”:

[GENERAL PREFERENCES]
show hello page=FALSE

This script will be run by Outset to copy the preferences file from a centralized Library location into the user account’s correct Preferences location:


#!/bin/sh
/bin/cp -f "/Library/Preferences/Adobe Dreamweaver CC 2015 Prefs" "$HOME/Library/Preferences/Adobe Dreamweaver CC 2015 Prefs"

Download and save this script as “AdobeDWCCPrefs.sh”. Its final destination is /usr/local/outset/login-once/, so Outset will run it the first time a user logs in.

This script will build the package that will deposit all the necessary files in the right places, once you’ve downloaded and saved the two files above (the script, and Prefs file):


#!/bin/bash
temppath="$(mktemp -d -t AdobeDreamweaver)"
/bin/mkdir -p "$temppath/Library/Preferences/"
/bin/cp "Adobe Dreamweaver CC 2015 Prefs" "$temppath/Library/Preferences/Adobe Dreamweaver CC 2015 Prefs"
/bin/mkdir -p "$temppath/usr/local/outset/login-once"
/bin/chmod ugo+x AdobeDWCCPrefs.sh
/bin/cp AdobeDWCCPrefs.sh "$temppath/usr/local/outset/login-once/"
/bin/mkdir "scripts"
/bin/cp AdobeDWCCPrefs.sh "scripts/postinstall"
/usr/bin/pkgbuild --root "$temppath" --identifier "org.sacredsf.adobe.dreamweaver.welcome" --scripts "scripts" --version 1.0 AdobeDreamweaver-Welcome.pkg

Muse:

Muse has the same problem as Dreamweaver. You will need to drop a pre-configured “helloPrefStore.xml” into the Muse preferences folder, which is located at:
~/Library/Preferences/Adobe/Adobe Muse CC/2015.0/helloPrefStore.xml

As with Dreamweaver, we’ll deploy this file into a central location (such as /Library/Preferences/Adobe/Adobe Muse CC/2015.0/helloPrefStore.xml), and then copy it into each user’s Library at login with Outset.

Save this file as helloPrefStore.xml in a local directory:

<prop.list>
<prop.pair>
<key>helloPrefVersion</key>
<ustring>2015.0</ustring>
</prop.pair>
<prop.pair>
<key>helloUIDontShowAgain</key>
<false/>
</prop.pair>
</prop.list>

This script will be run by Outset to copy the preferences file from a centralized Library location into the user account’s correct Preferences location:


#!/bin/sh
/bin/cp -f "/Library/Preferences/Adobe/Adobe Muse CC/2015.0/helloPrefStore.xml" "$HOME/Library/Preferences/Adobe/Adobe Muse CC/2015.0/helloPrefStore.xml"

Download and save this script as “AdobeMuseCCHelloPrefStore.sh”. Its final destination is /usr/local/outset/login-once/, so Outset will run it the first time a user logs in.

This script will build the package that will deposit all the necessary files in the right places, once you’ve downloaded and saved the two files above (the script, and XML file):


#!/bin/bash
temppath="$(mktemp -d -t AdobeMuse)"
/bin/mkdir -p "$temppath/Library/Preferences/Adobe/Adobe Muse CC/2015.0/"
/bin/cp helloPrefStore.xml "$temppath/Library/Preferences/Adobe/Adobe Muse CC/2015.0/"
/bin/mkdir -p "$temppath/usr/local/outset/login-once"
/bin/chmod ugo+x AdobeMuseCCHelloPrefStore.sh
/bin/cp AdobeMuseCCHelloPrefStore.sh "$temppath/usr/local/outset/login-once/"
/bin/mkdir "scripts"
/bin/cp AdobeMuseCCHelloPrefStore.sh "scripts/postinstall"
/usr/bin/pkgbuild --root "$temppath" --identifier "org.sacredsf.adobe.muse.welcome" --scripts "scripts" --version 1.0 AdobeMuse-Welcome.pkg

Required welcome screens

Edge Animate

Unlike some of the other CC applications, Edge Animate CC 2015’s welcome screen is required to load the rest of the app content. The welcome screen serves as the entrypoint into creating or loading up a project (similar to iMovie or GarageBand’s introductory windows). The welcome screen will appear regardless of whether or not you have a project available or previously opened, and closing the welcome screen will quit the application.

Flash CC

Flash CC uses the welcome screen as part of its window templates, so it can’t be suppressed.

Prelude

Similar to Edge Animate, the welcome screen is a required way to access projects. By default, however, it will show this window on every startup, regardless of whether you are already loading a project.

You can disable that show-on-startup default by pre-providing a stripped-down copy of Adobe Prelude’s preferences file, located at ~/Library/Application Support/Adobe/Prelude/4.0/Adobe Prelude Prefs:


<?xml version="1.0" encoding="UTF-8"?>
<PremiereData Version="3">
<Preferences ObjectRef="1"/>
<Preferences ObjectID="1" ClassID="f06902ec-e637-4744-a586-c26202143e36" Version="30">
<Properties Version="1">
<MZ.Prefs.ShowQuickstartDialog>false</MZ.Prefs.ShowQuickstartDialog>
</Properties>
</Preferences>
</PremiereData>

You can use the same mechanism described above for Muse. Save the above content as “Adobe Prelude Prefs”.

Save this Outset script as “AdobePreludeCCPrefs.sh”:


#!/bin/sh
/bin/cp -f "/Library/Application Support/Adobe/Prelude/4.0/Adobe Prelude Prefs" "$HOME/Library/Application Support/Adobe/Prelude/4.0/Adobe Prelude Prefs"

Use this script to build a package for it:


#!/bin/bash
temppath="$(mktemp -d -t AdobePreludePro)"
/bin/mkdir -p "$temppath/Library/Application Support/Adobe/Prelude/4.0/"
/bin/cp Adobe\ Prelude\ Prefs "$temppath/Library/Application Support/Adobe/Prelude/4.0/Adobe Prelude Prefs"
/bin/mkdir -p "$temppath/usr/local/outset/login-once"
/bin/chmod ugo+x AdobePreludeCCPrefs.sh
/bin/cp AdobePreludeCCPrefs.sh "$temppath/usr/local/outset/login-once/"
/bin/mkdir "scripts"
/bin/cp AdobePreludeCCPrefs.sh "scripts/postinstall"
/usr/bin/pkgbuild --root "$temppath" --identifier "org.sacredsf.adobe.prelude.welcome" --scripts "scripts" --version 1.0 AdobePrelude-Welcome.pkg

Just remember that you cannot completely suppress the Prelude welcome screen, but you can prevent it from coming up by default in the future.

Premiere Pro

Premiere Pro displays a “Hello” welcome screen similar to the one Photoshop, InDesign, and Illustrator use. With Premiere Pro, as with Adobe Prelude, the splash screen will always display on startup if no default project has been selected or created; otherwise, it will open the last project. I have not been able to isolate a specific key that disables the welcome screen; if anyone finds one, please let me know in the comments!

Premiere Pro’s preferences are stored in ~/Documents/Adobe/Premiere Pro/9.0/Profile-/Adobe Premiere Pro Prefs.

Although I’m not sure what happens if you make changes here, it also lists a “SystemPrefPath” as /Library/Application Support/Adobe/Adobe Premiere Pro Cc 2015. You may be able to centralize preferences there.

CC applications with no welcome screens:

  • Audition
  • Character Animator (Preview)
  • InCopy
  • Media Encoder
  • SpeedGrade