Choria Update

Recently at Config Management Camp I had many discussions about Orchestration, Playbooks and Choria, so I thought it’s time for another update on its status.

I am nearing version 1.0.0; there are a few things still to deal with, but it’s getting close. Foremost I wanted to give the project its own space on all the various locations like GitHub, the Forge, etc.

Inevitably this means getting a logo. It’s been a bit of a slog, but after working through loads of feedback on Twitter and offers of assistance from various companies I decided to go to a private designer called Isaac Durazo, and the outcome can be seen below:


[Choria logo]

The process of getting the logo was quite interesting and I am really pleased with the outcome; I’ll blog about that separately.

Other than the logo, the project now has its own GitHub organisation at https://github.com/choria-io and I have moved all the Forge modules to their own space as well: https://forge.puppet.com/choria.

There are various other places the logo shows up, like in the Slack notifications and so forth.

On the project front there are a few improvements:

  • There is now a registration plugin that records a bunch of internal stats on disk; the aim is for them to be read by Collectd and Sensu
  • A new Auditing plugin that emits JSON structured data
  • Several new Data Stores for Playbooks – files, environment.
  • Bug fixes on Windows
  • All the modules, plugins etc have moved to the Choria Forge and GitHub
  • Quite extensive documentation site updates including branding with the logo and logo colors.

There are now very few things left to do to get 1.0.0 out, but I guess another release or two will be done before then.

So from now on, to update to coming versions you need to use the choria/mcollective_choria module, which will pull in all its dependencies from the Choria project rather than from my own Forge space.

Still no progress on moving the actual MCollective project forward, but I’ve discussed a way to deal with forking the various projects that seems to work for what I want to achieve. In reality I’ll only have time to do that in a couple of months, so hopefully something positive will happen in the meantime.

Head over to Choria.io to take a look.

Choria Playbooks – Data Sources

About a month ago I blogged about Choria Playbooks – a way to write a series of actions like MCollective, Shell, Slack, Web Hooks and others – contained within a YAML script with inputs, node sets and more.

Since then I’ve added quite a few tweaks, features and docs; it’s well worth a visit to choria.io to check it out.

Today I want to blog about a major new integration I added to them and a major step towards version 1 for Choria.

Overview


In the context of a playbook – or even a script calling out to other systems – there are many reasons to have a Data Source. In the context of a playbook designed to manage distributed systems the Data Source has some special needs, needs that tools like Consul and etcd fulfil specifically.

So today I released version 0.0.20 of Choria that includes a Memory and a Consul Data Source; below I will show how these integrate into the Playbooks.

I think using a distributed data store is important in this context, rather than expecting to pass variables around from the Playbook as you would on the CLI, since the business of dealing with consistency, locking and so forth is then handled for you. I also can’t know all the systems you wish to interact with, but if those systems can speak to Consul you can prepare an execution environment for them.

For those who don’t agree, there is a memory Data Store that exists within the memory of the Playbook. Your playbook should remain the same apart from declaring the Data Source.
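
Declaring the memory store uses the same data_stores syntax shown in the next section; a minimal sketch, assuming the type name is simply memory:

data_stores:
  pb_data:
    type: memory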

Using Consul


Defining a Data Source


Like with Node Sets you can have multiple Data Sources and they are identified by name:

data_stores:
  pb_data:
    type: consul
    timeout: 360
    ttl: 20

This creates a Consul Data Source called pb_data; you need to have a local Consul Agent already set up. I’ll cover the timeout and ttl a bit later.

Playbook Locks


You can create locks in Consul, and by their nature they are distributed across the Consul network. This means you can ensure a playbook is only executed once per Consul DC, or – by giving a custom lock name – serialise any group of related playbooks or even other systems that can take Consul locks.

---
locks:
  - pb_data
  - pb_data/custom_lock

This will create 2 locks in the pb_data Data Store – one called custom_lock and another called choria/locks/playbook/pb_name where pb_name is the name from the metadata.

It will try to acquire a lock for up to timeout seconds – 360 here; if it can’t, the playbook run fails. The associated session has a TTL of 20 seconds and Choria will renew the session around 5 seconds before the TTL expires.

The TTL ensures that should the playbook crash, the machine die or whatever, the lock will release after 20 seconds.

Binding Variables


Playbooks already have a way to bind CLI arguments to variables called Inputs. Data Sources extend inputs with extra capabilities.

We now have two types of Input. A static input is one where you give the data on the CLI and the data stays static for the life of the playbook. A dynamic input is one bound against a Data Source, and its value is fetched every time you reference the variable.

inputs:
  cluster:
    description: "Cluster to deploy"
    type: "String"
    required: true
    data: "pb_data/choria/kv/cluster"
    default: "alpha"

Here we have an input called cluster bound to the choria/kv/cluster key in Consul. This starts life as a static input, and if you give this value on the CLI it will never use the Data Source.

If however you do not specify a CLI value it becomes dynamic and will consult Consul. If there’s no such key in Consul the default is used, but the input remains dynamic and will continue to consult Consul on every access.

You can force an input to be dynamic, which means it will not show up on the CLI and will only speak to a Data Source, using the dynamic: true property on the Input.
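
A sketch of what such a purely dynamic input might look like – the same cluster input as before, just forced dynamic:

inputs:
  cluster:
    description: "Cluster to deploy"
    type: "String"
    data: "pb_data/choria/kv/cluster"
    default: "alpha"
    dynamic: true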

Writing and Deleting Data


Of course if you can read data you should be able to write and delete it, so I’ve added tasks to let you do this:

locks:
  - pb_data
 
inputs:
  cluster:
    description: "Cluster to deploy"
    type: "String"
    required: true
    data: "pb_data/choria/kv/cluster"
    default: "alpha"
    validation: ":shellsafe"
 
hooks:
  pre_book:
    - data:
        action: "delete"
        key: "pb_data/choria/kv/cluster"
 
tasks:
  - shell:
      description: Deploy to cluster {{{ inputs.cluster }}}
      command: /path/to/script --cluster {{{ inputs.cluster }}}
 
  - data:
      action: "write"
      value: "bravo"
      key: "pb_data/choria/kv/cluster"
 
  - shell:
      description: Deploy to cluster {{{ inputs.cluster }}}
      command: /path/to/script --cluster {{{ inputs.cluster }}}

Here I have a pre_book task list that ensures there is no stale data; the lock ensures no other Playbook will mess around with the data while we run.

I then run a shell command that uses the cluster input; with nothing there it uses the default and so deploys cluster alpha. It then writes a new value and deploys cluster bravo.

This is a bit verbose; I hope to add the ability to have arbitrarily named task lists that you can branch to. Then you can have one deploy task list and use the main task list to set up variables for it and call it repeatedly.

Conclusion


That’s quite a mouthful, but the possibilities are quite amazing. On one hand we have a really versatile data store in the Playbooks, but more significantly we have expanded the integration possibilities by quite a bit: you can now have other systems manage the environment your playbooks run in.

I will soon add task level locks and of course Node Set integration.

For now only Consul and Memory are supported; I can add others if there is demand.

An update on my Choria project

Some time ago I mentioned that I am working on improving the MCollective Deployment story.

I started a project called Choria that aimed to massively improve the deployment UX and yield a secure and stable MCollective setup for those using Puppet 4.

The aim is to make installation quick and secure. Towards that, it seems a common end to end install from scratch by someone new to the project, using a clustered NATS setup, can take less than an hour – this is a huge improvement.

Further I’ve had really good user feedback, especially around NATS. One user reports 2000 nodes on a single NATS server consuming 300MB RAM and it being very performant, much more so than the previous setup.

It’s been a few months; this is what’s changed:

  • The module now supports every OS AIO Puppet supports, including Windows.
  • Documentation is available on choria.io; installation should take about an hour at most.
  • The PQL language can now be used to do completely custom infrastructure discovery against PuppetDB.
  • Many bugs have been fixed, many things have been streamlined and made easier to get going with better defaults.
  • Event Machine is not needed anymore.
  • A number of POC projects have been done to flesh out next steps, things like a very capable playbook system and a revisit to the generic RPC client, these are on GitHub issues.

Meanwhile I am still trying to get to a point where I can take over maintenance of MCollective again. At first Puppet Inc was very open to the idea, but I am afraid it’s been 7 months and it’s getting nowhere – calls for cooperation are just being ignored. Unfortunately I think we’re getting pretty close to a fork being the only productive next step.

For now though, I’d say the Choria plugin set is production ready and stable; anyone using Puppet 4 AIO should consider using it – it’s about the only working way to get MCollective on FOSS Puppet now due to the state of the other installation options.

Puppet Query Language

For a few releases now PuppetDB has had a new query language called Puppet Query Language, or PQL for short. It’s quite interesting, and I thought a quick post might make a few more people aware of it.

Overview


To use it you need a recent PuppetDB, and as this is quite a new feature you really want the latest PuppetDB. There is nothing to enable; when you install it the feature is already active. The feature is marked as experimental, so some things will change as it moves to production.

PQL Queries look more or less like this:

nodes { certname ~ 'devco' }

This is your basic query; it will return a bunch of nodes, something like:

[
  {
    "deactivated": null,
    "latest_report_hash": null,
    "facts_environment": "production",
    "cached_catalog_status": null,
    "report_environment": null,
    "latest_report_corrective_change": null,
    "catalog_environment": "production",
    "facts_timestamp": "2016-11-01T06:42:15.135Z",
    "latest_report_noop": null,
    "expired": null,
    "latest_report_noop_pending": null,
    "report_timestamp": null,
    "certname": "devco.net",
    "catalog_timestamp": "2016-11-01T06:42:16.971Z",
    "latest_report_status": null
  }
]

There are a bunch of in-built relationships between, say, a node and its facts and inventory, so queries can get quite complex:

inventory[certname] { 
  facts.osfamily = "RedHat" and
  facts.dc = "linodeldn" and
  resources { 
    type = "Package" and
    title = "java" and
    parameters.ensure = "1.7.0" 
  } 
}

This finds all the RedHat machines in a particular DC with Java 1.7.0 on them. Be aware this will also find machines that are deactivated.
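
To skip those you should be able to filter on the deactivated field – a sketch, assuming the is null operator works as I expect here:

nodes { certname ~ 'devco' and deactivated is null }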

I won’t go into huge detail on the queries; the docs are pretty good – examples, overview.

So this is quite interesting: it finally gives us a reasonably usable DB to do the queries that mcollective discovery used to be used for. Of course it’s not a live view, nor does it have any clue what the machines are up to, but as a cached data source for discovery this is interesting.

Using


CLI


You can of course query this stuff on the CLI, and I suggest you familiarise yourself with jq.

First you’ll have to set up your account:

{
  "puppetdb": {
    "server_urls": "https://puppet:8081",
    "cacert": "/home/rip/.puppetlabs/etc/puppet/ssl/certs/ca.pem",
    "cert": "/home/rip/.puppetlabs/etc/puppet/ssl/certs/rip.mcollective.pem",
    "key": "/home/rip/.puppetlabs/etc/puppet/ssl/private_keys/rip.mcollective.pem"
  }
}

This goes in ~/.puppetlabs/client-tools/puppetdb.conf, which is a bit senseless to me since there clearly is a standard place for config files, but alas.

Once you have this and you installed the puppet-client-tools package you can do queries like:

$ puppet query "nodes { certname ~ 'devco.net' }"
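
Combined with jq you can pull out just the bits you need – for example just the certnames (a quick sketch):

$ puppet query "nodes { certname ~ 'devco.net' }" | jq -r '.[].certname'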

Puppet Code


Your master will have the puppetdb-termini package on it, and this brings with it Puppet functions to query PuppetDB, so you do not need to use a 3rd party module anymore:

$nodes = puppetdb_query("nodes { certname ~ 'devco' }")
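
The function returns the same data structures as the raw API, so you can work with the result directly in the DSL. A small sketch – extracting just the certnames from the returned hashes:

$certnames = $nodes.map |$node| { $node["certname"] }
notice("Matched nodes: ${certnames}")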

Puppet Job


At the recent PuppetConf Puppet announced that their enterprise tool puppet job supports using this as discovery; if I remember right it’s something like:

$ puppet job run -q 'nodes { certname ~ "devco" }'

MCollective


At PuppetConf I integrated this into MCollective and my Choria tool; both of these are still due a release (MCO-776, choria #61):

Run Puppet on all the nodes matched by the query:

$ puppet query "nodes { certname ~ 'devco.net' }"|mco rpc puppet runonce

The above is a bit limited in that the apps in question have to specifically support this kind of STDIN discovery – the rpc app does.

I then also added support to the Choria CLI:

$ mco puppet runonce -I "pql:nodes[certname] { certname ~ 'devco.net' }"

These queries are a bit special in that they must return just the certname as here; I’ll document this up. The reason for this is that they are actually part of a much larger query done in the Choria discovery system (that uses PQL internally and is a good intro on how to query this API from code).

Here’s an example of a complex query – as used by Choria internally – that limits nodes to ones in a particular collective, that match the supplied PQL query, and that have mcollective installed and running. You can see you can nest and combine queries into quite complex ones:

nodes[certname, deactivated] { 
  # finds nodes in the chosen inventory via a fact
  (certname in inventory[certname] { 
    facts.mcollective.server.collectives.match("\d+") = "mcollective" 
  }) and 
 
  # does the supplied PQL query
  (certname in nodes[certname] {
    certname ~ 'devco.net'
  }) and
 
  # limited to machines with mcollective installed
  (resources {
    type = "Class" and title = "Mcollective"
  }) and 
 
  # who also have the service started
  (resources {
    type = "Class" and title = "Mcollective::Service"
  }) 
}

Conclusion


This is really handy and I hope more people will become familiar with it. I don’t think it quite rolls off the fingers easily – but neither does SQL or any other similar system, so par for the course. What is amazing is that we can get nearer to having a common language across CLI, Code, Web UIs and 3rd party tools for describing queries of our estate, so this is a major win.

Puppet 4 Sensitive Data Types

You often need to handle sensitive data in manifests when using Puppet: private keys, passwords, etc. There has not been a native way to deal with these, and so a cottage industry of community tools has sprung up.

To deal with data at rest various Hiera backends like the popular hiera-eyaml exist; to deal with data on nodes a rather interesting solution called binford2k-node_encrypt exists. There are many more, but less is more – these are good and widely used.

The problem is data leaks all over the show in Puppet – diffs, logs, reports, catalogs, PuppetDB – it’s not uncommon for this trusted data to show up all over the place. Dealing with this is a huge-scope problem that will require adjustments to every component – Puppet, Hiera / Lookup, PuppetDB, etc.

But you have to start somewhere, and Puppet is the right place; let’s look at the first step.

Sensitive[T]


Puppet 4.6.0 introduced – and 4.6.1 fixed – a new data type that decorates other data, telling the system it’s sensitive. This data cannot accidentally be logged or leaked, since the type will only return a string indicating it’s redacted.

It’s important to note this is step one of a many-step process towards having a unified, blessed way of dealing with Sensitive data all over. But let’s take a quick look. The official specification for this feature lives here.

In the most basic case we can see how to make sensitive data and how it looks when logged or leaked by accident:

$secret = Sensitive("My Socrates Note")
notice($secret)

This prints out the following:

Notice: Scope(Class[main]): Sensitive [value redacted]

To unwrap this and gain access to the real original data:

$secret = Sensitive(hiera("secret"))
 
$unwrapped = $secret.unwrap |$sensitive| { $sensitive }
notice("Unwrapped: ${unwrapped}")
 
$secret.unwrap |$sensitive| { notice("Lambda: ${sensitive}") }

Here you can see how to assign it unwrapped to a new variable or just use it in a block. It’s important to note you should never print these values like this, and ideally you’d only ever use them inside a lambda if you have to use them in .pp code. Puppet has no concept of private variables, so this $unwrapped variable could be accessed from outside of your classes; a lambda scope is temporary and private.

The output of above is:

Notice: Scope(Class[main]): Unwrapped: Too Many Secrets
Notice: Scope(Class[main]): Lambda: Too Many Secrets

So these are the basic operations; you can now of course pass the data around classes.

class mysql (
  Sensitive[String] $root_pass
) {
  # somehow set the password
}
 
class{"mysql":
  root_pass => Sensitive(hiera("mysql_root"))
}

Note here you can see the class specifically wants a String that is sensitive – and not, let’s say, a Number – using the Sensitive[String] markup. And if you attempted to pass Sensitive(1) into it you’d get a type mismatch error.
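
For example this would be rejected at compile time – a hypothetical snippet:

class{"mysql":
  root_pass => Sensitive(1) # fails: Sensitive[Integer] does not match Sensitive[String]
}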

Conclusion


So this appears to be quite handy. You can see down the line that lookup() might have an eyaml-like system and emit Sensitive data directly, and perhaps some providers and types will support this. But as I said it’s early days, so I doubt this is actually useful yet.

I mentioned how other systems like PuppetDB and so forth also need updates before this is useful, and indeed today PuppetDB is oblivious to these types and stores the real values:

$ puppet query 'resources[parameters] { type = "Class" and title = "Test" }'
...
  {
    "parameters": {
      "string": "My Socrates Note"
    }
  },
...

So this really does not yet serve any purpose, but as a step one it’s an interesting look at what will come.

Interacting with the Puppet CA from Ruby

I recently ran into a known bug with the puppet certificate generate command that made it useless to me for creating user certificates.

So I had to do the CSR dance from Ruby myself to work around it. It’s quite simple actually, but as with all things in OpenSSL it’s weird and wonderful.

Since the Puppet Agent is written in Ruby and it can do this, there’s an HTTP API somewhere. These are documented reasonably well – see /puppet-ca/v1/certificate_request/ and /puppet-ca/v1/certificate/. Not covered is how to make the CSRs and such.

First I have a little helper to make the HTTP client:

def ca_path; "/home/rip/.puppetlabs/etc/puppet/ssl/certs/ca.pem";end
def cert_path; "/home/rip/.puppetlabs/etc/puppet/ssl/certs/rip.pem";end
def key_path; "/home/rip/.puppetlabs/etc/puppet/ssl/private_keys/rip.pem";end
def csr_path; "/home/rip/.puppetlabs/etc/puppet/ssl/certificate_requests/rip.pem";end
def has_cert?; File.exist?(cert_path);end
def has_ca?; File.exist?(ca_path);end
def already_requested?;!has_cert? && File.exist?(key_path);end
 
def http
  http = Net::HTTP.new(@ca, 8140)
  http.use_ssl = true
 
  if has_ca?
    http.ca_file = ca_path
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  else
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  end
 
  http
end

This is an HTTPS client that does full verification of the remote host if we have a CA. There’s a small chicken-and-egg problem where you have to ask the CA for its own certificate over an unverified connection. If this is a problem for you, you need to arrange to put the CA on the machine in a safe manner.

Lets fetch the CA:

def fetch_ca
  return true if has_ca?
 
  req = Net::HTTP::Get.new("/puppet-ca/v1/certificate/ca", "Content-Type" => "text/plain")
  resp, _ = http.request(req)
 
  if resp.code == "200"
    File.open(ca_path, "w", 0644) {|f| f.write(resp.body)}
    puts("Saved CA certificate to %s" % ca_path)
  else
    abort("Failed to fetch CA from %s: %s: %s" % [@ca, resp.code, resp.message])
  end
 
  has_ca?
end

At this point we have the CA and saved it, future requests will be verified against this CA. If you put the CA there using some other means this will do nothing.

Now we need to start making our CSR, first we have to make a private key, this is a 4096 bit key saved in pem format:

def write_key
  key = OpenSSL::PKey::RSA.new(4096)
  File.open(key_path, "w", 0640) {|f| f.write(key.to_pem)}
  key
end

And the CSR needs to be made using this key. Puppet CSRs are quite simple with few fields filled in – I can’t see why you couldn’t fill in more fields, and of course it now supports extensions. I didn’t add any of those here, just an OU:

def write_csr(key)
  csr = OpenSSL::X509::Request.new
  csr.version = 0
  csr.public_key = key.public_key
  csr.subject = OpenSSL::X509::Name.new(
    [
      ["CN", @certname, OpenSSL::ASN1::UTF8STRING],
      ["OU", "my org", OpenSSL::ASN1::UTF8STRING]
    ]
  )
  csr.sign(key, OpenSSL::Digest::SHA1.new)
 
  File.open(csr_path, "w", 0644) {|f| f.write(csr.to_pem)}
 
  csr.to_pem
end

Let’s combine these to make the key and CSR and send the request to the Puppet CA, this request is verified using the CA:

def request_cert
  req = Net::HTTP::Put.new("/puppet-ca/v1/certificate_request/%s?environment=production" % @certname, "Content-Type" => "text/plain")
  req.body = write_csr(write_key)
  resp, _ = http.request(req)
 
  if resp.code == "200"
    puts("Requested certificate %s from %s" % [@certname, @ca])
  else
    abort("Failed to request certificate from %s: %s: %s: %s" % [@ca, resp.code, resp.message, resp.body])
  end
end

You’ll now have to sign the cert on your Puppet CA as normal, or use autosign, nothing new here.
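
On the CA that is the normal signing step, something like this – with the certname you requested (a hypothetical name here):

$ puppet cert sign rip.mcollective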

And finally you can attempt to fetch the cert. This method is designed to return false if the cert is not yet ready on the master – i.e. not signed yet.

def attempt_fetch_cert
  return true if has_cert?
 
  req = Net::HTTP::Get.new("/puppet-ca/v1/certificate/%s" % @certname, "Content-Type" => "text/plain")
  resp, _ = http.request(req)
 
  if resp.code == "200"
    File.open(cert_path, "w", 0644) {|f| f.write(resp.body)}
    puts("Saved certificate to %s" % cert_path)
  end
 
  has_cert?
end

Pulling this all together you have some code to make the key and CSR, cache the CA and request that a cert is signed; it will then wait for the cert like Puppet does until things are signed.

def main
  abort("Already have a certificate '%s', cannot continue" % @certname) if has_cert?
 
  make_ssl_dirs
  fetch_ca
 
  if already_requested?
    puts("Certificate %s has already been requested, attempting to retrieve it" % @certname)
  else
    puts("Requesting certificate for '%s'" % @certname)
    request_cert
  end
 
  puts("Waiting up to 120 seconds for it to be signed")
  puts
 
  12.times do |time|
    print "Attempting to download certificate %s: %d / 12r" % [@certname, time]
 
    break if attempt_fetch_cert
 
    sleep 10
  end
 
  abort("Could not fetch the certificate after 120 seconds") unless has_cert?
 
  puts("Certificate %s has been stored in %s" % [@certname, ssl_dir])
end
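
The make_ssl_dirs and ssl_dir helpers used above are not shown; here is a minimal sketch of what they might look like, assuming the per-user Puppet SSL layout from the earlier paths:

def ssl_dir; "/home/rip/.puppetlabs/etc/puppet/ssl"; end

def make_ssl_dirs
  require "fileutils"

  # create the directories the other helpers read and write
  ["certs", "private_keys", "certificate_requests"].each do |dir|
    FileUtils.mkdir_p(File.join(ssl_dir, dir), :mode => 0755)
  end
end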

Hiera Node Classifier 0.7

A while ago I released a Puppet 4 Hiera based node classifier to explore what is next for hiera_include(). This had the major drawback that you couldn’t set an environment with it like you can with a real ENC, since Puppet just doesn’t have that feature.

I’ve released an update to the classifier that now includes a small real ENC that takes care of setting the environment based on certname and then boots up the classifier on the node.

Usage


ENCs tend to know only about the certname; you could imagine fetching the most recently seen facts from PuppetDB etc, but I do not really want to assume things about people’s infrastructure. So for now this sticks to supporting classification based on certname only.

It’s really pretty simple. Let’s assume you are wanting to classify node1.example.net: you just need to have a node1.example.net.yaml (or JSON) file somewhere in a path. Typically this is going to be in a directory environment somewhere, but it could of course also be a site wide hiera directory.

In it you put:

classification::environment: development

And this node will form part of that environment. Past that, everything in the previous post just applies, so you make rules or assign classes as normal, and while doing so you have full access to node facts.

The classifier now exposes some extra information to help you determine if the ENC is in use and based on what file it classified the node:

  • $classifier::enc_used – boolean that indicates if the ENC is in use
  • $classifier::enc_source – path to the data file that set the environment. undef when not found
  • $classifier::enc_environment – the environment the ENC is setting

It supports a default environment which you configure when configuring Puppet to use an ENC, as below.

Configuring Puppet


Configuring Puppet is pretty simple for this:

[main]
node_terminus = exec
external_nodes = /usr/local/bin/classifier_enc.rb --data-dir /etc/puppetlabs/code/hieradata --node-pattern nodes/%%.yaml

Apart from these you can pass --default development to default to that instead of production, and you can add --debug /tmp/enc.log to get a bunch of debug output.

The data-dir above is for your classic Hiera single data dir setup, but you can also use globs to support environment data, like --data-dir /etc/puppetlabs/code/environments/*/hieradata. It will then search the entire glob until it finds a match for the certname.
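
Putting the optional flags together, a full setup using environment data might look like this – the paths here are just examples:

[main]
node_terminus = exec
external_nodes = /usr/local/bin/classifier_enc.rb --data-dir /etc/puppetlabs/code/environments/*/hieradata --node-pattern nodes/%%.yaml --default development --debug /tmp/enc.log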

That’s really all there is to it; it produces a classification like this:

---
environment: production
classes:
  classifier:
    enc_used: true
    enc_source: /etc/puppetlabs/code/hieradata/node.example.yaml
    enc_environment: production

Conclusion


That’s really all there is to it. I think this might hit a high percentage of use cases and bring a key ability to the hiera classifiers. It’s a tad annoying that there is no way really to do better granularity than just per node here; I might come up with something else, but I don’t really want to go too deep down that hole.

In future I’ll look at adding a class to install the classifier into some path and configure Puppet; for now that’s up to the user. It’s shipped in the bin dir of the module.

A Puppet 4 Hiera Based Classifier

When I first wrote Hiera I included a simple little hack called hiera_include() that would do an Array lookup and include everything it found. I only included it because include at the time did not take Array arguments. In time this has become quite widely used, and many people do their node classification using just this and the built in hierarchical nature of Hiera.

I’ve always wanted to do better though – like maybe write an actual ENC that uses Hiera data keys on the provided certname? It seemed like the only real win would be being able to set the node environment from Hiera; I guess this might be valuable enough on its own.

Anyway, I think the ENC interface is really pretty bad and should be replaced by something better. So I’ve had the idea of a Hiera based classifier in my mind for years.

Some time ago Ben Ford made an interesting little hack project that used a set of rules to classify nodes, and this stuck in my mind as quite an interesting approach. I guess it’s a bit like the new PE node classifier.

Anyway, so I took this as a starting point and started working on a Hiera based classifier for Puppet 4 – and by that I mean the very very latest Puppet 4. It uses a bunch of the things I blogged about recently, and the end result is that the module is almost entirely built using the native Puppet 4 DSL.

Simple list-of-classes based Classification


So first let’s take a look at how this replaces/improves on the old hiera_include().

Not really much to be done I am afraid; it’s an array with some entries in it. It now uses the Knockout Prefix feature of Puppet Lookup that I blogged about before to allow you to exclude classes from nodes:

So we want to include the sysadmins and sensu classes on all nodes; stick this in your common tier:

# common.yaml
classifier::extra_classes:
 - sysadmins
 - sensu

Then you have some nodes that need some more classes:

# clients/acme.yaml
classifier::extra_classes:
 - acme_sysadmins

At this point it’s basically same old same old, but let’s see if we had some node that needed Nagios and not Sensu:

# nodes/example.net.yaml
classifier::extra_classes:
 - --sensu
 - nagios

Here we use the knockout prefix of -- to remove the sensu class and add the nagios one instead. That’s already a big win over the old hiera_include(), but to be fair this is just a result of the new Lookup features.

It really gets interesting later when you throw in some rules.

Rule Based Classification


The classifier is built around a set of Classifications, and these are made up of one or many rules per Classification which, if they match on a host, mean that Classification applies to the node. Classifications can include classes and create data.

Here’s a sample rule where I want to do some extra special handling of RedHat like machines, but I want to handle VMs differently from Physical machines.

# common.yaml
classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
    classes:
      - centos::vm
 
  RedHat:
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
    data:
      redhat_os: true
    classes:
      - centos::common

This shows 2 Classifications, one called “RedHat VMs” and one just “RedHat”. You can see the VMs one contains 2 rules and it sets match: all, so they both have to match.

The end result here is that all RedHat machines get centos::common and RedHat VMs also get centos::vm. Additionally 2 pieces of data will be created – a bit redundant in this example, but you get the idea.

Using the Classifier


So using the classifier in the basic sense is just like hiera_include():

node default {
  include classifier
}

This will process all the rules and include the resulting classes. It will also expose a bunch of information via this class; the most interesting is $classifier::data, which is a Hash of all the data that the rules emit. But you can also access the included classes via $classifier::classes and even the whole post processed classification structure in $classifier::classification. Some others are mentioned in the README.
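
As a small sketch of acting on that data, here is a node block using the redhat_vm key created by the earlier rule – purely illustrative:

node default {
  include classifier

  # redhat_vm comes from the "RedHat VMs" classification above
  if $classifier::data["redhat_vm"] {
    notice("This node was classified as a RedHat VM, classes: ${classifier::classes}")
  }
}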

You can do very impressive Hiera based overrides, here’s an example of adjusting a rule for a particular node:

# clients/acme.yaml
classifier::rules:
  RedHat VMs:
    classes:
      - some::other
    data:
      extra_data: true

This has the result that for this particular client additional data will be produced and additional classes will be included – but only on their RedHat VMs. You can even use the knockout feature here to really adjust the data and classes.

The classes get included automatically for you and if you set classifier::debug you’ll get a bunch of insight into how classification happens.

Hiera Inception


So at this point things are pretty neat, but I wanted to both see how the new Data Provider API looks and also see if I could expose my classifier back to Hiera.

Imagine I am making all these classifications, but with what I showed above it’s quite limited because it’s just creating data for the $classifier::data hash. What you really want is to create Hiera data and be able to influence Automatic Parameter Lookup.

So a rule like:

# clients/acme.yaml
classifier::rules:
  RedHat:
    data:
      centos::common::selinux: permissive

Here I am taking the earlier RedHat rule and setting centos::common::selinux: permissive; now you want this to be data that the Automatic Parameter Lookup system uses to set the selinux parameter of the centos::common class.

You can configure your Environment with this hiera.yaml:

# environments/production/hiera.yaml
---
version: 4
datadir: "hieradata"
hierarchy:
  - name: "%{trusted.certname}"
    backend: "yaml"
 
  - name: "classification data"
    backend: "classifier"
 
  # ... and the rest

Here I allow node specific YAML files to override the classifier and then have a new Data Provider called classifier that exposes the classification back to Hiera. Doing it this way is super important: the priority the classifier has on a site is not a single one-size-fits-all choice. Doing it this way means the site admins can decide where in their hierarchy classification sits so it best fits their workflows.

So this is where the inception reference comes in: you extract data from Hiera, process it using the Puppet DSL and expose it back to Hiera. At first thought this is a bit insane, but it works and it’s really nice. Basically this lets you take Hiera, something that is hierarchical in nature, and turn it into a rule based system – or a hybrid.

And you can even test it from the CLI:

% puppet lookup --compile --explain centos::common::selinux
Merge strategy first
  Data Binding "hiera"
    No such key: "centos::common::selinux"
  Data Provider "Hiera Data Provider, version 4"
    ConfigurationPath "environments/production/hiera.yaml"
    Merge strategy first
      Data Provider "%{trusted.certname}"
        Path "environments/production/hieradata/dev2.devco.net.yaml"
          Original path: "%{trusted.certname}"
          No such key: "centos::common::selinux"
      Data Provider "classification data"
        Found key: "centos::common::selinux" value: "permissive"
      Merged result: "permissive"
  Merged result: "permissive"

I hope to expose here which rule provided this data, like the other lookup explanations do.

Clearly this feature is a bit crazy, so consider this an exploration of what’s possible rather than a strong endorsement of this kind of thing :)

Implementation


Implementing this has been pretty interesting; I got to use a lot of the new Puppet 4 features. Like I mentioned, all the data processing, iteration and deriving of classes and data is done using the native Puppet DSL – take a look at the functions directory for example.

It also makes use of the new Type system and Type Aliases all over the place to create a strong schema for the incoming data that gets validated at all levels of the process. See the types directory.

The new Modules in Data feature is used to set lookup strategies so that there is no manual calling of lookup(); see the module data.

Writing a Data Provider, i.e. a Hiera Backend for the new lookup system, is pretty nice. I think the APIs around there are still maturing, so this is definitely bleeding edge stuff. You can see the bindings and data provider in the lib directory.

As such this module only really has a hope of working on Puppet 4.4.0 and newer, and I expect to use new features as they come along.

Conclusion


There’s a bunch more going on; check the module README. It’s been quite interesting to be able to really completely rethink how Hiera data is created and what a modern take on classification can achieve.

With this approach, if you’re really not too keen on the hierarchy, you can totally just use this as a rules based Hiera instead – that’s pretty interesting! I wonder what other strategies for creating data could be prototyped like this?

I realise this is very similar to the PE node classifier, but with some additional benefits: being exposed to Hiera via the Data Provider, being something you can commit to git, and being adjustable and overridable using the new Hiera features. I think it will appeal to a different kind of user, but yeah, it’s quite similar. Credit to Ben Ford for his original Ruby based implementation of this idea, which I took and iterated on. Regardless, the ‘like an iTunes smart list’ node classifier isn’t exactly a new idea and has been discussed for literally years :)

You can get the module on the forge as ripienaar/classifier and I’d greatly welcome feedback and ideas.

Puppet 4 Type Aliases

Back when I first took a look at Puppet 4 features I explored the new Data Types and said:

Additionally I cannot see myself using a Struct like above in the argument list – to which Henrik says they are looking to add a typedef thing to the language so you can give complex Structs a more convenient name and use that. This will help that a lot.

And since Puppet 4.4.0 this has now become a reality. So a quick post to look at that.

The Problem


I’ve been writing a Hiera based node classifier, both to scratch an itch and to have something fairly complex with which to explore the new features in Puppet 4.

The classifier takes a set of classification rules and produces classifications – classes to include and parameters – from there. Here’s a sample classification:

classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
      centos::vm::someprop: someval
    classes:
      - centos::vm

This is a classification rule with 2 rules that match machines running RedHat like operating systems and that are virtual. If both of these are true it will:

  • Include the class centos::vm
  • Create some data redhat_vm => true and centos::vm::someprop => someval

You can have an arbitrary number of classifications made up of an arbitrary number of rules. This data lives in Hiera, so you can have all sorts of merging, overriding and knock out fun with it.

The amazing thing is that since Puppet 4.4.0 there is now no Ruby code involved in doing what I said above – all the parsing, looping, evaluating of rules and building of data structures is done using functions written in the pure Puppet DSL.

There’s some Ruby there in the form of a custom backend for the new lookup based hiera system – but this is experimental, optional and a bit crazy.

Anyway, so here’s the problem: before Puppet 4.4.0 my main class had this in it:

class classifier (
  Hash[String,
    Struct[{
      match    => Enum["all", "any"],
      rules    => Array[
        Struct[{
          fact     => String,
          operator => Enum["==", "=~", ">", ">=", "<", "<="],
          value    => Data,
          invert   => Optional[Boolean]
        }]
      ],
      data     => Optional[Hash[Pattern[/\A[a-z0-9_][a-zA-Z0-9_]*\Z/], Data]],
      classes  => Array[Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]]
    }]
  ] $rules = {}
) {
....
}

This describes the full valid rule as a Puppet Type. It’s pretty horrible. Worse, I have a number of functions and classes that all receive the full classification or parts of it, and I’d have to duplicate all this all over.

The Solution


So as of yesterday I can now make this a lot better:

class classifier (
  Classifier::Classifications  $rules = {},
) {
....
}

To do this I made a few files in the module:

# classifier/types/matches.pp
type Classifier::Matches = Enum["all", "any"]
# classifier/types/classname.pp
type Classifier::Classname = Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]

and a few more, eventually ending up in:

# classifier/types/classification.pp
type Classifier::Classification = Struct[{
  match    => Classifier::Matches,
  rules    => Array[Classifier::Rule],
  data     => Classifier::Data,
  classes  => Array[Classifier::Classname]
}]

Which you can see solves the problem quite nicely. Now in classes and functions where I need, let’s say, just a Rule, all I do is use Classifier::Rule instead of all the crazy.
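
For example a native Puppet function that accepts only a single rule becomes trivial to declare – the function name and body here are hypothetical:

# classifier/functions/rule_matches.pp
function classifier::rule_matches(Classifier::Rule $rule) {
  # the type alias validates the structure of $rule for us
  $rule["operator"] == "=="
}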

This makes the native Puppet Data Types perfectly usable for me, well worth adopting these.