Category Archives: devops

Puppet Query Language

For a few releases now PuppetDB has had a new query language called the Puppet Query Language, or PQL for short. It's quite interesting, so I thought a quick post might make a few more people aware of it.

Overview


To use it you need a recent PuppetDB, and as this is quite a new feature you really want the latest one. There is nothing to enable when you install it; the feature is already active. It is marked as experimental, though, so some things will change as it moves to production.

PQL Queries look more or less like this:

nodes { certname ~ 'devco' }

This is your basic query; it will return a bunch of nodes, something like:

[
  {
    "deactivated": null,
    "latest_report_hash": null,
    "facts_environment": "production",
    "cached_catalog_status": null,
    "report_environment": null,
    "latest_report_corrective_change": null,
    "catalog_environment": "production",
    "facts_timestamp": "2016-11-01T06:42:15.135Z",
    "latest_report_noop": null,
    "expired": null,
    "latest_report_noop_pending": null,
    "report_timestamp": null,
    "certname": "devco.net",
    "catalog_timestamp": "2016-11-01T06:42:16.971Z",
    "latest_report_status": null
  }
]

There are a bunch of in-built relationships between, say, a node and its facts and inventory, so queries can get quite complex:

inventory[certname] { 
  facts.osfamily = "RedHat" and
  facts.dc = "linodeldn" and
  resources { 
    type = "Package" and
    title = "java" and
    parameters.ensure = "1.7.0" 
  } 
}

This finds all the RedHat machines in a particular DC with Java 1.7.0 on them. Be aware this will also find machines that are deactivated.

I won't go into huge detail about the queries; the docs are pretty good – examples, overview.

So this is quite interesting: it finally gives us a reasonably usable DB for the kind of queries mcollective discovery used to be used for. Of course it's not a live view, nor does it have any clue what the machines are up to, but as a cached data source for discovery it is interesting.

Using


CLI


You can of course query this stuff on the CLI and I suggest you familiarise yourself with JQ.

First you’ll have to set up your account:

{
  "puppetdb": {
    "server_urls": "https://puppet:8081",
    "cacert": "/home/rip/.puppetlabs/etc/puppet/ssl/certs/ca.pem",
    "cert": "/home/rip/.puppetlabs/etc/puppet/ssl/certs/rip.mcollective.pem",
    "key": "/home/rip/.puppetlabs/etc/puppet/ssl/private_keys/rip.mcollective.pem"
  }
}

This is in ~/.puppetlabs/client-tools/puppetdb.conf, which is a bit senseless to me since there clearly is a standard place for config files, but alas.

Once you have this and you installed the puppet-client-tools package you can do queries like:

$ puppet query "nodes { certname ~ 'devco.net' }"
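
Since the output is JSON, jq pairs nicely with it. For example, to reduce the result to a plain list of certnames (given the node listing shown earlier):

$ puppet query "nodes { certname ~ 'devco.net' }" | jq -r '.[].certname'
devco.net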

Puppet Code


Your master will have the puppetdb-termini package on it and this brings with it Puppet functions to query PuppetDB so you do not need to use a 3rd party module anymore:

$nodes = puppetdb_query("nodes { certname ~ 'devco' }")
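
The function returns the same array of hashes shown earlier, so you can process it with the Puppet 4 iteration functions. A hedged sketch that reduces the result to just the certnames:

$names = $nodes.map |$node| { $node["certname"] }
notice("Matched nodes: ${names}")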

Puppet Job


At the recent PuppetConf, Puppet announced that their enterprise tool puppet job supports using this for discovery; if I remember right it's something like:

$ puppet job run -q 'nodes { certname ~ "devco" }'

MCollective


At PuppetConf I integrated this into MCollective and my Choria tool, both these are still due a release (MCO-776, choria #61):

Run Puppet on all the nodes matched by the query:

$ puppet query "nodes { certname ~ 'devco.net' }"|mco rpc puppet runonce

The above is a bit limited in that the apps in question have to specifically support this kind of STDIN discovery – the rpc app does.

I then also added support to the Choria CLI:

$ mco puppet runonce -I "pql:nodes[certname] { certname ~ 'devco.net' }"

These queries are a bit special in that they must return just the certname, as here; I'll document this properly. The reason for this is that they are actually part of a much larger query done in the Choria discovery system (which uses PQL internally and is a good intro on how to query this API from code).

Here's an example of a complex query – as used by Choria internally – that limits nodes to ones in a particular collective, that match the supplied PQL query, and that have mcollective installed and running. You can see you can nest and combine queries into quite complex ones:

nodes[certname, deactivated] { 
  # finds nodes in the chosen inventory via a fact
  (certname in inventory[certname] { 
    facts.mcollective.server.collectives.match("\d+") = "mcollective" 
  }) and 
 
  # does the supplied PQL query
  (certname in nodes[certname] {
    certname ~ 'devco.net'
  }) and
 
  # limited to machines with mcollective installed
  (resources {
    type = "Class" and title = "Mcollective"
  }) and 
 
  # who also have the service started
  (resources {
    type = "Class" and title = "Mcollective::Service"
  }) 
}

Conclusion


This is really handy and I hope more people will become familiar with it. I don't think it quite rolls off the fingers easily – but neither does SQL or any similar system, so that's par for the course. What is amazing is that we are getting nearer to having a common language across CLI, code, web UIs and 3rd-party tools for describing queries of our estate, so this is a major win.

Puppet 4 Sensitive Data Types

You often need to handle sensitive data in manifests when using Puppet: private keys, passwords, etc. There has not been a native way to deal with these, and so a cottage industry of community tools has sprung up.

To deal with data at rest, various Hiera backends like the popular hiera-eyaml exist; to deal with data on nodes, a rather interesting solution called binford2k-node_encrypt exists. There are many more, but less is more: these are good and widely used.

The problem is that data leaks all over the show in Puppet – diffs, logs, reports, catalogs, PuppetDB – it's not uncommon for this trusted data to show up all over the place. Dealing with this problem has a huge scope and will require adjustments to every component – Puppet, Hiera / Lookup, PuppetDB, etc.

But you have to start somewhere, and Puppet is the right place; let's look at the first step.

Sensitive[T]


Puppet 4.6.0 introduced – and 4.6.1 fixed – a new data type that decorates other data, telling the system it's sensitive. This data cannot accidentally be logged or leaked, since the type will only return a string indicating it has been redacted.

It's important to note this is step one of a many-step process towards having a unified, blessed way of dealing with sensitive data everywhere. But let's take a quick look at what we have. The official specification for this feature lives here.

In the most basic case we can see how to make sensitive data and how it looks when logged or leaked by accident:

$secret = Sensitive("My Socrates Note")
notice($secret)

This prints out the following:

Notice: Scope(Class[main]): Sensitive [value redacted]

To unwrap this and gain access to the real original data:

$secret = Sensitive(hiera("secret"))
 
$unwrapped = $secret.unwrap |$sensitive| { $sensitive }
notice("Unwrapped: ${unwrapped}")
 
$secret.unwrap |$sensitive| { notice("Lambda: ${sensitive}") }

Here you can see how to assign it unwrapped to a new variable, or just use it in a block. It is important to note you should never print these values like this; ideally you'd only ever use them inside a lambda if you have to use them in .pp code. Puppet has no concept of private variables, so this $unwrapped variable could be accessed from outside of your classes. A lambda scope is temporary and private.

The output of above is:

Notice: Scope(Class[main]): Unwrapped: Too Many Secrets
Notice: Scope(Class[main]): Lambda: Too Many Secrets

So these are the basic operations, you can now of course pass the data around classes.

class mysql (
  Sensitive[String] $root_pass
) {
  # somehow set the password
}
 
class{"mysql":
  root_pass => Sensitive(hiera("mysql_root"))
}

Note here you can see, via the Sensitive[String] markup, that the class specifically wants a String that is sensitive and not, let's say, a Number. If you attempted to pass Sensitive(1) into it you'd get a type mismatch error.
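
As a hedged sketch of what the "somehow set the password" part might look like: unwrap only inside a lambda so the plain value never sits in a normal variable. Note the file content itself still ends up plain in the catalog, so this only protects the manifest-level handling:

class mysql (
  Sensitive[String] $root_pass
) {
  # the plain value only exists inside this lambda's private scope
  $root_pass.unwrap |$plain| {
    file{"/root/.my.cnf":
      mode    => "0600",
      content => "[client]\npassword=${plain}\n",
    }
  }
}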

Conclusion


So this appears to be quite handy. You can see that down the line lookup() might have an eyaml-like system and emit Sensitive data directly, and perhaps some providers and types will support this. But as I said, it's early days, so I doubt this is actually useful yet.

I mentioned how other systems like PuppetDB also need updates before this is useful, and indeed today PuppetDB is oblivious to these types and stores the real values:

$ puppet query 'resources[parameters] { type = "Class" and title = "Test" }'
...
  {
    "parameters": {
      "string": "My Socrates Note"
    }
  },
...

So this really does not yet serve much purpose, but as step one it's an interesting look at what is to come.

Will containers take over?

And if so, why haven't they done so yet?

Contrary to what many people think, containers are not new; they have been around for more than a decade. They have, however, only recently become popular in a larger part of our ecosystem. Some people think containers will eventually take over.

IMVHO it is all about application workloads. When I wrote about a decade of open source virtualization 8 years ago, we looked at containers as the solution for running a large number of isolated instances of something on a machine. And by large we meant hundreds or more instances of Apache; this was one of the example use cases for an ISP that wanted to give a secure but isolated platform to its users. One container per user.

The majority of enterprise use cases, however, were full VMs. Partly this was because we were still consolidating existing services to VMs and weren't planning on changing the deployment patterns yet. But mainly it was because most organisations didn't have the need to run 100 similar or identical instances of an application or a service; they were going from 4 bare-metal servers to 40-something VMs, but they had not yet reached the point of needing to run hundreds of them. The software architecture had just moved from fat-client applications that talked directly to bloated relational databases containing business logic, to web-enabled multi-tier applications. In those days, when you suggested running 1 Tomcat instance per VM because VMs were cheap and it would make management easier ("Oh oops, I shut down the wrong Tomcat instance"), people gave you very weird looks.

Slowly, software architectures are changing. Today the new breed of application is small, single-function and dedicated, and it interacts frequently with its peers; combined, they provide functionality similar to one big fat application of 10 years ago. But when you look at the market, that new breed is a minority. A modern application might consist of 30-50 really small ones, all with different deployment speeds. And unlike 10 years ago, when we needed to fight hard to be able to build dev, acceptance and production platforms, people now consider that practice normal. So today we do get environments that quickly grow to 100+ instances, but requiring similar CPU power as before, and the use case for containers as we proposed it in the early days is slowly becoming a more common one.

So yes, containers might take over... but before that happens, a lot of software architectures will need to change, a lot of elephants will need to be sliced, and that is usually what blocks cloud, container, agile and devops adoption.

Jenkins DSL and Heisenbugs

I'm working on getting even more moving parts automated; those who use Jenkins frequently probably also have a love-hate relationship with it.

The love comes from the flexibility, stability and power you get from it, the hate from its UI. If you've ever had to create a new Jenkins job or even a pipeline based on one that already existed, you've gone through the horror of click-and-paste errors, and you know where the hate breeds.

We've been trying to automate this with different levels of success: we've puppetized the XML jobs, we've used the Build Flow plugin (reusing the same job for different pipelines is a bad idea..), we've played with JJB, running into issues with some plugins (Promoted Build), and most recently we have put our hope in the Job DSL.

While toying with the DSL I ran into a couple of interesting behaviours. Imagine you have an entry like this, which is supposed to replace ${foldername} with the content of the variable and actually pick the correct upstream:

  cloneWorkspace('${foldername}/dashing-dashboard-test', 'Successful')

You generate the job, look inside the Jenkins UI to verify what the result was, save the job and run it... success.
Then a couple of runs later that same job gives an error: it can't find the upstream job to copy the workspace from. You once again open up the job in the UI, look at it, save it, run it again and then it works... a typical case of a Heisenbug.

When you start looking more closely at the XML of the job, you notice:

  <parentJobName>${foldername}/dashing-dashboard-test</parentJobName>

Obviously wrong: in Groovy, single-quoted strings are not interpolated, so I should have used double quotes.
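
The corrected line, with Groovy double quotes so ${foldername} actually gets interpolated:

  cloneWorkspace("${foldername}/dashing-dashboard-test", 'Successful')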

But why doesn't it look wrong in the UI? That's because the UI auto-selects the first option from its auto-generated pull-down list, which actually contained the right upstream workspace I wanted to trigger (that will teach me to use 00 as a prefix for the folder name in all my tests..).

So when working with the DSL, review the generated XML, not just whether the job works.

A Puppet 4 Hiera Based Classifier

When I first wrote Hiera I included a simple little hack called hiera_include() that would do an Array lookup and include everything it found. I only included it because include() at the time did not take Array arguments. In time this has become quite widely used, and many people do their node classification using just this and the built-in hierarchical nature of Hiera.

I've always wanted to do better though, like maybe write an actual ENC that uses Hiera data keys on the provided certname? It seemed like the only real win would be being able to set the node environment from Hiera; I guess this might be valuable enough on its own.

Anyway, I think the ENC interface is really pretty bad and should be replaced by something better. So I’ve had the idea of a Hiera based classifier in my mind for years.

Some time ago Ben Ford made an interesting little hack project that used a set of rules to classify nodes, and this stuck in my mind as quite an interesting approach. I guess it's a bit like the new PE node classifier.

Anyway, so I took this as a starting point and started working on a Hiera-based classifier for Puppet 4 – and by that I mean the very, very latest Puppet 4. It uses a bunch of the things I blogged about recently, and the end result is that the module is almost entirely built using the native Puppet 4 DSL.

Simple list-of-classes based Classification


So first let's take a look at how this replaces/improves on the old hiera_include().

Not really much to be done I am afraid, it’s an array with some entries in it. It now uses the Knockout Prefix features of Puppet Lookup that I blogged about before to allow you to exclude classes from nodes:

Say we want to include the sysadmins and sensu classes on all nodes; stick this in your common tier:

# common.yaml
classifier::extra_classes:
 - sysadmins
 - sensu

Then you have some nodes that need some more classes:

# clients/acme.yaml
classifier::extra_classes:
 - acme_sysadmins

At this point it's basically same old same old, but let's say we had some node that needed Nagios and not Sensu:

# nodes/example.net.yaml
classifier::extra_classes:
 - --sensu
 - nagios

Here we use the knockout prefix of -- to remove the sensu class and add the nagios one instead. That's already a big win over the old hiera_include(), but to be fair this is just a result of the new Lookup features.

It really gets interesting later when you throw in some rules.

Rule Based Classification


The classifier is built around a set of Classifications, each made up of one or many rules; if the rules match on a host, the classification applies to the node. Classifications can include classes and create data.

Here's a sample where I want to do some extra special handling of RedHat-like machines, but handle VMs differently from physical machines.

# common.yaml
classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
    classes:
      - centos::vm
 
  RedHat:
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
    data:
      redhat_os: true
    classes:
      - centos::common

This shows 2 Classifications, one called "RedHat VMs" and one just "RedHat". You can see the VMs one contains 2 rules, and it sets match: all so they both have to match.

The end result here is that all RedHat machines get centos::common and RedHat VMs also get centos::vm. Additionally 2 pieces of data will be created; a bit redundant in this example, but you get the idea.

Using the Classifier


So using the classifier in the basic sense is just like hiera_include():

node default {
  include classifier
}

This will process all the rules and include the resulting classes. It will also expose a bunch of information via this class; the most interesting is $classifier::data, which is a Hash of all the data that the rules emit. But you can also access the included classes via $classifier::classes, and even the whole post-processed classification structure in $classifier::classification. Some others are mentioned in the README.
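
A small, hedged example of consuming that data elsewhere in your manifests – the redhat_vm key comes from the rule shown earlier:

if $classifier::data["redhat_vm"] {
  notice("This node was classified as a RedHat VM")
}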

You can do very impressive Hiera based overrides, here’s an example of adjusting a rule for a particular node:

# clients/acme.yaml
classifier::rules:
  RedHat VMs:
    classes:
      - some::other
    data:
      extra_data: true

This has the result that for this particular client additional data will be produced and additional classes will be included – but only on their RedHat VMs. You can even use the knockout feature here to really adjust the data and classes.

The classes get included automatically for you and if you set classifier::debug you’ll get a bunch of insight into how classification happens.

Hiera Inception


So at this point things are pretty neat, but I wanted to both see how the new Data Provider API looks and also see if I could expose my classifier back to Hiera.

Imagine I am making all these classifications, but with what I showed above it's quite limited because it's just creating data for the $classifier::data hash. What you really want is to create Hiera data and be able to influence Automatic Parameter Lookup.

So a rule like:

# clients/acme.yaml
classifier::rules:
  RedHat:
    data:
      centos::common::selinux: permissive

Here I am taking the earlier RedHat rule and setting centos::common::selinux: permissive. Now you want this to be data that will be used by the Automatic Parameter Lookup system to set the selinux parameter of the centos::common class.

You can configure your environment with this hiera.yaml:

# environments/production/hiera.yaml
---
version: 4
datadir: "hieradata"
hierarchy:
  - name: "%{trusted.certname}"
    backend: "yaml"
 
  - name: "classification data"
    backend: "classifier"
 
  # ... and the rest

Here I allow node-specific YAML files to override the classifier, and then have a new Data Provider called classifier that exposes the classification back to Hiera. Doing it this way is super important: the priority the classifier has on a site is not a single one-size-fits-all choice, and doing it this way means site admins can decide where in their hierarchy classification sits so it best fits their workflows.

So this is where the inception reference comes in: you extract data from Hiera, process it using the Puppet DSL and expose it back to Hiera. At first thought this is a bit insane, but it works and it's really nice. Basically this lets you completely redesign Hiera, turning something that is hierarchical in nature into a rule-based system – or a hybrid.

And you can even test it from the CLI:

% puppet lookup --compile --explain centos::common::selinux
Merge strategy first
  Data Binding "hiera"
    No such key: "centos::common::selinux"
  Data Provider "Hiera Data Provider, version 4"
    ConfigurationPath "environments/production/hiera.yaml"
    Merge strategy first
      Data Provider "%{trusted.certname}"
        Path "environments/production/hieradata/dev2.devco.net.yaml"
          Original path: "%{trusted.certname}"
          No such key: "centos::common::selinux"
      Data Provider "classification data"
        Found key: "centos::common::selinux" value: "permissive"
      Merged result: "permissive"
  Merged result: "permissive"

I hope to also expose here which rule provided this data, like the other lookup explanations do.

Clearly this feature is a bit crazy, so consider this an exploration of what's possible rather than a strong endorsement of this kind of thing :)

Implementation


Implementing this has been pretty interesting; I got to use a lot of the new Puppet 4 features. Like I mentioned, all the data processing, iteration and deriving of classes and data is done using the native Puppet DSL; take a look at the functions directory for example.

It also makes use of the new Type system and Type Aliases all over the place to create a strong schema for the incoming data that gets validated at all levels of the process. See the types directory.

The new Data in Modules feature is used to set lookup strategies so that there is no manual calling of lookup(); see the module data.

Writing a Data Provider, i.e. a Hiera backend for the new lookup system, is pretty nice. I think the APIs there are still maturing, so this is definitely bleeding-edge stuff. You can see the bindings and data provider in the lib directory.

As such this module only really has a hope of working on Puppet 4.4.0 or newer, and I expect to use new features as they come along.

Conclusion


There’s a bunch more going on, check the module README. It’s been quite interesting to be able to really completely rethink how Hiera data is created and what a modern take on classification can achieve.

With this approach if you’re really not too keen on the hierarchy you can totally just use this as a rules based Hiera instead, that’s pretty interesting! I wonder what other strategies for creating data could be prototyped like this?

I realise this is very similar to the PE node classifier, but with some additional benefits: being exposed to Hiera via the Data Provider, being something you can commit to git, and being adjustable and overridable using the new Hiera features. I think it will appeal to a different kind of user. But yeah, it's quite similar. Credit to Ben Ford for his original Ruby-based implementation of this idea, which I took and iterated on. Regardless, the 'like an iTunes smart list' node classifier isn't exactly a new idea and has been discussed for literally years :)

You can get the module on the forge as ripienaar/classifier and I’d greatly welcome feedback and ideas.

Puppet 4 Type Aliases

Back when I first took a look at Puppet 4 features I explored the new Data Types and said:

Additionally I cannot see myself using a Struct like above in the argument list – to which Henrik says they are looking to add a typedef thing to the language so you can give complex Structs a more convenient name and use that. This will help a lot.

And since Puppet 4.4.0 this has now become a reality. So a quick post to look at that.

The Problem


I've been writing a Hiera-based node classifier, both to scratch an itch and to have something fairly complex with which to explore the new features in Puppet 4.

The classifier takes a set of classification rules and produces classifications – classes to include and parameters – from there. Here's a sample classification:

classifier::rules:
  RedHat VMs:
    match: all
    rules:
      - fact: "%{facts.os.family}"
        operator: ==
        value: RedHat
      - fact: "%{facts.is_virtual}"
        operator: ==
        value: "true"
    data:
      redhat_vm: true
      centos::vm::someprop: someval
    classes:
      - centos::vm

This is a classification that has 2 rules, matching machines that run RedHat-like operating systems and that are virtual. If both of these are true it will:

  • Include the class centos::vm
  • Create some data redhat_vm => true and centos::vm::someprop => someval

You can have an arbitrary number of classifications made up of an arbitrary number of rules. This data lives in Hiera, so you can have all sorts of merging, overriding and knockout fun with it.

The amazing thing is that since Puppet 4.4.0 there is now no Ruby code involved in doing what I said above; all the parsing, looping, evaluating of rules and building of data structures is done using functions written in the pure Puppet DSL.

There’s some Ruby there in the form of a custom backend for the new lookup based hiera system – but this is experimental, optional and a bit crazy.

Anyway, here's the problem: before Puppet 4.4.0 my main class had this in it:

class classifier (
  Hash[String,
    Struct[{
      match    => Enum["all", "any"],
      rules    => Array[
        Struct[{
          fact     => String,
          operator => Enum["==", "=~", ">", ">=", "<", "<="],
          value    => Data,
          invert   => Optional[Boolean]
        }]
      ],
      data     => Optional[Hash[Pattern[/\A[a-z0-9_][a-zA-Z0-9_]*\Z/], Data]],
      classes  => Array[Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]]
    }]
  ] $rules = {}
) {
....
}

This describes the full valid rule set as a Puppet type. It's pretty horrible. Worse, I have a number of functions and classes that all receive the full classification or parts of it, and I'd have to duplicate all of this all over.

The Solution


So as of yesterday I can now make this a lot better:

class classifier (
  Classifier::Classifications  $rules = {},
) {
....
}

To do this I made a few files in the module:

# classifier/types/matches.pp
type Classifier::Matches = Enum["all", "any"]
# classifier/types/classname.pp
type Classifier::Classname = Pattern[/\A([a-z][a-z0-9_]*)?(::[a-z][a-z0-9_]*)*\Z/]
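
Among the "few more" would presumably be a Rule type; reconstructed from the Struct in the original class above, it might look like:

# classifier/types/rule.pp
type Classifier::Rule = Struct[{
  fact     => String,
  operator => Enum["==", "=~", ">", ">=", "<", "<="],
  value    => Data,
  invert   => Optional[Boolean]
}]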

and a few more, eventually ending up in:

# classifier/types/classification.pp
type Classifier::Classification = Struct[{
  match    => Classifier::Matches,
  rules    => Array[Classifier::Rule],
  data     => Classifier::Data,
  classes  => Array[Classifier::Classname]
}]

Which you can see solves the problem quite nicely. Now, in classes and functions where I need, let's say, just a Rule, all I do is use Classifier::Rule instead of all the crazy.

This makes the native Puppet data types perfectly usable for me; these are well worth adopting.

The Puppet 4 Lookup Function

Puppet 4 has a new lookup subsystem exposed to the user in a few places:

  • The lookup() function
  • Automatic parameter lookups
  • Configuring the automatic parameter lookups via Data in Modules

I've not been able to figure out everything the docs have been trying to say about this function, but it turns out parts were copied from the deep_merge gem, which actually has better examples in some cases. So I thought a post exploring it and its various forms was in order.

It's pivotal to the use of data in Puppet, so while you probably don't need to fully grasp all of its intricacies as covered in this post, a passing knowledge is valuable, as is knowing how to find good help for it. I do think there's some opportunity for improving the UX of this function though.

As usual the challenge when faced with all these options isn't in how to use them all, but in knowing which options to use when, so that you don't end up with a giant unmaintainable mess down the line. I think this function is definitely on the wrong side of the line in this regard: it's massive and unwieldy in that it exposes internals of Puppet in a 1:1 manner to the user.

So I would not recommend writing code that calls this function directly unless in extraordinary circumstances. With the Data in Modules and Automatic Parameter Lookup features you can achieve this, see the last section of the post for that.

First though you need to know the behaviours and terminology of the lookup() function in order to get to a point where you can use the other methods, so lets dive in.

Lookup Patterns

Basic usage


The function comes in a few forms past the most obvious lookup(“thing”):

lookup("some::thing", String, "first", "default value")

Here we're looking up the key some::thing and it has to be a String from the data store. It will do a first-style lookup, which is your basic traditional Hiera first-match-wins, and there's a default. Apparently there is no simple case lookup("some::thing", "default"), which seems like it would be the most common use. You can come kind of close though (more on this below):

lookup({"name" => "some::thing", "default_value" => "default"})

Anyway, you're not really going to be using the lookup function directly much, so this is probably fine.

The things to note here are the lookup strategies; there are a few and you will always have to know them:

  • first – the first match found is returned, just like the traditional hiera() default behaviour
  • unique – an array merge, like the old hiera_array()
  • hash – like hiera_hash() without deep merging enabled
  • deep – like hiera_hash() with deep merging enabled; you would not guess this from the description in the docs

So these are your basic replacements for the old hiera(), hiera_hash() and hiera_array(), and as you can see from the last 2, the merge strategy isn't set globally like in old Hiera; this is a big improvement.

I will not go into a full exploration of what tiers mean; the old Hiera docs are pretty good for that. Effectively a merge strategy describes what Hiera does when it finds interesting data at many different levels or in different data sources.
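
To make the mapping to the old functions concrete, here are hedged one-liners using the positional form shown earlier (the keys and defaults are made up):

$addr    = lookup("ntp::server", String, "first", "pool.ntp.org")  # like hiera()
$classes = lookup("classes", Array[String], "unique", [])          # like hiera_array()
$config  = lookup("config", Hash, "hash", {})                      # like hiera_hash()
$merged  = lookup("config", Hash, "deep", {})                      # deep-merging hiera_hash()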

Complex Strategies for Setting Defaults


From here it gets a bit crazy, but there are some really great things you can do with some of these so lets look at them.

First I'll look at the task of setting defaults. Hiera had quite basic features in this space which were enough to get going, but lookup has some nice additions.

First the above lookup can also be written like this:

lookup({"name" => "some::thing", "value_type" => String, "default_value" => "default", "merge" => "first"})
lookup({"name" => "some::thing", "default_value" => "default"}) # though accepts any data type

So this is quite nice because now you can decide the order of arguments and which to include.

There’s a more powerful way to set defaults though:

function some_module::params() {
  {
    "some_module::thing" => "default",
    "some_module::other_thing" => false
  }
}
 
lookup({"name" => "some_module::thing", "default_values_hash" => some_module::params()})

Which at first does not seem like a huge improvement, but if you're thinking about strategies to replace something like params.pp you could come up with some interesting patterns using this method. For example, you can have a module function like here and an environment-level one (environment-level native functions are supported) and combine them like environment_params() + some_module::params() to come up with layered sets of defaults, as sketched below. In effect this would be a micro Hiera of its own, programmed in pure Puppet DSL.
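
A hedged sketch of that layering – environment_params() is a hypothetical environment-level function here, and with the Puppet + operator the right-hand hash wins on duplicate keys:

lookup({
  "name"                => "some_module::thing",
  "default_values_hash" => environment_params() + some_module::params()
})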

And finally you can use a lambda to set the default:

lookup("some::thing") |$key| { "Could not find a value for key '${key}', please configure it in your hiera data" }

Here we return a custom string instead that tells the user what is going on rather than blowing up badly, and we can of course include any helpful information like fact values to help them find the right place in a possibly complex data store.

Sticking with the lambda, I saw Henrik mention this on IRC yesterday:

$result = with(lookup("some::thing")) |$value| { if $value =~ Array { $value } else { [$value] } }

This does a lookup and ensures that the result is always an array, like the Ruby code Array(thing). These 2 Lambda approaches can’t really be done without calling lookup() specifically, so probably a bit niche.

I won't go into all the details just now about Data in Modules and merge strategies, but to see how these things tie together you should know you can set these option hashes via your data layer; see the linked blog post for some details. The last section of this post shows an end-to-end working setup with Data in Modules and merge strategies in data.

Merge Strategies


The merge strategies are where things really get interesting, and this function has even more of them than before – some that I honestly can't imagine any use for, but I tend to lean to the less-is-more side of things wrt Puppet code.

We’ve seen the basic merge strategies above:

lookup("some::thing", String, "first", "default value")
lookup({"name" => "some::thing", "value_type" => String, "default_value" => "default", "merge" => "first"})

Here the strategy is first. But when the strategy is deep this can also be a hash with more merging options.

The most interesting for me is the knockout_prefix one. A common question when using Hiera for node classification is how to exclude a class from a certain node. This was kind of doable at least in Puppet 4 by using Arrays like:

include(hiera_array("classes", []) - hiera_array("exclude_classes", []))

Which will look up classes and exclude_classes and subtract the one from the other. This is a hack; let's look at a better option:

Given data like this:

# common.yaml
classification:
  classes:
    - sensu
    - sysadmin
# node1.example.net.yaml
classification:
  classes:
    - --sensu
    - nagios
    - webserver

What we're trying to say is that node1.example.net is not monitored by Sensu but by Nagios instead; the following lookup achieves this and includes the resulting classes:

$classification = lookup({"name" => "classification", 
        "merge" => {
          "strategy" => "deep", 
          "knockout_prefix" => "--",
          "sort_merge_arrays" => true
        }
})
 
$classification["classes"].include

Additionally I sorted the merged arrays. The knockout_prefix tells it to remove data that matches the prefix; you can remove just some array member, like here, or entire keys from a resulting hash.

There's another option for when some array member is a hash and you want those hashes merged in the result set: merge_hash_arrays. At that point you should probably rather rethink your data though, tbh.

And the last one, which I cannot figure out any use for and was quite baffled by, is about turning Strings into Arrays. Henrik says they did not add this one for any reason other than that it's available in the deep_merge gem.

Let's change the data for our node to look like this:

# node1.example.net.yaml
classification:
  classes:
    - --sensu,nagios
    - webserver

While leaving the common data as-is. If you set "unpack_arrays" => "," in the merge options it will take every string found and split it by ",", which turns this into an array of ["--sensu", "nagios"]; it then merges everything up and performs any knockouts, so you get the same outcome, i.e. ["nagios", "sysadmin", "webserver"].
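
For completeness, the earlier lookup with this option added would look like:

$classification = lookup({"name" => "classification", 
        "merge" => {
          "strategy"          => "deep", 
          "knockout_prefix"   => "--",
          "sort_merge_arrays" => true,
          "unpack_arrays"     => ","
        }
})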

You should probably rethink your data instead if you find this useful :) That said, this --sensu,nagios does look like a search and replace, so perhaps in the context of a classifier utility it's not all bad.

CLI tool


Like old Hiera, there's a CLI tool for this function; unlike the old hiera one, it does not suck.

To recreate the above lookup on the CLI you'd do (though only once PUP-6050 is fixed):

% puppet lookup --hiera_config hiera.yaml --merge deep --knock-out-prefix "--" --unpack-arrays "," --sort-merge-arrays classification
---
classes:
- sysadmin
- nagios
- webserver

This is fine, but it gets a lot nicer than that. If you add the option --explain you get this:

Merge strategy deep
  Options: {
    "sort_merge_arrays" => true,
    "merge_hash_arrays" => false,
    "knockout_prefix" => "--",
    "unpack_arrays" => ","
  }
  Data Binding "hiera"
    Found key: "classification" value: {
      "classes" => [
        "sysadmin",
        "nagios",
        "webserver"
      ]
    }
  Data Provider "EnvironmentDataProvider"
    No such key: "classification"
  Merged result: {
    "classes" => [
      "sysadmin",
      "nagios",
      "webserver"
    ]
  }

A bit lacking in the case of old-school Hiera data, since old Hiera does not emit the right kind of detail for it to show where it gets your data from. It's handy though, since you can see the merge options hash and which data providers are queried. See below for the full potential.

Bringing it all together

When I started this fairly epic post I said I do not recommend people use lookup() directly, so let's take a look at pulling this all together.

I'll make a simple classifier class like the above in a module. Note that the classes parameter would previously have needed the huge lookup() call, but not here: we do not want to use the lookup() function, we use Automatic Parameter Lookup instead:

class classifier($classes) {
  $classes.include
}

I’ll set it up for data in modules and add to it the lookup options:

# production/modules/classifier/data/common.yaml
lookup_options:
  classifier::classes:
    merge:
      strategy: deep
      knockout_prefix: "--"
      unpack_arrays: ","
      sort_merge_arrays: true

Note this is basically a lookup() call, but attached to a specific key – classifier::classes. This way, as we add more classification data we can have different strategies and such; doing it here means it works across all types of Hiera data, old and new.

Now the data, I am using the environment data provider here – so no classic hiera at all:

First we configure our production environment to have its own instance of Hiera and its own hiera.yaml – take note, this is huge. Per-environment Hiera and hierarchies now work!

# production/environment.conf
environment_data_provider = hiera
# production/hiera.yaml
---
version: 4
datadir: "hieradata"
hierarchy:
  - name: "%{trusted.certname}"
    backend: "yaml"
  - name: "common"
    backend: "yaml"

Here’s our production environment data:

# production/hieradata/common.yaml
classifier::classes:
  - sensu
  - sysadmins
# production/hieradata/dev1.devco.net.yaml
classifier::classes:
  - nagios
  - --sensu
  - webserver

At this point it all works like a charm: our node knocks out Sensu and brings in Nagios. This is a major wishlist item that the old hiera_include() did not have!

Note this is just Array data being knocked out and not Hash data, while the deep strategy is supposed to work with Hashes only; I am a bit surprised it works, but I'll take it, as it makes this classifier better.

% puppet lookup --environmentpath environments classifier::classes
---
- sysadmins
- nagios
- webserver

And if we add --explain we finally get the massive benefit of learning how Hiera finds your data:

% puppet lookup --environmentpath environments --explain classifier::classes
Merge strategy deep
  Options: {
    "knockout_prefix" => "--",
    "sort_merge_arrays" => true,
    "unpack_arrays" => ","
  }
  Data Binding "hiera"
    No such key: "classifier::classes"
  Data Provider "Hiera Data Provider, version 4"
    ConfigurationPath "/home/rip/temp/lookup/environments/production/hiera.yaml"
    Merge strategy deep
      Options: {
        "knockout_prefix" => "--",
        "sort_merge_arrays" => true,
        "unpack_arrays" => ","
      }
      Data Provider "%{trusted.certname}"
        Path "/home/rip/temp/lookup/environments/production/hieradata/dev1.devco.net.yaml"
          Original path: "%{trusted.certname}"
          Found key: "classifier::classes" value: [
            "nagios",
            "--sensu",
            "webserver"
          ]
      Data Provider "common"
        Path "/home/rip/temp/lookup/environments/production/hieradata/common.yaml"
          Original path: "common"
          Found key: "classifier::classes" value: [
            "sensu",
            "sysadmins"
          ]
      Merged result: [
        "sysadmins",
        "nagios",
        "webserver"
      ]
  Module "classifier" using Data Provider "Hiera Data Provider, version 4"
    ConfigurationPath "/home/rip/temp/lookup/environments/production/modules/classifier/hiera.yaml"
    Merge strategy deep
      Options: {
        "knockout_prefix" => "--",
        "sort_merge_arrays" => true,
        "unpack_arrays" => ","
      }
      Data Provider "%{trusted.certname}"
        Path "/home/rip/temp/lookup/environments/production/modules/classifier/data/dev1.devco.net.yaml"
          Original path: "%{trusted.certname}"
          Path not found
      Data Provider "common"
        Path "/home/rip/temp/lookup/environments/production/modules/classifier/data/common.yaml"
          Original path: "common"
          No such key: "classifier::classes"
  Merged result: [
    "sysadmins",
    "nagios",
    "webserver"
  ]

Every data file and every config file is shown, and the full merge logic in all its glory is included. A huge win over previous Hiera.

The result is a bit dense, but if you follow along you can see it all works quite nicely, and it's super helpful for debugging cases where Hiera just doesn't do what you expect.

It’s a bit awkward – here I am doing it on the node the data is for, but for other nodes you would need their facts. As I understand it, it basically compiles the catalog and profiles the lookups during that process, so it needs facts as usual.
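
For another node you can feed the facts in yourself. A hedged example using the --node and --facts options of puppet lookup, where the YAML file is a fact dump for that node:

% puppet lookup --node dev2.devco.net --facts dev2_facts.yaml classifier::classes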

Conclusion

So that’s a rather epic exploration of the lookup() function which eventually ended us up with – do not use the lookup() function :)

You can see how this is a big step forward and in the end by using environment and module data – and no site data – I am not using old Hiera at all anymore as far as I know. This is purely the new lookup subsystem and it’s really powerful.

  • Environments and Modules can have data and independent hierarchies
  • The lookup subsystem is fully exposed in lookup(), but the bulk of the features are accessible via lookup_options and thus the Automatic Parameter Lookup
  • It has a really good CLI command which, once a few bugs are sorted, can bring amazing visibility to where your data comes from and what data is assigned to a node. Even without those bugs fixed, if you use lookup_options as in the last example it's totally usable today

params.pp in Puppet 4

I do not like the params.pp pattern. Puppet 4 has brought native Data in Modules, which is pretty awesome, and to a large extent it removes the traditional need for params.pp.

Thing is, we kind of do still need some parts of params.pp. To understand this we have to consider what the areas of concern params.pp has in Puppet world:

  • Holds data, often in large if or case statements that ultimately resemble hiera data
  • Derives new data using logic based on situation specific data like facts
  • Validates data. This was kind of all over the place and not actually in params.pp, since params.pp is not generally parameterised, but it's closely related.

Points 1 and 3 are roughly sorted out by Puppet 4 types and data in modules, but what about the 2nd point, and to some extent more complex data validation that falls outside of the type system?

Before I start looking at how to derive data though I’ll take a look at the new function API in Puppet 4.

Native Functions


Puppet has always allowed us to write functions but they needed to be in Ruby and nothing else. This isn’t really great. The message is kind of:

Puppet has a DSL for managing systems, we think it’s awesome and can do everything you need. But in order to use it you have to learn 2 programming languages with different models.

And I always felt the same about the general suggestion to write ENCs etc, luckily not something we hear much these days.

And they had a few major issues:

  • They do not work right in environments, just like custom providers and types do not. This is a showstopper bug as environments have become indispensable in modern Puppet use.
  • They are not namespaced. This is a showstopper for putting them on the forge.

The Puppet 4 functions API fixes this: you can write functions in the native DSL and they work fine in environments. The Puppet 4 DSL, with its loops and blocks and so forth, has matured enough that it can do a lot of the things I'd need for deriving data from other data.

They live in your module in the functions directory, they’re namespaced and environment safe:

function mymod::myfunc(Integer $input) {
  $input * 2
}

And you'd use this like any other function: $x = mymod::myfunc(10). Simple stuff: whatever the last value is gets returned, like in Ruby.

Derived Data in Puppet 4


So we’re finally where I can show my preferred method for deriving data in Puppet 4, and that’s to use a native function.

As an example we’ll stick with Apache but this time a wrapper for the main class. From the previous blog post you’ll remember (or if not please read that post) that we wrapped the puppetlabs-apache module to create our own vhost define. Here I’ll show a wrapper for the main apache class.

class site::apache(
  Boolean $passenger = false,
  Hash $apache = {},
  Hash $module_options = {}
) {
  if $passenger {
    $_passenger_defaults = {
      "passenger_max_pool_size" => site::apache::passenger_pool_size(),
      # ...
    }
 
    class{"apache::mod::passenger":
      * => $_passenger_defaults + site::fetch($module_options, "passenger", {})
    }
  }
 
  $defaults = {
    "default_vhost" => false,
    # ....
  }
 
  class{"apache":
    * => $defaults + $apache
  }
}

Here I have a wrapper that does the basic Apache configuration with some overridable defaults via $apache and I have a way to configure Passenger again with overridable defaults via $module_options[“passenger”].

The Passenger part uses 2 functions: site::apache::passenger_poolsize and site::fetch. These are namespaced to the site module; you can see them below.

First, site::apache::passenger_poolsize follows typical community guidelines for the pool size based on core count; it's also aware of whether the machine is virtual or physical. This is a good example of derived data that would be impossible to do using just Hiera – and so simply does not have a place there.

function site::apache::passenger_poolsize {
  if $facts["is_virtual"] {
    $multiplier = 1.5
  } else {
    $multiplier = 2
  }
 
  floor($facts["processors"]["count"] * $multiplier)
}

And this is site::fetch that’s like Ruby’s Hash#fetch. stdlib will soon have dig() that does something similar.

function site::fetch(
  Hash $data,
  String $key,
  $default
) {
  if $data[$key] {
    $data[$key]
  } else {
    $default
  }
}

Why functions and not inlining the logic?

This seems like a bit more work than just sticking the site::apache::passenger_poolsize logic into the class that's calling it, so why bother? The first reason is obviously that it's reusable: anywhere else you might need this logic, you can use it. The second is about isolation.

I am not a big fan of writing Puppet rspec tests, since I tend to shy away from Puppet logic in modules. But if I have to put logic in modules, I'd like to isolate it so I can easily test it in isolation. I have no idea if rspec-puppet supports these functions yet, but if it does, having this logic in as small a package as possible for testing is absolutely the right thing to do.

Further, today the function is quite limited, but I can see I might want to expand it later to consider total memory as well as core count. When that day comes, I only have to edit this function and nothing else. The potential fallout from logic errors and so forth is neatly contained, and importantly I can be fairly sure this function is used for 1 thing only, so changing its internals is something I can safely do – the things calling it really should not care about its internals.

Early on I touched on complex validation of data as a possible thing these functions could solve. The example here does not really do this, but imagine that for my site I never want to set the passenger_poolsize above some threshold that might relate to the memory on the machine. Given that this pool size is user-overridable, I'd write a function like site::apache::validate_poolsize that takes care of this and fails when needed.

These validations could become very complex and situation-specific (i.e. based on facts), so this is more than we can expect from a type system. Writing validations as native functions is easy and fits in neatly with the DSL.
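
A hedged sketch of such a validator – the half-a-GB-per-process ceiling and the structured memory fact path are illustrative assumptions, not a recommendation:

function site::apache::validate_poolsize(Integer $size) {
  # assume roughly 512MB per Passenger process as a sane ceiling
  $max = floor($facts["memory"]["system"]["total_bytes"] / (512 * 1024 * 1024))
 
  if $size > $max {
    fail("passenger_poolsize ${size} is above the sane maximum of ${max} for this machine")
  }
}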

Conclusion

These functions are great; to me they are everything defined types should have been and more. I think they move Puppet as a whole a huge leap forward in that you can achieve more complex things using just the Puppet DSL, and they combine very nicely with the recent EPP native Puppet-based templates.

They fix the massive showstopper bug of environment compatibility and make sharing modules like this on the forge a lot safer.

Using them in this manner here Puppet 4 can close the loop on all the functionality that params.pp had:

  • Pure data that is hierarchical in nature can live in modules.
  • Input validation can be done using the data type system.
  • Derived data can be done in isolation and in a reusable manner using native functions

When combined in this manner, params.pp can be removed completely without any loss of functionality. Every one of the above points improves significantly on the old pattern.

I could not find docs for the new functions on the Puppet Labs site, hopefully we’ll see some soon.

I have a short wishlist for these functions:

  • I want to be able to specify the return type of functions; I think this is critical.
  • I want a return() function like in other languages. I know you can generally do without but sometimes that can lead to some pretty awkward code.
  • More docs

Bonus: The end of defined types?

These functions can create resources just like any other manifest can. This is a big difference from old Ruby functions, which had to do all kinds of nasty things, possibly via create_resources(). But since they can create resources, they might be a viable replacement for defined types.

There are a few issues with this idea. The immediate missing part is that you cannot export a function. Additionally, as they are outside of the resource system, you can't do overrides or set up any relationships on them; you can't, say, install a package before a vhost made by a function.

The first I don't personally care about, since I do not and will never use exported resources. The 2nd is perhaps a more important issue: from an ordering perspective the MOAR ordering in Puppet 4 helps, but for doing notifies and such it might not be that hot.

It's an interesting thought experiment though. I think with a bit of work defined types could be deprecated; people want to think of defined types as functions, but they aren't, and this is a hurdle for newcomers learning Puppet. With some work I think functions can eventually replace defined types. That's a good goal to work toward.

Managing AWS CloudFront Security Group with AWS Lambda

One of our security groups on Amazon Web Services (AWS) allows access to an Elastic Load Balancer (ELB) from one of our Amazon CloudFront distributions. Traffic from CloudFront can originate from a number of different source IP addresses that Amazon publishes. However, there is no pre-built security group to allow inbound traffic from CloudFront.

I constructed an AWS Lambda function to periodically update our security group so that we can ensure all CloudFront IP addresses are permitted to access our ELB.

AWS Lambda

AWS Lambda allows you to execute functions in a few different languages (Python, Java, and Node.js) in response to events. One of these events can be the triggering of a regular schedule. In this case, I created a scheduled event with an Amazon CloudWatch rule to execute a lambda function on an hourly basis.

CloudWatch Schedule to Lambda Function
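
If you prefer to script that wiring, here is a hedged sketch using the AWS CLI – the rule name, function name, region, and account ID are all placeholders:

$ aws events put-rule --name update-cloudfront-sg-hourly \
    --schedule-expression "rate(1 hour)"
$ aws lambda add-permission --function-name update-cloudfront-sg \
    --statement-id cloudwatch-hourly --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn "arn:aws:events:us-west-2:123456789012:rule/update-cloudfront-sg-hourly"
$ aws events put-targets --rule update-cloudfront-sg-hourly \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-west-2:123456789012:function:update-cloudfront-sg"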

The Idea

The core of my code involves calls to authorize_ingress and revoke_ingress using the boto3 library for AWS. AWS Lambda makes the boto3 library available for Python functions.


print("the following new ip addresses will be added:")
print(authorize_dict['ipranges'])
print("the following new ip addresses will be removed:")
print(revoke_dict['ipranges'])
security_group.authorize_ingress(ippermissions=[authorize_dict])
security_group.revoke_ingress(ippermissions=[revoke_dict])

Amazon publishes the IP address ranges of its various services online.


response = urllib2.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json')
json_data = json.loads(response.read())
new_ip_ranges = [ x['ip_prefix'] for x in json_data['prefixes'] if x['service'] == 'CLOUDFRONT' ]
print(new_ip_ranges)

I can easily compare the allowed ingress address ranges in an existing security group with those retrieved from the published ranges. The authorize_ingress and revoke_ingress functions then allow me to make modifications to the security group to keep it up-to-date, and permit traffic from CloudFront to access my ELB.


for ip in new_ip_ranges:
    if ip not in current_ip_ranges:
        authorize_dict['IpRanges'].append({u'CidrIp': ip})
for ip in current_ip_ranges:
    if ip not in new_ip_ranges:
        revoke_dict['IpRanges'].append({u'CidrIp': ip})

The AWS Lambda Function

The full lambda function is written as a standard lambda_handler for AWS. In this case, the event and context are ignored, and the code is just executed on a regular schedule.

Lambda Function

Notice that the existing security group is directly referenced as sg-3xxexx5x.


from __future__ import print_function
import copy, json, urllib2, boto3
def lambda_handler(event, context):
    # fetch the published AWS IP ranges and keep only the CloudFront prefixes
    response = urllib2.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json')
    json_data = json.loads(response.read())
    new_ip_ranges = [ x['ip_prefix'] for x in json_data['prefixes'] if x['service'] == 'CLOUDFRONT' ]
    print(new_ip_ranges)
    ec2 = boto3.resource('ec2')
    security_group = ec2.SecurityGroup('sg-3xxexx5x')
    current_ip_ranges = [ x['CidrIp'] for x in security_group.ip_permissions[0]['IpRanges'] ]
    print(current_ip_ranges)
    params_dict = {
        u'PrefixListIds': [],
        u'FromPort': 0,
        u'IpRanges': [],
        u'ToPort': 65535,
        u'IpProtocol': 'tcp',
        u'UserIdGroupPairs': []
    }
    # deep copies so the two permission dicts do not share the same IpRanges list
    authorize_dict = copy.deepcopy(params_dict)
    for ip in new_ip_ranges:
        if ip not in current_ip_ranges:
            authorize_dict['IpRanges'].append({u'CidrIp': ip})
    revoke_dict = copy.deepcopy(params_dict)
    for ip in current_ip_ranges:
        if ip not in new_ip_ranges:
            revoke_dict['IpRanges'].append({u'CidrIp': ip})
    print("the following new ip addresses will be added:")
    print(authorize_dict['IpRanges'])
    print("the following new ip addresses will be removed:")
    print(revoke_dict['IpRanges'])
    security_group.authorize_ingress(IpPermissions=[authorize_dict])
    security_group.revoke_ingress(IpPermissions=[revoke_dict])
    return {'authorized': authorize_dict, 'revoked': revoke_dict}

The Security Policy

The above lambda function presumes permissions to edit the referenced security group. These permissions can be configured with an AWS Identity and Access Management (IAM) policy, applied to the role which the lambda function executes as.

Lambda function role

Notice that the security group resource, sg-3xxexx5x, is specifically scoped to the us-west-2 AWS region.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeNetworkAcls"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "arn:aws:ec2:us-west-2:*:security-group/sg-3xxexx5x"
        }
    ]
}

Making It All Work

In order to get everything hooked up correctly, an appropriate security group needs to exist. The identifier for the group needs to be referenced in both the Lambda script and the policy used by the role that the lambda script executes as. The IAM policy uses the Amazon Resource Name (ARN) instead of the security group identifier. The AWS Lambda function presumes that Amazon will publish changes to the CloudFront IP address range in a timely manner, and that running the function once per hour will be sufficient to grant ingress permissions on the security group. If the CloudFront ranges change frequently, or traffic is particularly crucial, the frequency of the lambda function runs should be increased.
