Viewing AlertManager Email Alerts via MailHog

After adding AlertManager to my Prometheus test stack in a previous post I spent some time triggering different failure cases and generating test messages. While it’s slightly satisfying seeing rows change from green to red, I soon wanted to actually send real alerts, with all their values, somewhere I could easily view them. My criteria were:

  • must be easy to integrate with AlertManager
  • must not require external network access
  • must be easy to use from docker-compose
  • should have as few moving parts as possible

A few short web searches later I stumbled back onto a small server I’ve used for this in the past - MailHog. MailHog is an awesome little server that listens for SMTP traffic and then displays it using an internal HTTP server. It has sensible defaults so no configuration was required, comes as a single binary and even has a working dockerhub image. My solution was found!
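
If you want to try it yourself, a minimal docker-compose service for MailHog looks something like the sketch below. The service name, network and published ports are my choices rather than anything lifted from the repo, though 1025 (SMTP) and 8025 (HTTP UI) are MailHog’s defaults:

services:
  mailhog:
    image: mailhog/mailhog
    ports:
      - 1025:1025   # SMTP, where AlertManager will deliver mail
      - 8025:8025   # HTTP UI for browsing the captured emails
    networks:
      - public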

The amount of work to include it was even less than I’d hoped: a new docker-compose.yaml file for mailhog itself, a very basic AlertManager configuration file, and a few lines of docker config to put the right configs in each of the containers, and we have a working email alert view:

MailHog screen shot of Alertmanager emails
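
The AlertManager side is just an email receiver pointed at MailHog’s SMTP port. A sketch of the kind of configuration involved, assuming the MailHog container is reachable as mailhog; the receiver name and addresses are placeholders rather than the exact file from the repo:

route:
  receiver: mailhog-email

receivers:
  - name: mailhog-email
    email_configs:
      - to: 'alerts@example.com'
        from: 'alertmanager@example.com'
        smarthost: 'mailhog:1025'
        require_tls: false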

Adding AlertManager to docker-compose Prometheus

What’s the use of monitoring if you can’t raise alerts? It’s half a solution at best. Now that I have basic monitoring working, as discussed in Prometheus experiments with docker-compose, it felt like it was time to add AlertManager, Prometheus’ often used partner in crime, so I can investigate raising, handling and resolving alerts. Unfortunately this turned out to be a lot harder than ‘just’ adding a basic exporter.

Before we delve into the issues and how I worked around them in my implementation let’s see the result of all the work, adding a redis alert and forcing it to trigger. Ignoring all the implementation details for now we need to do four things to add AlertManager to our experiments:

  • add the AlertManager container (sketched just after this list)
  • tell Prometheus how to contact AlertManager
  • tell Prometheus where the alert rules files are located
  • add an alerting rule to confirm everything is connected
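
The container itself needs very little. A sketch of the kind of service definition involved, using the official prom/alertmanager image; the version pinning and the wiring of the AlertManager config file in the actual branch may differ:

services:
  alert-manager:
    image: prom/alertmanager
    ports:
      - 9093:9093
    networks:
      - public
    # the real compose file also mounts the AlertManager configuration file here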

Assuming we’re in the root of docker-compose-prometheus we’ll run our docker-compose command to create all the instances we need for testing:

docker-compose \
  -f prometheus-server/docker-compose.yaml \
  -f alertmanager-server/docker-compose.yaml \
  -f redis-server/docker-compose.yaml \
  up -d

You can confirm all the containers are available by running:

docker-compose \
  -f prometheus-server/docker-compose.yaml \
  -f alertmanager-server/docker-compose.yaml \
  -f redis-server/docker-compose.yaml \
  ps

Screen shot of Prometheus alerting rule

In this screenshot you can see the Prometheus alerting page, with our RedisDown alert against a green background as everything is working correctly. We also show the RedisDown alerting rule configuration. This rule checks the redis_up value returned by the redis exporter. If redis is down it will be 0, and if it doesn’t recover in the next minute it will trigger an alert. It’s worth noting here that you can confirm your rules files are valid using this, less scary than it looks, promtool command:

# the left hand argument to `-v` is the local file from this repo.
docker run \
  -v `pwd`/redis-server/redis.rules:/fileof.rules \
  -it --entrypoint=promtool prom/prometheus:v2.1.0 check rules /fileof.rules

Checking /fileof.rules
  SUCCESS: 1 rules found
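
For reference, a rule like RedisDown boils down to something along these lines in the rules file. This is a sketch based on the behaviour described above, so the exact labels and annotations in the repo may differ:

groups:
  - name: redis
    rules:
      - alert: RedisDown
        expr: redis_up == 0
        for: 1m
        annotations:
          summary: Redis Availability alert.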

Everything seems to be configured correctly, so let’s break it and confirm alerting is working. First we will kill the redis container. This will cause the exporter to change the value of redis_up.

# kill the container
docker kill prometheusserver_redis-server_1

# check it has exited
docker ps -a | grep prometheusserver_redis-server_1

# simplified output
library/redis:4.0.8    Exited (137) 2 minutes ago    prometheusserver_redis-server_1

The alert will then change to “State PENDING” on the prometheus alerts page. Once the minute is up it will change to “State FIRING” and, if everything is working, appear in AlertManager too.

Screen shot of a triggered Prometheus alerting rule

In addition to using the web UI you can query AlertManager directly from the command line using the docker container:

docker exec -ti prometheusserver_alert-manager_1 amtool \
  --alertmanager.url http://127.0.0.1:9093 alert

Alertname  Starts At                Summary
RedisDown  2018-03-09 18:33:58 UTC  Redis Availability alert.

At this point we have a basic but working AlertManager running alongside our local prometheus. It’s far from a complete or comprehensive configuration, and the alerts don’t yet go anywhere, but it’s a solid base to start your own experiments from. You can see all the code to make this work in the add_alert_manager branch.

Now we’ve covered how AlertManager fits into our tests and how to confirm it’s working, we will delve into how it’s configured, something that was much more work than I expected. Prometheus, by design, runs with a single configuration file. While this is fine for a number of use cases, my design goal of combining any combination of docker-compose files to create a test environment doesn’t play well with it. This became clear to me when I needed to add the AlertManager configuration to the main config file, but only when AlertManager is included. The config to enable AlertManager and its alerting rules is concise:

rule_files:
  - "/etc/prometheus/*.rules"

alerting:
  alertmanagers:
    - static_configs:
      - targets: ['alert-manager:9093']

The first part, rule_files:, accepts wild card selection of alert rule files. Each of these files contains one or more alert rules, such as our RedisDown example above. This globbing makes it easy to add rules to prometheus from each included component. The second part tells prometheus where it can find the alertmanager instance it should raise alerts with.

In order to use these configs I had to add another step to running prometheus; collecting all the configuration snippets and combining them into a single file before starting the process. My first thought was to create my own Prometheus container and preprocess the configuration before starting the daemon. I quickly decided against this as I don’t want to be responsible for maintaining my own fork of the Dockerfile. I was also worried about timing issues and start up race conditions from all the other containers adding their configs. Instead I decided to add another container.

This tiny busybox based container, which I named promconf-concat, runs a short shell script in a loop. This code concatenates all the configuration fragments, starting with the base config, together. If the complete config file has changed it replaces the existing, volume mounted, file which prometheus then detects as changed and reloads.
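
The script itself is nothing clever. A minimal sketch of the idea, assuming the fragments live under /fragments and the combined file is the volume mounted /etc/prometheus/prometheus.yml; the real container’s paths and sleep interval may differ:

#!/bin/sh
# rebuild the combined config from its fragments, forever
while true; do
  cat /fragments/base.yml /fragments/*.fragment > /tmp/prometheus.yml.new
  # only replace the live file if the content has actually changed
  if ! cmp -s /tmp/prometheus.yml.new /etc/prometheus/prometheus.yml; then
    cp /tmp/prometheus.yml.new /etc/prometheus/prometheus.yml
  fi
  sleep 5
done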

I have a strong suspicion I’ll be revisiting this part of the project again and splitting the fragments more. Adding ordering will probably be required as some of the exporters (such as MySQL) can’t be configured as targets via the file_sd_configs mechanism. However for now it’s allowed me to test the basic alerting functionality and continue to delve more deeply into Prometheus.

50 000 Node Choria Network

I’ve been saying for a while now that my aim with Choria is that someone can get a 50 000 node Choria network that just works without tuning; by default that should be the scale it supports, at minimum.

I started working on a set of emulators to let you confirm that yourself – and for me to use it during development to ensure I do not break this promise – though that got a bit side tracked as I wanted to do less emulation and more just running 50 000 instances of actual Choria, more on that in a future post.

Today I want to talk a bit about an actual 50 000 real node deployment and how I got there – the good news is that it’s terribly boring since, as promised, it just works.

Setup


Network


The network is pretty much just your typical DC network. Bunch of TOR switches, Distribution switches and Core switches, nothing special. Many dom0’s and many more domUs and some specialised machines. It’s flat; there are firewalls between all things, but it’s all in one building.

Hardware


I have 4 machines, 3 set aside for the Choria Network Broker Cluster and 1 for a client. While waiting for my firewall ports I just used the 1 machine for all the nodes as well as the client. It’s an 8GB RAM VM with 4 vCPUs, not overly fancy at all. It runs Enterprise Linux 6.

In the past I think we’d have considered this machine on the small side for an ActiveMQ network with 1000 nodes 😛

I’ll show some details of the single Choria Network Broker here and later follow up about the clustered setup.

Choria


I run a custom build of Choria 0.0.11; I bumped the max connections up to 100k and turned off SSL since we simply can’t provision certificates, so a custom build let me get around all that.

The real reason for the custom build though is that we compile our agent into the binary, so the whole deployment that goes out to all nodes and the broker is basically what you see below, no further dependencies at all. This makes for quite a nice deployment story since we’re a bit challenged in that regard.

$ rpm -ql choria
/etc/choria/broker.conf
/etc/choria/server.conf
/etc/logrotate.d/choria
/etc/init.d/choria-broker
/etc/init.d/choria-server
/etc/sysconfig/choria-broker
/etc/sysconfig/choria-server
/usr/sbin/choria

Other than this custom agent and no SSL we’re about on par with what you’d get if you just install Choria from the repos.

Network Broker Setup


The Choria Network Broker is deployed basically exactly as the docs describe, including setting the sysctl values to what was specified in the docs.

identity = choria1.example.net
logfile = /var/log/choria.log
 
plugin.choria.stats_address = ::
plugin.choria.stats_port = 8222
plugin.choria.network.listen_address = ::
plugin.choria.network.client_port = 4222
plugin.choria.network.peer_port = 4223

Most of this isn’t even needed if you use the defaults, as you should.

Server Setup


The server setup was even more boring:

logger_type = file
logfile = /var/log/choria.log
plugin.choria.middleware_hosts = choria1.example.net
plugin.choria.use_srv = false

Deployment


So we were being quite conservative and deployed it in batches of 50 at a time; you can see the graph below of this process as seen from the Choria Network Broker:

This is all pretty boring actually: quite predictable growth in memory, go routines, CPU etc. The messages you see being sent are me doing lots of pings and RPCs and stuff, just to check it’s all going well.

$ ps -auxw|grep choria
root     22365 12.9 14.4 2879712 1175928 ?     Sl   Mar06 241:34 /usr/choria broker --config=....
# a bit later than the image above
$ sudo netstat -anp|grep 22365|grep ESTAB|wc -l
58319

Outcome


So how does it work in practice? In the past we’d have had a lot of issues with getting consistency out of a network of even 10% this size. I was quite confident it was not the Ruby side, but you never know?

Well, let’s look at this one. I set discovery_timeout = 20 in my client configuration:

$ mco rpc rpcutil ping --display failed
Finished processing 51152 / 51152 hosts in 20675.80 ms
Finished processing 51152 / 51152 hosts in 20746.82 ms
Finished processing 51152 / 51152 hosts in 20778.17 ms
Finished processing 51152 / 51152 hosts in 22627.80 ms
Finished processing 51152 / 51152 hosts in 20238.92 ms

That’s a huge huge improvement, and this is without fancy discovery methods or databases or anything – it’s the, generally fairly unreliable, broadcast based method of discovery. These same nodes on a big RabbitMQ cluster never get a consistent result (and it’s 40 seconds slower), so this is a huge win for me.

I am still using the Ruby code here of course and it’s single threaded and stuck on 1 CPU, so in practice it’s going to have a hard ceiling of churning through about 2500 to 3000 replies/second, hence the long timeouts there.

I have a Go based ping; it round trips this network in less than 3.5 seconds quite reliably – wow.

The broker peaked at 25Mbps at times when doing many concurrent RPC requests and pings etc, but it’s all just been pretty good with no surprises.

So, that’s about it, I really can’t complain about this.

Choria Progress Update

It’s been a while since I posted about Choria and where things are. There are major changes in the pipeline so it’s well overdue an update.

The features mentioned here will become current in the next release cycle – about 2 weeks from now.

New choria module


The current gen Choria modules grew a bit organically and there’s a bit of confusion between the various modules. I now have a new choria module; it will consume features from the current modules and deprecate them.

On the next release it can manage:

  1. Choria YUM and APT repos
  2. Choria Package
  3. Choria Network Broker
  4. Choria Federation Broker
  5. Choria Data Adapters

Network Brokers


We have had amazing success with the NATS broker, lightweight, fast, stable. It’s perfect for Choria. While I had a pretty good module to configure it I wanted to create a more singular experience. Towards that there is a new Choria Broker incoming that manages an embedded NATS instance.

To show what I am on about, imagine this is all that is required to configure a cluster of 3 production ready brokers capable of hosting 50k or more Choria managed nodes on modestly specced machines:

plugin.choria.broker_network = true
plugin.choria.network.peers = nats://choria1.example.net:4223, nats://choria2.example.net:4223, nats://choria3.example.net:4223
plugin.choria.stats_address = ::

Of course there is Puppet code to do this for you in choria::broker.

That’s it, start the choria-broker daemon and you’re done – and ready to monitor it using Prometheus. Like before it’s all TLS and all that kinds of good stuff.

Federation Brokers


We had good success with the Ruby Federation Brokers but they also had issues particularly around deployment as we had to deploy many instances of them and they tended to be quite big Ruby processes.

The same choria-broker that hosts the Network Broker will now also host a new Golang based Federation Broker network. Configuration is about the same as before, so you don’t need to learn new things; you just have to move to the configuration in choria::broker and retire the old ones.

Unlike the past, where you had to run 2 or 3 of the Federation Brokers per node, you do not run any additional processes now; you just enable the feature in the singular choria-broker and you get only 1 process. Internally it runs 10 instances of the Federation Broker, which is much more performant and scalable.

Monitoring is done via Prometheus.

Data Adapters


Previously we had all kinds of fairly bad schemes to manage registration in MCollective. The MCollective daemon would make requests to a registration agent; you’d designate one or more nodes as running this agent and so build either a file store, a mongodb store etc.

This was fine at a small scale, but soon enough the concurrency in large networks would overwhelm what could realistically be expected from the Agent mechanism to manage.

I’ve often wanted to revisit that but did not know what approach to take. In the years since then the Stream Processing world has exploded with tools like Kafka, NATS Streaming and offerings from GCP, AWS and Azure etc.

Data Adapters are hosted in the Choria Broker and provide stateless, horizontally and vertically scalable Adapters that can take data from Choria and translate and publish them into other systems.

Today I support NATS Streaming and the code is at first-iteration quality; problems I hope to solve with this:

  • Very large global scale node metadata ingest
  • IoT data ingest – the upcoming Choria Server is embeddable into any Go project and it can exfil data into Stream Processors using this framework
  • Asynchronous RPC – replies to requests streaming into Kafka for later processing, more suitable for web apps etc
  • Adhoc asynchronous data rewrites – we have had feature requests where one person can make a request but not see the replies; they go into Elastic Search

Plugins


After 18 months of trying to get Puppet Inc to let me continue development on the old code base I have finally given up. The plugins are now hosted in their own GitHub Organisation.

I’ve released a number of plugins that were never released under Choria.

I’ve updated all their docs to be Choria specific rather than outdated install docs.

I’ve added Action Policy rules allowing read only actions by default – e.g. puppet status will work for anyone, while puppet runonce will give access denied.

I’ve started adding Playbooks; the first ones are mcollective_agent_puppet::enable, mcollective_agent_puppet::disable and mcollective_agent_puppet::disable_and_wait.

Embeddable Choria


The new Choria Server is embeddable into any Go project. This is not a new area of research for me – this was actually the problem I tried to solve when I first wrote the current gen MCollective, but I never got that far really.

The idea is that if you have some application – like my Prometheus Streams system – where you will run many of a specific daemon, each with different properties and areas of responsibility, you can make that daemon connect to a Choria network as if it’s a normal Choria Server. The purpose of that is to embed its life cycle management into the daemon and provide an external API into this.

The above mentioned Prometheus Streams server, for example, has a circuit breaker that can start/stop the polling and replication of data:

$ mco rpc prometheus_streams switch -T prometheus
Discovering hosts using the mc method for 2 second(s) .... 1
 
 * [ ============================================================> ] 1 / 1
 
 
prom.example.net
     Mode: poller
   Paused: true
 
 
Summary of Mode:
 
   poller = 1
 
Summary of Paused:
 
   false = 1
 
Finished processing 1 / 1 hosts in 399.81 ms

Here I am communicating with the internals of the Go process; they sit in their own Sub Collective and expose facts and RPC endpoints. I can use discovery to find only nodes in certain modes, with certain jobs etc, and perform functions you’d typically do via a REST management interface over a more suitable interface.

Likewise I’ve embedded a Choria Server into IoT systems where it uses the above mentioned Data Adapters to publish temperature and humidity, while giving me the ability to extract data from those devices on demand using RPC and do things like in-place upgrades of the running binary on my IoT network.

You can use this today in your own projects and it’s compatible with the Ruby Choria you already run. A full walk through of doing this can be found in the ripienaar/embedded-choria-sample repository.

Green system percentage vs user visible issues

How much of your system does your internal monitoring need to consider down before something is user visible? While there will always be the perfect chain of three or four things that can cripple a chunk of your customer visible infrastructure, there are often a lot of low importance checks that will flare up and consume time and attention. But what’s the ratio?

As a small thought experiment on one project I’ve recently started to leave a new, very simple, four panel Grafana dashboard open on a Raspberry Pi driven monitor that shows the percentage of the internal monitoring checks currently in a successful state next to the number of user visible issues and incidents. I’ve found watching the percentage of the system that’s working rise and fall, without anyone outside the company, and often the team, noticing, to be strangely hypnotic. I’ve also added a couple of panels to show the number of events of each of those types over the last hour.

Fugly Dashboard showing 4 panels described in the page

I was hoping the numbers would provide some inspiration towards questions like “Are we monitoring at the right level?” and “Do we need to be running all of these at this frequency?”, but so far I’ve mostly found it reassuring that the system can withstand small internal failures, while also worrying about the amount of state churn it seems to detect. While it’s not been as helpful as alert summary roll ups it has been a great source of visual white noise while thinking about other alerting issues.

Choria Playbooks DSL

I previously wrote about Choria Playbooks – a reminder they are playbooks written in YAML format and can orchestrate many different kinds of tasks, data, inputs and discovery systems – not exclusively ones from MCollective. It integrates with tools like terraform, consul, etcd, Slack, Graphite, Webhooks, Shell scripts, Puppet PQL and of course MCollective.

I mentioned in that blog post that I did not think a YAML based playbook is the way to go.

I am very pleased to announce that with the release of Choria 0.6.0 playbooks can now be written with the Puppet DSL. I am so pleased with this that, effective immediately, the YAML DSL is deprecated and set for a rather short life time.

A basic example can be seen here; it will:

  • Reuse a company specific playbook and notify Slack of the action about to be taken
  • Discover nodes using PQL in a specified cluster and verify they are using a compatible Puppet Agent
  • Obtain a lock in Consul ensuring only 1 member in the team perform critical tasks related to the life cycle of the Puppet Agent at a time
  • Disable Puppet on the discovered nodes
  • Wait for up to 200 seconds for the nodes to become idle
  • Release the lock
  • Notify Slack that the task completed
# Disables Puppet and Wait for all in-progress catalog compiles to end
plan acme::disable_puppet_and_wait (
  Enum[alpha, bravo] $cluster
) {
  choria::run_playbook(acme::slack_notify, message => "Disabling Puppet in cluster ${cluster}")
 
  $puppet_agents = choria::discover("mcollective",
    discovery_method => "choria",
    agents => ["puppet"],
    facts => ["cluster=${cluster}"],
    uses => { puppet => ">= 1.13.1" }
  )
 
  $ds = {
    "type" => "consul",
    "timeout" => 120,
    "ttl" => 60
  }
 
  choria::lock("locks/puppet.critical", $ds) || {
    choria::task(
      "action" => "puppet.disable",
      "nodes" => $puppet_agents,
      "fail_ok" => true,
      "silent" => true,
      "properties" => {"message" => "restarting puppet server"}
    )
 
    choria::task(
      "action"    => "puppet.status",
      "nodes"     => $puppet_agents,
      "assert"    => "idling=true",
      "tries"     => 10,
      "silent"    => true,
      "try_sleep" => 20,
    )
  }
 
  choria::run_playbook(acme::slack_notify,
    message => sprintf("Puppet disabled on %d nodes in cluster %s", $puppet_agents.count, $cluster)
  )
}

As you can see we can re-use playbooks and build up a nice cache of utilities that the entire team can use; the support for locks and data sharing ensures safe and coordinated use of this style of system.
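
From memory the plan is then invoked from the CLI with the mco playbook command, with the plan’s inputs becoming command line options. Treat the exact invocation below as an assumption and check the Playbook Docs for the real syntax:

mco playbook run acme::disable_puppet_and_wait --cluster alpha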

You can get this today if you use Puppet 5.4.0 and Choria 0.6.0. Refer to the Playbook Docs for more details, especially the Tips and Patterns section.

Why Puppet based DSL?

The Plan DSL, as you’ll see in the Background and History part later in this post, is something I have wanted for a long time. I think the current generation Puppet DSL is fantastic and really suited to this problem. Of course, having this in the Plan DSL, I can now also create Ruby versions of it and I might well do that.

The Plan DSL though has many advantages:

  • Many of us already know the DSL
  • There are vast amounts of documentation and examples of Puppet code, you can get trained to use it.
  • The other tools in the Puppet stable support plans – you can use puppet strings to document your Playbooks
  • The community around the Puppet DSL is very strong; I imagine rspec-puppet might soon support testing Plans and so, by extension, Playbooks. This appears to be already possible but not quite as easy as it could be.
  • We have a capable and widely used way of sharing these between us in the Puppet Forge

I could not compete with this in any language I might want to support.

Future of Choria Playbooks

As I mentioned the YAML playbooks are not long for this world. I think they were an awesome experiment and I learned a ton from them, but these Plan based Playbooks are such a massive step forward that I just can’t see the YAML ones serving any purpose whatsoever.

This release supports both YAML and Plan based Playbooks, the next release will ditch the YAML ones.

At that time a LOT of code will be removed from the repositories and I will be able to very significantly simplify the supporting code. My goal is to make it possible to add new task types, data sources, discovery sources etc really easily, perhaps even via Puppet modules so the eco system around these will grow.

I will be doing a bunch of work on the Choria Plugins (agent, server, puppet etc) and these might start shipping small Playbooks that you can use in your own Playbooks. The one that started this blog post would be a great candidate to supply as part of the Choria suite and I’d like to do that for this and many other plugins.

Background and History

For many years I have wanted Puppet to move in a direction that might one day support scripts – perhaps even become a good candidate for shell scripts, not at the expense of the CM DSL but as a way to reward people for knowing the Puppet Language. I wanted this for many reasons but a major one was because I wanted to use it as a DSL to write orchestration scripts for MCollective.

I did some proof of concepts of this late in 2012; you can see the fruits of that POC here. It allowed one to orchestrate MCollective tasks using the Puppet DSL and a Ruby DSL. This was interesting, but the DSL as it was then was no good for this.

I also made a pure YAML Puppet DSL that deeply incorporated Hiera and remained compatible with the Puppet DSL. This too was interesting and in hindsight given the popularity of YAML I think I should have given this a lot more attention than I did.

Neither of these really worked for what I needed. Around that time Henrik Lindberg started talking about massive changes to the Puppet DSL, and I think our first ever conversation covered this very topic – this must have been back in 2012 as well.

More recently I worked on YAML based playbooks for Choria; a sample can be seen in the old Choria docs. This is about the closest I got to something workable, and we have users in the wild using them and having success. As an exploration they were super handy and taught me loads.

Fast forward to Puppet Conf 2017 and Puppet Inc announced something called Puppet Plans. These are basically script like, uncompiled (kind of), top-down executed and aimed at use from your CLI, much like you would use a script. This was fantastic news; unfortunately the reality ended up with these locked up inside their new SSH based orchestrator called Bolt. Due to some very unfortunate technical direction and decision making, Plans are entirely unusable by Puppet users without Bolt. Bolt vendors its own Puppet and Facter and so it’s unaware of the AIO Puppet.

Ideally I would want to use Plans as maintained by Puppet Inc for my Playbooks, but the current status of things is that the team just is not interested in moving in that direction. Thus in the latest version of Choria I have implemented my own runner, result types, error types and everything needed to write Choria Playbooks using the Puppet DSL.

Conclusion


I am really pleased with how these playbooks turned out and am excited about what I can provide to the community in the future. There are no doubt some rough edges today in the implementation and documentation; your continued feedback and engagement in the Choria community around these will ensure that in time we have THE Playbook system in the Puppet ecosystem.

Prometheus experiments with docker-compose

As 2018 rolls along the time has come to rebuild parts of my homelab again. This time I’m looking at my monitoring and metrics setup, which is based on sensu and graphite, and planning some experiments and evaluations using Prometheus. In this post I’ll show how I’m setting up my tests and provide the Prometheus experiments with docker-compose source code in case it makes your own experiments a little easier to run.

My starting requirements were fairly standard. I want to use containers where possible. I want to test lots of different backends and I want to be able to pick and choose which combinations of technologies I run for any particular tests. As an example I have a few little applications that make use of redis and some that use memcached, but I don’t want to be committed to running all of the backing services for each smaller experiment. In terms of technology I settled on docker-compose to help keep the container sprawl in check while also enabling me to specify all the relationships. While looking into compose I found Understanding multiple Compose files and my basic structure began to emerge.

Starting with prometheus and grafana themselves I created the prometheus-server directory and added a basic prometheus config file to configure the service. I then added configuration for each of the things it was to collect from; prometheus and grafana in this case. Once these were in place I added the prometheus and grafana docker-compose.yaml file and created the stack.

docker-compose -f prometheus-server/docker-compose.yaml up -d

docker-compose -f prometheus-server/docker-compose.yaml ps

> docker-compose -f prometheus-server/docker-compose.yaml ps
        Name                   Command       State   Ports
-----------------------------------------------------------------------
prometheusserver_grafana_1     /run.sh       Up  0.0.0.0:3000->3000/tcp
prometheusserver_prometheus_1  /bin/prom ... Up  0.0.0.0:9090->9090/tcp

After manually configuring the prometheus data source in Grafana, all of which is covered in the README, you have a working prometheus scraping itself and grafana, and a grafana that allows you to experiment with presenting the data.

While this is a good first step I need visibility into more than the monitoring system itself, so it’s time to add another service. Keeping our goal of being modular in mind I decided to break everything out into separate directories and isolate the configuration. Adding a new service is as simple as adding a redis-server directory and writing a docker-compose file to run redis and the prometheus exporter we use to get metrics from it. This part is simple as most of the work is done for us. We use third party docker containers and everything is up and running. But how do we add the redis exporter to the prometheus targets? That’s where docker-compose’s merging behaviour shines.
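
A sketch of what such a redis-server/docker-compose.yaml could contain. The exporter image, its REDIS_ADDR setting and the network name are my assumptions rather than the exact contents of the repo (the public network is assumed to come from the base file):

services:
  redis-server:
    image: redis:4.0.8
    networks:
      - public
  redis-exporter:
    image: oliver006/redis_exporter
    environment:
      # point the exporter at the redis service defined above
      - REDIS_ADDR=redis://redis-server:6379
    networks:
      - public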

In our base docker-compose.yaml file we define the prometheus service and the volumes assigned to it:

services:
  prometheus:
    image: prom/prometheus:v2.1.0
    ports:
      - 9090:9090
    networks:
      - public
    volumes:
      - prometheus_data:/prometheus
      - ${PWD}/prometheus-server/config/prometheus.yml:/etc/prometheus/prometheus.yml
      - ${PWD}/prometheus-server/config/targets/prometheus.json:/etc/prometheus/targets/prometheus.json
      - ${PWD}/prometheus-server/config/targets/grafana.json:/etc/prometheus/targets/grafana.json
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'

You can see we’re mounting individual target files into prometheus for it to probe. Now in our docker-compose-prometheus/redis-server/docker-compose.yaml file we’ll reference back to the existing prometheus service and add to the volumes array.

  prometheus:
    volumes:
      - ${PWD}/redis-server/redis.json:/etc/prometheus/targets/redis.json

Rather than overriding the array, this incomplete service configuration adds another element to it, allowing us to build up our config over multiple docker-compose files. In order for this to work we have to run the compose commands with each config specified every time, resulting in the slightly hideous -

docker-compose \
  -f prometheus-server/docker-compose.yaml \
  -f redis-server/docker-compose.yaml \
  up -d

Once you’re running a stack with 3 or 4 components you’ll probably reach for aliases and add a base docker-compose replacement

alias dc='docker-compose -f prometheus-server/docker-compose.yaml -f redis-server/docker-compose.yaml'

and then call that with actual commands like dc up -d and dc logs. Adding your own application to the testing stack is as easy as adding a backing resource. Create a directory and the two config files and everything should be hooked in.
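
Assuming the two files are the service’s docker-compose.yaml and its prometheus target file, the latter is just the standard file_sd JSON format that the targets glob in prometheus.yml picks up. A minimal sketch, with placeholder host, port and job label:

[
  {
    "targets": ["my-app-exporter:9100"],
    "labels": {
      "job": "my-app"
    }
  }
]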

It’s early in the process and I’m sure to find issues with this naive approach, but it’s enabled me to create arbitrarily complicated prometheus test environments and start evaluating its ecosystem of plugins and exporters. I’ll add more to it and refine where possible (the manual steps should be reduced by Grafana 5, for example) but hopefully it’ll remain a viable way for myself and others to run quick, ad hoc tests.

Replicating NATS Streams between clusters

I’ve mentioned NATS before – the fast and light weight message broker from nats.io – but I haven’t yet covered the sister product NATS Streaming, so first some intro.

NATS Streaming is in the same space as Kafka: it’s a stream processing system and, like NATS, it’s super light weight, delivered as a single binary, and you do not need anything like Zookeeper. It uses normal NATS for communication and on top of that builds streaming semantics. Like NATS – and because it uses NATS – it is not well suited to running over long cluster links, so you end up with LAN local clusters only.

This presents a challenge since very often you wish to move data out of your LAN. I wrote a Replicator tool for NATS Streaming which I’ll introduce here.

Streaming?


First I guess it’s worth covering what Streaming is. I should preface also that I am quite new to using Stream Processing tools, so I am not about to give you some kind of official answer, just what it means to me.

In a traditional queue like ActiveMQ or RabbitMQ, which I covered in my Common Messaging Patterns posts, you do have message storage, persistence etc, but those who consume a specific queue are effectively a single group of consumers and messages either go to all of them or are load shared, all at the same pace. You can’t really go back and forth over the message store independently as a client. A message gets ack’d once, and once it’s been ack’d it’s done being processed.

In a Stream your clients each have their own view over the Stream; they all have their unique progress and point in the Stream they are consuming, and they can move backward and forward – and indeed join a cluster of readers if they so wish and then have load balancing with the other group members. A single message can be ack’d many times, but once ack’d a specific consumer will not get it again.

This is to me the main difference between a Stream processing system and just a middleware. It’s a huge deal. Without it you will find it hard to build very different business tools centred around the same stream of data since in effect every message can be processed and ack’d many many times vs just once.

Additionally Streams tend to have well defined ordering behaviours and message delivery guarantees, and they support clustering etc, much like normal middleware does. There’s a lot of similarity between streams and middleware, so it’s a bit hard sometimes to see why you wouldn’t just use your existing queueing infrastructure.

Replicating a NATS Stream


I am busy building a system that will move Choria registration data from regional data centres to a global store. The new Go based Choria daemon has a concept of a Protocol Adapter which can receive messages on the traditional NATS side of Choria and transform them into Stream messages and publish them.

This gets me my data from the high frequency, high concurrency updates from the Choria daemons into a Stream – but the Stream is local to the DC. Indeed in the DC I do want to process these messages to build a metadata store there, but I also want to process these messages for replication upward to my central location(s).

Hence the importance of the properties of Streams that I highlighted above – multiple consumers with multiple views of the Stream.

There are basically 2 options available:

  1. Pick a message from a topic, replicate it, pick the next one, one after the other in a single worker
  2. Have a pool of workers form a queue group and let them share the replication load

At the basic level the first option will retain ordering of the messages – order in the source queue will be the order in the target queue. NATS Streaming will try to redeliver a message whose delivery timed out and it won’t move on till that message is handled, thus ordering is safe.

With the 2nd option, since you have multiple workers, you have no way to retain ordering of the messages as workers will go at different rates and retries can happen in any order – it will be much faster though.

I can envision a 3rd option where I have multiple workers replicating data into a temporary store where on the other side I inject them into the queue in order but this seems super prone to failure, so I only support these 2 methods for now.

Limiting the rate of replication


There is one last concern in this scenario: I might have 10s of data centres all with 10s of thousands of nodes. At the DC level I can handle the rate of messages, but at the central location, where I might have 10s of DCs x 10s of thousands of machines, if I had to replicate ALL the data at near real time speed I would overwhelm the central repository pretty quickly.

Now in the case of machine metadata you probably want the first piece of metadata immediately, but from then on it’ll be a lot of duplicated data with only small deltas over time. You could be clever and only publish deltas, but you have the problem then that should a delta publish go missing you end up with an inconsistent state – this is something that will happen in distributed systems.

So instead I let the replicator inspect your JSON: if your JSON has something like fqdn in it, it can look at that and track it and only publish data for any single matching sender every 1 hour – or whatever you configure.

This has the effect that this kind of highly duplicated data is handled continuously at the edge, but that it only gets a snapshot replication upwards once an hour for any given node. This solves the problem neatly for me without there being any risk of deltas being lost, and it’s also a lot simpler to implement.

Choria Stream Replicator


So finally I present the Choria Stream Replicator. It does all that was described above with a YAML configuration file, something like this:

debug: false                     # default
verbose: false                   # default
logfile: "/path/to/logfile"      # STDOUT default
state_dir: "/path/to/statedir"   # optional
topics:
    cmdb:
        topic: acme.cmdb
        source_url: nats://source1:4222,nats://source2:4222
        source_cluster_id: dc1
        target_url: nats://target1:4222,nats://target2:4222
        target_cluster_id: dc2
        workers: 10              # optional
        queued: true             # optional
        queue_group: cmdb        # optional
        inspect: host            # optional
        age: 1h                  # optional
        monitor: 10000           # optional
        name: cmdb_replicator    # optional

Please review the README document for full configuration details.

I’ve been running this in a test DC with 1k nodes for a week or so and I am really happy with the results, but be aware this is new software so due care should be given. It’s available as RPMs, has a Puppet module, and I’ll upload some binaries on the next release.

A short 2017 review

It’s time for a little 2017 navel gazing. Prepare for a little self-congratulation and a touch of gushing. You’ve been warned. In general my 2017 was a decent one in terms of tech. I was fortunate to be presented a number of opportunities to get involved in projects and chat to people that I’m immensely thankful for and I’m going to mention some of them here to remind myself how lucky you can be.

Let’s start with conferences: I was fortunate enough to attend a handful of them in 2017. Scale Summit was, as always, a great place to chat about our industry. In addition to the usual band of rascals I met Sarah Wells in person for the first time and was blown away by the breadth and depth of her knowledge. She gave a number of excellent talks over 2017 and they’re well worth watching. The inaugural Jeffcon filled in for a lack of Serverless London (fingers crossed for 2018) and was inspiring throughout, from the astounding keynote by Simon Wardley all the way to the after conference chats.

I attended two DevopsDays, London, more about which later, and Stockholm. It was the first in Sweden and the organisers did the community proud. In a moment of annual leave burning I also attended Google Cloud and AWS Summits at the Excel centre. It’s nice to see tech events so close to where I’m from. I finished the year off with the GDS tech away day, DockerCon Europe and Velocity EU.

DevopsDays holds a special place in my heart as the conference and community that introduced me to so many of my peers that I heartily respect. The biggest, lasting contribution of Patrick’s for me is building those bridges. When the last “definition of Devops” post is made I’ll still cherish the people I met from that group of very talented folk. That’s one of the reasons I was happy to be involved in the organisation of my second London DevOps. You’d be amazed at the time, energy and passion the organisers, speakers and audience invest into a DevopsDays event. But it really does show on the day(s).

I was also honoured to be included in the Velocity Europe Program Committee. Velocity has always been one of the important events of the industry, and to go from budgeting most of a year in advance to attend, to being asked to help select from the submitted papers, and even more than that, to be a session chair, was something I’m immensely proud of and thankful to James Turnbull for even thinking of me. The speakers, some of whom were old hands at large events and some giving their first conference talk (in their second language no less!), were a pleasure to work with and made a nerve wracking day so much better than I could have hoped. It was also a stark reminder of how much I hate speaking in front of a room full of people.

Moving away from gushing over conferences, I published a book. It was a small experiment and it’s been very educational. It’s sold a few copies, made enough to pay for the domain for a few years and led to some interesting conversations with readers. I also wrote a few Alexa skills. While they’re not the most complicated or interesting bits of code from last year they have a bit of a special significance to me. I’m from a very non-technical background so it’s nice for my family to actually see, or in this case hear, something I’ve built.

Other things that helped keep me sane were tech reviewing a couple of books, hopefully soon to be published, and reviewing talk submissions. Some for conferences I was heavily involved in and some for events I wasn’t able to attend. It’s a significant investment of time but nearly every one of them taught me something. Even about technology I consider myself competent in.

I still maintain a small quarterly Pragmatic Investment Plan (PiP), which I started a few years ago, and while it’s more motion than progress these days it does keep me honest and ensure I do at least a little bit of non-work technology each month. Apart from Q1 2017 I surprisingly managed to read a tech book each month, post a handful of articles on my blog, and attend a few user groups here and there. I’ve kept the basics of the PiP for 2018 and I’m hoping it keeps me moving.

My general reading for the year was the worst it’s been for five years. I managed to read, from start to finish, 51 books. Totalling under 15,000 pages. I did have quite a few false starts and unfinished books at the end which didn’t help.

Oddly, my most popular blog post of the year was Non-intuitive downtime and possibly not lost sales. It was mentioned in a lot of weekly newsletters and resulted in quite a bit of traffic. SRE weekly also included it, which was a lovely change of pace from my employer being mentioned in the “Outages” section.

All in all 2017 was a good year for me personally and contained at least one career highlight. In closing I’d like to thank you for reading UnixDaemon, especially if you made it this far down, and let’s hope we both have an awesome 2018.

Terraform testing thoughts

As your terraform code grows in both size and complexity you should invest in tests and other ways to ensure everything is doing exactly what you intended. Although there are existing ways to exercise parts of your code I think Terraform is currently missing an important part of testing functionality, and I hope by the end of this post you’ll agree.

I want puppet catalog compile testing in terraform

Our current terraform testing process looks a lot like this:

  • precommit hooks to ensure the code is formatted and valid before it’s checked in
  • run terraform plan and apply to ensure the code actually works
  • execute a sparse collection of AWSSpec / InSpec tests against the created resources
  • visually check the AWS Console to ensure everything “looks correct”

We ensure the code is all syntactically valid (and pretty) before it’s checked in. We then run a plan, which often finds issues with module paths, names and such, and then the slow, all encompassing, and cost increasing apply happens. And then you spot an unexpanded variable. Or that something didn’t get included correctly with a count.

I think there is a missed opportunity to add a separate phase, between plan and apply above, to expose the compiled plan in an easy to integrate format such as JSON or YAML. This would allow existing testing tools, and things like custom rspec matchers and cucumber test cases, to verify your code before progressing to the often slow, and cash consuming, apply phase. There are a number of things you could usefully test in a serialised plan output. Are your “fake if” counts doing what you expect? Are those nested data structures translating to all the tags you expect? How about the stringified splats and local composite variables? And what are the actual values hidden behind those computed properties? All of this would be visible at this stage. Having these tests would allow you to catch a lot of the more subtle logic issues before you invoke the big hammer of actually creating resources.

I’m far from the first person to request this, and upstream have been fair and considerate, but it’s not something that’s on the short term road map. Workarounds do exist but they all have expensive limitations. The current plan file is in a binary format that isn’t guaranteed to be backwards compatible for external clients. Writing a plan output parser is possible but “a tool like this is very likely to be broken by future Terraform releases, since we don’t consider the human-oriented plan output to be a compatibility constraint”, and hooking the plan generation code, an approach taken by palantir/tfjson, will be a constant re-investment as terraform’s core rapidly changes.
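
To illustrate the workaround route, the palantir/tfjson approach usually looks something like the snippet below. The flags, the shape of the generated JSON and the jq assertion are assumptions based on how these tools generally behave and, as noted above, are liable to break as terraform changes:

# write the plan to a file, then convert it to JSON
terraform plan -out=plan.tfplan
tfjson plan.tfplan > plan.json

# at this point ordinary JSON tooling can make assertions against it,
# for example checking that a resource we expect is actually present
jq -e 'has("aws_instance.example")' plan.json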

Adding a way to publish the plan in an easy to process way would allow many other testing tools and approaches to bloom and I hope I’ve managed to convince you that it’d be a great addition to terraform.