Chef & FreeBSD: use pkgng

Baptiste Daroussin did an incredible job on FreeBSD with the new package system, named PkgNG. It brings a modern workflow, options, and shiny features that had been needed for a long time. Say goodbye to painfully long upgrades.

However, Chef is not yet able to use this packaging system, as it does not have a PkgNG provider. The following is a hacky way to use PkgNG with chef anyway, making it the default provider for your packages.
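As a rough sketch of what the hack looks like (assuming you have written a pkgng-aware provider, here called Chef::Provider::Package::Pkgng, in a cookbook library; the file name is illustrative), the platform mapping can be overridden so FreeBSD nodes default to it:

```ruby
# libraries/pkgng_default.rb - hypothetical cookbook library file.
# Assumes a Chef::Provider::Package::Pkgng class is defined elsewhere;
# Chef::Platform.set is the pre-Chef-12 way to pick a default provider
# for a resource on a given platform.
Chef::Platform.set(
  :platform => :freebsd,
  :resource => :package,
  :provider => Chef::Provider::Package::Pkgng
)
```

With this in place, a plain `package "curl"` resource on a FreeBSD node would go through the pkgng provider instead of the legacy ports one.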

Read more: poudriere & pkgng

Cuisine: updated & shiny

It’s been a while since my last post here, and I updated my cuisine dashboard yesterday, so here is a little follow-up on what’s new in it.

First, the asynchronous handler has been updated, pushing back data a bit differently and fixing a stupid issue with diffs. Check it out; it is needed for this new version of the cuisine dashboard.

I added a consumer that allows using PostgreSQL as a storage backend. This does NOT replace the need for elasticsearch. It’s by making mistakes that you learn, and this one was pretty insightful: it’s better (necessary?) to have permanent storage you can rely on. When you come to update mappings and change stuff everywhere, you don’t want to lose your data, so reindexing is the solution. There are now 3 scripts to cope with the incoming data:

  • direct indexing to ES; this one does NOT remove data from the message queue
  • storing to PG; data is removed from the message queue
  • doing both; data is removed from the message queue

These scripts will probably be federated into a single one later, which will be a proper daemon (with logging, pid files and such).
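To illustrate the design (this is not cuisine’s actual code; the class name and the backend interface are assumptions), a single consumer with pluggable backends could decide whether a message may leave the queue like this:

```ruby
require "json"

# Hypothetical sketch: one consumer fanning incoming run reports out to
# pluggable backends, acking the message only once a durable store
# (Postgres here) accepted it.
class RunConsumer
  # backends is a hash of name => callable, e.g.
  #   { :es => ->(doc) { index_in_elasticsearch(doc) },
  #     :pg => ->(doc) { store_in_postgres(doc) } }
  def initialize(backends)
    @backends = backends
  end

  # Feed one raw message to every backend. Returns true when the message
  # can safely be removed from the queue, i.e. only once the durable PG
  # store took it; ES-only indexing keeps the message queued.
  def handle(raw)
    doc = JSON.parse(raw)
    results = {}
    @backends.each { |name, store| results[name] = store.call(doc) }
    !!results[:pg]
  end
end
```

The point of the design is the ack rule: losing an elasticsearch index is recoverable by replaying from permanent storage, so only the durable backend gets to delete messages.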

UPDATE: this rewrite is done. A single script handles data consumption, indexing and storage. As a bonus, a quick and dirty packager based on FPM is provided.

Among the additions, cuisine now takes environments into account: you can filter by environment when looking at the last runs, and in search you can restrict to a particular one.

To end this post, a few screenshots:

[screenshots: index & search pages]

Cuisine: a chef dashboard

When I wrote the asynchronous chef handler that I presented in the previous post, I had a little idea in mind: being able to track changes made by chef. The idea grew a little and I am now releasing a small dashboard I wrote. It’s still at a very early stage of development, but I’ll try to present the idea behind it.

The changes (including diffs) are pushed to a queue. This queue is consumed by a script, and the data is indexed in elasticsearch, an open source search engine. On top of this I wrote a web interface, based on sinatra and twitter’s bootstrap, that allows you to see the latest runs, filter out runs with no changes, and search on criteria (hostname, updated resources, and inside the diffs).
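To give an idea of the search side (field names here are illustrative, not cuisine’s actual elasticsearch mapping), the query body the interface sends could look like this:

```ruby
# Hypothetical sketch of an elasticsearch query body: restrict to one
# hostname, full-text search inside the stored diffs, newest runs first.
# "hostname", "diff" and "run_time" are assumed field names.
def search_body(hostname, text)
  {
    :query => {
      :bool => {
        :must => [
          { :term  => { "hostname" => hostname } },
          { :match => { "diff"     => text } },
        ],
      },
    },
    :sort => [{ "run_time" => { :order => "desc" } }],
  }
end
```

Serialized to JSON, this is what would be POSTed to the index’s `_search` endpoint.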

To use this you will need a couple of things:

  • a STOMP broker (I use rabbitmq, but activemq or stompserver will fit too)
  • an elasticsearch instance (or cluster)
  • sinatra and its dependencies + the stomp ruby gem

The code is available on github. Feel free to get in touch on freenode IRC; you can find me in the #chef-hacking channel (nickname: nico).

Asynchronous reporting with chef

Configuration management tools are awesome. Using them, you are now managing loads of servers, reaching the pub on time, and you can focus on the really fun stuff. The counterpart is that they mostly work behind your back: changes are propagated quickly, and even if you store your cookbooks/modules in a VCS, even if you review them, you still want to know what really happens on your servers.

Puppet has a really nice automatic summary to do that job, and some people are doing cool things with it. Chef also has a summary of updated resources, but it doesn’t get back to a master like puppet’s does. So I wrote a little report handler to push data back to wherever you want. It is based on the stomp protocol, so it is non-blocking and scales easily if you have a large number of machines. With this you just need a consumer that will process the data into your favorite datastore.
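The shape of such a handler can be sketched like this (the real one subclasses Chef::Handler and publishes with the stomp gem; here the publisher is injected and the payload fields are illustrative, so the sketch stays self-contained):

```ruby
require "json"

# Hypothetical sketch of an asynchronous report handler: build a small
# JSON payload from the run status and hand it to a publisher (anything
# responding to #publish(destination, message), e.g. a Stomp::Client).
class AsyncReportHandler
  def initialize(publisher)
    @publisher = publisher
  end

  # run_status here is a plain hash standing in for Chef's run status
  # object; the destination queue name is an assumption.
  def report(run_status)
    payload = {
      "node"              => run_status[:node_name],
      "elapsed_time"      => run_status[:elapsed_time],
      "success"           => run_status[:success],
      "updated_resources" => run_status[:updated_resources],
    }
    # Fire-and-forget: publishing to a STOMP queue does not block the run.
    @publisher.publish("/queue/chef.reports", JSON.generate(payload))
  end
end
```

Because the handler only enqueues, a slow or busy datastore never slows the chef run itself; the consumer absorbs the load on its own schedule.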

I’ve put that on my github chef repo; check the readme for the complete file list & functions.

Trigger your chef runs with mcollective

Mcollective has been able to fire up puppetd runs for a while now, via a standalone RPC call or through the puppet commander binary (check it out, spread your load). I wanted to be able to fire up my chef clients with mcollective, using metadata to filter what should be impacted. So I wrote a little piece of ruby, mostly based on the puppet one, to achieve this. You can now do the following:

mco rpc chef runonce
mco rpc chef status

The plugin is available on my github mcollective repository.
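To give an idea of what such an agent looks like (a hedged sketch, not the plugin’s actual code: the chef-client invocation and the pid file location are assumptions), the SimpleRPC skeleton is roughly:

```ruby
# Hypothetical mcollective agent sketch, in the spirit of the puppetd one.
module MCollective
  module Agent
    class Chef < RPC::Agent
      action "runonce" do
        # Kick off a single chef-client run and hand stdout back to the
        # caller; run() is the RPC::Agent helper for shelling out.
        reply[:output] = ""
        run("chef-client --once", :stdout => :output, :chomp => true)
      end

      action "status" do
        # Report whether a run is in progress, judged by the pid file
        # (default path assumed here).
        reply[:running] = File.exist?("/var/run/chef/client.pid")
      end
    end
  end
end
```

The metadata-based filtering then comes for free from mcollective itself: `mco rpc chef runonce -W role=webserver` style calls only hit matching nodes.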

Mcollective agent, to manage agents

It started as a toy, to learn a little more about mcollective agents, but it finally turned into something useful (at least for me). I pushed my agent “smith” to my github account. It allows you to install or remove agents within mcollective. I usually use my configuration management tool to deploy such pieces of software, but it can be useful in some cases to go without it.

The mandatory internet meme reference: yes, xzibit approves.

STOMP-ed nagios

Basing more and more stuff on mcollective means relying more and more on one of its underlying components: the activeMQ middleware, and more precisely the stomp connector. I hit a weird bug a few days ago and realized that I was not functionally monitoring this part of the system. The port was bound and responded to connections, subscriptions were possible, but messages didn’t pass through. So I wrote a little plugin that makes this monitoring possible: it creates a random string, sends it to a queue, and then reads the queue to check that the result is the same.
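The round-trip logic boils down to a few lines (a sketch, not the plugin’s actual code: the transport object stands in for a real STOMP client, and the queue name is illustrative):

```ruby
require "securerandom"

# Standard nagios exit codes.
NAGIOS = { :ok => 0, :critical => 2 }

# Hypothetical sketch of the check: push a random token through a queue
# and verify it comes back intact. transport is anything responding to
# #publish(dest, msg) and #get(dest), e.g. a thin STOMP client wrapper.
def check_stomp_roundtrip(transport, queue = "/queue/nagios.check")
  token = SecureRandom.hex(16)     # random probe string
  transport.publish(queue, token)  # write it to the queue...
  echoed = transport.get(queue)    # ...and read it back
  if echoed == token
    [NAGIOS[:ok], "STOMP OK - message round-tripped"]
  else
    [NAGIOS[:critical], "STOMP CRITICAL - got #{echoed.inspect}"]
  end
end
```

Unlike a plain port check, this only goes green when a message actually traverses the broker, which is exactly the failure mode the bug exposed.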

This was made possible with the help of @ripienaar. Thanks for the explanation of the difference between topics & queues!

Meet the marionette

Another cool project I have been keeping an eye on for a few weeks is “the marionette collective“, aka mcollective. This project is led & developed by R.I. Pienaar, also one of the most active people in the puppet world.

Mcollective is a framework for distributed sysadmin. It relies on messaging middleware and has a lot going for it: flexibility, speed, and it is easy to understand.

Some time ago, I wrote a tool called “whosyourdaddy” to help me (and my goldfish-sized memory) find on which Xen dom0 a Xen domU was living. It worked fine, except that it was not dynamic: if a VM was migrated from one dom0 to another, I had to update the CMDB. Not really reliable (if an update fails, the CMDB is no longer accurate), and I didn’t want to embed this constraint in the Xen logic. So I decided to try writing my own mcollective agent, and here it is! It is built on top of a (very) small ruby module for xen and has its own client.

You can find on which dom0 a domU resides:

master1:~# ./mc-xen -a find --domu test
hypervisor2              : Absent
hypervisor1              : Absent
master1:~# ./mc-xen -a find --domu domu2
hypervisor2              : Present
hypervisor1              : Absent

Or list your domUs:

master1:~# ./mc-xen -a list

 no domU running

Download the agent & the client
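The heart of the find action can be sketched as a simple scan of `xm list` output (the output format is an assumption here; the real agent goes through the small ruby xen module instead of parsing text):

```ruby
# Hypothetical sketch: decide whether a domU is running on this dom0 by
# scanning `xm list` output. The first column of each row is the domain
# name; the first line is the column header.
def domu_present?(xm_list_output, domu)
  xm_list_output.lines.drop(1)              # skip the header line
                .map { |l| l.split.first }  # keep the Name column
                .include?(domu)
end
```

Each dom0 runs this locally and replies Present or Absent, which is what produces the per-hypervisor output shown above.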

Put your ruby in my ERB

Today I started installing a reverse proxy at $WORK. I chose to follow this way, and all my DNS data is stored in my CMDB. Once again, the solution came from #puppet! You can embed some “pure” ruby code in ERB templates. And, yes, you can query your database!

<% require "dbi"
   # DSN left blank on purpose; fill in your own connection string
   dbh = DBI.connect("", "you", "XXXX")
   query = dbh.prepare("your fancy query")
   query.execute
   while row = query.fetch do %>
<%= row.join(" ") %>
<% end %>
<% query.finish
   dbh.disconnect %>

I use this technique to generate the dnsmasq data file. Just use the subscribe function and all is done !