Google OAuth2 proxy on EL7

The team at bitly has written an HTTP reverse proxy that provides authentication using Google's OAuth2 API. They write about it in a blog post.

The proxy is written in Go and builds to a single, statically-linked executable, i.e. there are no complex run-time dependencies, which is great.

I've built an RPM for EL7 which also includes a sample systemd unit file and a sample configuration file. Both source and binary RPMs are available in my yum repo.
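
A unit file for a service like this can be very small. The following is only an illustrative sketch – the binary path, flag, and config location here are assumptions; the unit shipped in the RPM is the authoritative version:

[Unit]
Description=Google OAuth2 proxy
After=network.target

[Service]
ExecStart=/usr/bin/google_auth_proxy --config=/etc/google_auth_proxy.cfg
Restart=on-failure

[Install]
WantedBy=multi-user.target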

Additionally, I've created a puppet module that installs the RPM, creates a systemd service, and sets up an nginx front end to the proxy service. The module is available from the Puppet Forge, and also on github.

I'd be interested in any feedback/comments/bug reports/pull requests.

Monitoring file modifications with auditd, with exceptions

Playing with auditd, I had a need to monitor file modifications for all files recursively underneath a given directory. According to the auditctl(8) man page there are two ways of writing a rule to do this:

-w /directory/ -p wa
-a exit,always -F dir=/directory/ -F perm=wa

The former rule is basically a shortcut for the latter; the latter is also potentially more expressive, as it can take extra -F conditions. Ideally, I also needed to exclude certain files and/or sub-directories under the directory from triggering the audit rule, and it turns out you do it like this:

-a exit,never -F dir=/directory/directory-to-exclude/
-a exit,never -F path=/directory/file-to-exclude
-a exit,always -F dir=/directory/ -F perm=wa

According to this post, order is important: list the exceptions before the main rule.
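
To load the rules immediately, the same lines can be given to auditctl from a shell, exceptions first (a sketch using the example paths above):

auditctl -a exit,never -F dir=/directory/directory-to-exclude/
auditctl -a exit,never -F path=/directory/file-to-exclude
auditctl -a exit,always -F dir=/directory/ -F perm=wa

# List the loaded rules to confirm the ordering
auditctl -l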

passenger native libs on CentOS 7

I'm setting up a new puppet master running under passenger on CentOS 7, using packages from the puppetlabs and foreman repos. I used a fork of Stephen Johnson's puppet module to set everything up (with puppet apply). All went swimmingly, except I would see this error in the logs the first time the puppet master app loaded (i.e. the first time it got a request):

[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] *** Phusion Passenger: no passenger_native_support.so found for the current Ruby interpreter. Compiling one (set PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0 to disable)...
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # mkdir -p /usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/locations.ini/buildout/ruby/ruby-2.0.0-x86_64-linux
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] Not a valid directory. Trying a different one...
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] -------------------------------
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # mkdir -p /var/lib/puppet/.passenger/native_support/4.0.18/ruby-2.0.0-x86_64-linux
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # cd /var/lib/puppet/.passenger/native_support/4.0.18/ruby-2.0.0-x86_64-linux
[ 2014-11-07 23:22:13.2600 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] # /usr/bin/ruby '/usr/share/gems/gems/passenger-4.0.18/ruby_extension_source/extconf.rb'
[ 2014-11-07 23:22:13.3048 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] /usr/bin/ruby: No such file or directory -- /usr/share/gems/gems/passenger-4.0.18/ruby_extension_source/extconf.rb (LoadError)
[ 2014-11-07 23:22:13.3156 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] Compilation failed.
[ 2014-11-07 23:22:13.3156 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] -------------------------------
[ 2014-11-07 23:22:13.3157 2603/7f1a0660e700 Pool2/Spawner.h:159 ]: [App 2643 stderr] Ruby native_support extension not loaded. Continuing without native_support.

I double-checked, and I do have the native libs installed – they're in the rubygem-passenger-native-libs rpm – the main library is in /usr/lib64/gems/ruby/passenger-4.0.18/native/passenger_native_support.so.

Digging into the passenger code, I found it tries to load the native library by doing:

require 'native/passenger_native_support'

If I hacked this to:

require '/usr/lib64/gems/ruby/passenger-4.0.18/native/passenger_native_support'

then it loaded correctly.

It seems that /usr/lib64/gems/ruby/passenger-4.0.18 is not in the ruby load path.

Additional directories can be added to the ruby load path by setting an environment variable, RUBYLIB.

To set RUBYLIB for the apache process, I added the following line to /etc/sysconfig/httpd and restarted apache:

RUBYLIB=/usr/lib64/gems/ruby/passenger-4.0.18

The passenger native libraries now load correctly.
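
As a quick sanity check, the same require can be exercised from a shell with RUBYLIB set – a sketch assuming the paths above; the command should exit cleanly if the extension loads:

RUBYLIB=/usr/lib64/gems/ruby/passenger-4.0.18 /usr/bin/ruby -e "require 'native/passenger_native_support'"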

Building mod_proxy_wstunnel for CentOS 6

I needed to put an Apache-based reverse proxy in front of an install of Uchiwa, which is a Node.js-based dashboard for Sensu. The only problem is that Uchiwa uses WebSockets, which means it doesn’t work with the regular mod_proxy_http module. From version 2.4.5 onwards Apache includes mod_proxy_wstunnel, which fills the gap; however, CentOS 6 only has a 2.2.15 (albeit heavily patched) package.

There are various instructions on how to backport the module for 2.2.x (mostly for Ubuntu), but these involve compiling the whole of Apache from source again with the module added via an additional patch. I don’t want to maintain my own Apache packages; more importantly, Apache provides apxs, a.k.a. the APache eXtenSion tool, to compile external modules without requiring the whole source tree to be available.
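
For reference, building and installing a single-file module with apxs looks roughly like this (a sketch, assuming the flattened source is saved as mod_proxy_wstunnel.c and the httpd-devel package is installed):

# Compile the module against the installed httpd
apxs -c mod_proxy_wstunnel.c
# Install the resulting shared object into the httpd modules directory
apxs -i -n proxy_wstunnel mod_proxy_wstunnel.la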

So, I have created a standalone RPM package for CentOS 6 that just installs the mod_proxy_wstunnel module alongside the standard httpd RPM package. In order to do this I took the original patch, removed the alterations to the various build files, and flattened the source into a single file (the code changes basically added whole new functions, so they were fine to inline together). The revised source file and accompanying RPM spec file are available in this Github gist.

HP Microserver TPM

I’ve been dabbling with DNSSEC, which involves creating a few zone- and key-signing keys, and it became immediately apparent that my headless HP Microserver has very poor entropy generation for /dev/random. After poking and prodding, it was clear there’s no dormant hardware RNG that I could just enable to fix it.

Eventually I stumbled on this post which suggests you can install and make use of the optional TPM as a source of entropy.

I picked one up cheaply and installed it, following the above instructions to install and configure it. I found I only needed to remove the power cord for safety’s sake; the TPM connector on the motherboard is right at the front, so I didn’t need to pull the tray out.

Also, since that blog post was written, the rng-tools package on RHEL/CentOS 6.x now includes an init script, so it’s just a case of doing the following final steps:

# chkconfig rngd on
# service rngd start
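
With rngd running, the kernel’s entropy pool should stay topped up; a quick way to check:

cat /proc/sys/kernel/random/entropy_avail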

It should then be possible to pass this through to any KVM guests using the virtio-rng.ko module.

FOSDEM 2014 is coming

And with it comes almost a full week of side events.
For those who don't know FOSDEM (where have you been hiding for the past 13 years?), FOSDEM is the annual Free and Open Source Software Developers' European Meeting. If you are into open source, you just can't miss this event, where thousands of like-minded people will meet.

And if 2 days of FOSDEM madness isn't enough, people organise events around it.

Last year I organised PuppetCamp in Gent in the days before FOSDEM, and a MonitoringLove hackfest in our office for the 2 days after FOSDEM. This year another marathon is planned.

On Friday (31/1/2014) the CentOS community is hosting a Dojo in Brussels at the IBM Forum. (Free, but registration is required by the venue.)

After the success of PuppetCamp in Gent last year, we decided to open up the discussion and get more Infrastructure as Code people involved in CfgMgmtCamp.eu.

The keynotes for CfgMgmtCamp will include the leaders of the 3 most popular tools around: Mark Burgess, Luke Kanies, and Adam Jacob will all present at the event, which will take place in Gent right after FOSDEM. We expect people from all the major communities, including but not limited to Ansible, Salt, Chef, Puppet, CFEngine, Rudder, Foreman, and Juju. (Free, but registration is required for catering.)

And because 3 events in one week isn't enough, the Red Hat community is hosting their Infrastructure.next conference after CfgMgmtCamp at the same venue. (Free, but registration is required for catering.)

See you in Belgium next year!

hbase lzo compression on CentOS 6.3

The installation of hbase on CentOS is fairly painless thanks to those generous folks at Cloudera. Add their CDH4 repository and you're there: yum install hbase.

However, adding lzo compression for hbase is a little trickier. There are a few guides describing how to check out from github, build the extension, and copy the resulting libraries into the right place, but I want a nice, simple RPM package to deploy.

Enter the hadoop-lzo-packager project on github. Let's try to use this to build an RPM I can use to install lzo support for hbase.

Get the source code:

git clone git://github.com/toddlipcon/hadoop-lzo-packager.git

Install the deps:

yum install lzo-devel ant ant-nodeps gcc-c++ rpm-build java-devel

Build the RPMs:

cd hadoop-lzo-packager
export JAVA_HOME=/usr/lib/jvm/java
./run.sh --no-debs

Et voilà – cloudera-hadoop-lzo RPMs ready for installation. But wait… The libs get installed to /usr/lib/hadoop-0.20… That's no good; I want them in /usr/lib/hbase.

So I went ahead and hacked run.sh and template.spec to allow the install directory on the target machine to be specified on the command line. I can now use a command line something like this:

./run.sh --name hbase-lzo --install-dir /usr/lib/hbase --no-deb

That produces a set of RPMs (binary, source, and debuginfo) with the base name hbase-lzo and libraries installed to /usr/lib/hbase.
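
Installing the resulting binary RPM on an hbase node is then a one-liner (the exact file name will vary with version and architecture):

rpm -ivh hbase-lzo-*.x86_64.rpm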

My changes (plus another small change adding the necessary BuildRequires to the RPM spec template) are in my fork of the project on github.

Installing MCollective 2.2.0 on CentOS 6

I have recently been reintroduced to CentOS, having not used a RedHat distribution in anger since around RedHat Linux 7 (pre-RHEL). One of the first things I wanted to do was install MCollective, so I thought I’d document my journey. Below is how I went about installing ActiveMQ 5.5 for the messaging and MCollective 2.2.0, the most recent stable version at the time of writing, on CentOS 6.3. I was surprised to learn that Puppet Labs have made this an incredibly easy process since my last attempt, specifically with their excellent ActiveMQ packaging.

First things first, install the Puppet Labs repository: rpm -Uvh http://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-5.noarch.rpm

ActiveMQ

Install Java 1.6, as 1.7 is not yet supported. The Oracle JRE works just as well; however, it is no longer packaged in CentOS due to inane distribution restrictions imposed by Oracle, so we’re going to go with OpenJDK: yum install java-1.6.0-openjdk

Install ActiveMQ: yum install activemq

Edit /etc/activemq/activemq.xml with the following configuration, obviously replacing the passwords as you go:

<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd
    http://activemq.apache.org/camel/schema/spring http://activemq.apache.org/camel/schema/spring/camel-spring.xsd">

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true" schedulePeriodForDestinationPurge="60000">
        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" producerFlowControl="false"/>
                <policyEntry queue="*.reply.>" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000"/>
              </policyEntries>
            </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <plugins>
          <statisticsBrokerPlugin/>
          <simpleAuthenticationPlugin>
            <users>
              <authenticationUser username="mcollective" password="eeVah9pahNgeefaikietohMa" groups="mcollective,everyone"/>
              <authenticationUser username="admin" password="Thih0theipeesuocie6eif9h" groups="mcollective,admins,everyone"/>
            </users>
          </simpleAuthenticationPlugin>
          <authorizationPlugin>
            <map>
              <authorizationMap>
                <authorizationEntries>
                  <authorizationEntry queue=">" write="admins" read="admins" admin="admins"/>
                  <authorizationEntry topic=">" write="admins" read="admins" admin="admins"/>
                  <authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective"/>
                  <authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective"/>
                  <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
                </authorizationEntries>
              </authorizationMap>
            </map>
          </authorizationPlugin>
        </plugins>

        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="20 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="1 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="100 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:6166"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:6163"/>
        </transportConnectors>
    </broker>

    <import resource="jetty.xml"/>
</beans>

Restart ActiveMQ to load the new configuration: service activemq restart
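
It’s worth confirming that the broker is now listening on the Stomp and OpenWire ports defined above before moving on:

netstat -plnt | egrep '6163|6166'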

MCollective Server

Next we need to install MCollective on each node you wish to be part of the collective: yum install mcollective

Configure MCollective in /etc/mcollective/server.cfg with some basic settings:

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = info
daemonize = 1

# Plugins
securityprovider = psk
plugin.psk = eiqu5aeKahxeemith6Sahkah

connector = stomp
plugin.stomp.host = localhost
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = eeVah9pahNgeefaikietohMa

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

Restart MCollective to load the new configuration: service mcollective restart

MCollective Client

Install the client: yum install mcollective-client

Likewise configure the client with similar settings in /etc/mcollective/client.cfg:

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logger_type = console
loglevel = warn

# Plugins
securityprovider = psk
plugin.psk = eiqu5aeKahxeemith6Sahkah

connector = stomp
plugin.stomp.host = localhost
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = eeVah9pahNgeefaikietohMa

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
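
With both sides configured, a quick first sanity check is mco ping, which should get a reply from every node in the collective. The output will look something like this (the timings here are purely illustrative):

aeg@client ~ % mco ping
node1.example.com                        time=69.35 ms

---- ping statistics ----
1 replies max: 69.35 min: 69.35 avg: 69.35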

Now verify you can communicate with your node:

aeg@client ~ % mco inventory node1.example.com
Inventory for node1.example.com:

   Server Statistics:
                      Version: 2.2.0
                   Start Time: Sat Sep 22 03:19:52 +0100 2012
                  Config File: /etc/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 1567
               Total Messages: 1
      Messages Passed Filters: 1
            Messages Filtered: 0
             Expired Messages: 0
                 Replies Sent: 0
         Total Processor Time: 0.02 seconds
                  System Time: 0.02 seconds

 Agents:
    discovery       rpcutil

 Data Plugins:
    agent           fstat

 Configuration Management Classes:
    No classes applied

 Facts:
    mcollective => 1

In future posts I’ll cover securing MCollective from traffic sniffing and man-in-the-middle attacks.

From Imperative to Declarative System Configuration with Puppet

Peanut Butter & Jam Sandwich

After my impromptu presentation about configuration management with Puppet at BarCampGR a few weeks ago, several people mentioned that they had tried to use Puppet before, but couldn’t figure out how to make it do anything in the first place.

I’d like to clear up some of that uncertainty if I can, so here is an example of the simplest thing that could possibly work. This is not an example of how best to organize your code or write expressively, but it will show how you might start transitioning from imperative to declarative thinking through the use of Puppet’s Exec resource type.

Concepts

Declarative vs. Imperative Programming

Puppet’s standard DSL[1] uses a declarative programming style that is often unfamiliar to newcomers, even if they are experienced programmers in other domains. Most commonly-used programming languages are examples of imperative programming, in which the programmer must describe a specific algorithm or process. Declarative programming instead focuses on describing the particular state or goal to be achieved. I’ll illustrate the difference with an example in natural language:

Make Me a Sandwich! (Imperative)
Spread peanut butter on one slice of bread. Set this slice of bread on a plate, face-up. Spread jelly on another slice of bread. Place this second slice of bread on top of the first, face-down. Bring me the sandwich.

The Sandwich I Desire. (Declarative)
There should be a sandwich on a plate in front of me in five minutes’ time. It should have only peanut butter and jelly between the two slices of bread.

Declarative programming is a more natural fit for managing system configuration. We want to be talking about whether or not MySQL is installed on this machine or Apache on that machine, not whether yum install mysql-server has been run here or apt-get install apache2 there. It allows us to express intent more clearly in the code. It is also less tedious to write and can even be more portable across platforms. (See Luke Kanies’ blog post[1] for more advantages specific to the Puppet DSL.)

Puppet’s Resource Types

I won’t go into detail here, but Puppet uses an abstraction layer to manage what it calls “resources.” These are anything from users, to packages, to services, to files, and even commands to be executed (like the Exec resource type we’ll be starting with in this example). For a complete list of the available resource types (and all of their parameters), see the Puppet Type Reference documentation[2].
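
You can poke at this abstraction layer directly from a shell with the puppet resource command, which inspects the current state of a given resource and prints it in Puppet’s own DSL; for example:

# Show the current state of the tmux package as Puppet sees it
puppet resource package tmux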

Environment Setup

For this example, I will assume we’re working with a minimal install of CentOS 6.3 (for example, this Vagrant box provided by OpsCode – the makers of Chef, the ‘other’ popular configuration management tool). I’ll assume that we’ve already installed RVM, Ruby, and the puppet gem using a bootstrapping script like the one from Justin Kulesza’s recent blog post. Here is a Vagrantfile to do just that:

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
Vagrant::Config.run do |config|
  config.vm.box = "centos-6.3"
  config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/boxes/opscode-centos-6.3.box"
 
  # Execute bootstrap.sh script (from a Gist) to install RVM, Ruby, Puppet, etc.
  # Read the Gist: https://gist.github.com/3615875
  config.vm.provision :shell, :inline => "curl -s -L https://raw.github.com/gist/3615875/bootstrap.sh > ~/bootstrap.sh"
  config.vm.provision :shell, :inline => "bash ~/bootstrap.sh"
 
end

What we’d like to do is install and configure tmux and Matt Furden’s wemux script for managing shared tmux sessions.

Installing Tmux Using Puppet Execs

We’ll start with installing tmux. (Note again: we’re deliberately abusing the Exec resource – I’ll come back to this when we refactor.)

Let’s make one file to hold all of our Puppet code and call it site.pp and put it in /etc/puppet.

sudo mkdir /etc/puppet && sudo vim /etc/puppet/site.pp

Inside site.pp we’ll add our first resource, an Exec to install the EPEL repos:

exec{"install-epel":
  command => "/bin/rpm -i http://linux.mirrors.es.net/fedora-epel/6/i386/epel-release-6-7.noarch.rpm",
}

Now we can apply this manifest:

[vagrant@localhost ~]$ rvmsudo puppet apply /etc/puppet/site.pp
/usr/local/rvm/rubies/ruby-1.9.3-p125/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead.
notice: /Stage[main]//Exec[install-epel]/returns: executed successfully
notice: Finished catalog run in 5.52 seconds
[vagrant@localhost ~]$

Great! We’ve installed the EPEL repository we need! Unfortunately, if we try to apply this manifest again…

[vagrant@localhost ~]$ rvmsudo puppet apply /etc/puppet/site.pp
/usr/local/rvm/rubies/ruby-1.9.3-p125/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead.
err: /Stage[main]//Exec[install-epel]/returns: change from notrun to 0 failed: /bin/rpm -i http://linux.mirrors.es.net/fedora-epel/6/i386/epel-release-6-7.noarch.rpm returned 1 instead of one of [0] at /etc/puppet/site.pp:3
notice: Finished catalog run in 5.49 seconds
[vagrant@localhost ~]$

…Puppet will re-run the exact same command, and this time it will fail (return a non-zero exit code) because the repository is already installed. We need a check to make sure that our manifest remains idempotent – that is, that running it more than once will result in the same system state as a single run[3]. As you might guess, Puppet has many ways of doing this. We’ll go with what’s most straightforward for our present case:

exec{"install-epel":
  command => "/bin/rpm -i http://linux.mirrors.es.net/fedora-epel/6/i386/epel-release-6-7.noarch.rpm",
  creates => "/etc/yum.repos.d/epel.repo",
}

Adding the creates attribute to our Exec resource ensures that the command will only be run if no file exists at the path provided to creates. Installing the EPEL package creates at least one file: /etc/yum.repos.d/epel.repo. As long as that file is present, this Exec will not be run again. If it is missing, then the EPEL repo has probably been uninstalled and our Exec will reinstall it.

Next we’ll install tmux:

exec{"install-tmux":
  command => "/usr/bin/yum install -y tmux",
  creates => "/usr/bin/tmux",
}

Now when we apply our manifest we should see something like:

[vagrant@localhost ~]$ rvmsudo puppet apply /etc/puppet/site.pp
/usr/local/rvm/rubies/ruby-1.9.3-p125/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead.
notice: /Stage[main]//Exec[install-tmux]/returns: executed successfully
notice: Finished catalog run in 18.55 seconds
[vagrant@localhost ~]$

But what if we were to start from scratch with a new base VM? Can we use our site.pp manifest to get back to the state we’re in now? You might think we have all of the information needed to apply this manifest to an identical base VM and reach the same state, but you’d be wrong. The order of application for Puppet resources is only deterministic where it is explicitly defined. As our manifest is currently written, we have not stated whether tmux or the EPEL repo needs to be installed first. Sometimes it might work just fine, but sometimes the "install-tmux" Exec will be applied first, and the entire catalog run will fail.

This is a big gotcha with Puppet, but it’s easy to fix. We’ll simply add a requirement to the "install-tmux" Exec:

exec{"install-tmux":
  command => "/usr/bin/yum install -y tmux",
  creates => "/usr/bin/tmux",
  require => Exec["install-epel"],
}

Now Puppet will only apply the "install-tmux" resource if the "install-epel" has already been successfully applied. If "install-epel" needs to be run (i.e. /etc/yum.repos.d/epel.repo doesn’t exist), then Puppet will first run that Exec. If "install-epel" fails, Puppet won’t even try to run "install-tmux" – instead it will display an error that "install-tmux" was not run due to failed dependencies.

Installing Wemux Using Puppet Execs

Now we’re ready to follow the steps in the wemux README to create some more Execs:

exec{"clone-wemux-repo":
  command => "/usr/bin/git clone git://github.com/zolrath/wemux.git /usr/local/share/wemux",
  creates => "/usr/local/share/wemux",
}
 
exec{"symlink-wemux-into-path":
  command => "/bin/ln -s /usr/local/share/wemux/wemux /usr/local/bin/wemux",
  creates => "/usr/local/bin/wemux",
}
 
exec{"cp-wemux-conf":
  command => "/bin/cp /usr/local/share/wemux/wemux.conf.example /usr/local/etc/wemux.conf",
  creates => "/usr/local/etc/wemux.conf",
}

But what about that last part?

Then set a user to be a wemux host by adding their username to the host_list in /usr/local/etc/wemux.conf
vim /usr/local/etc/wemux.conf
host_list=(foobar)

We could use sed and create an extra file to mark that the job is done…

exec{"configure-wemux":
  command => "/bin/sed -i -e 's/change_this/vagrant/g' /usr/local/etc/wemux.conf && touch /etc/wemux-configured",
  creates => "/etc/wemux-configured",
  require => Exec["cp-wemux-conf"],
}

This will get the job done, but it’s definitely not pretty.

Hopefully you can see that we’re running against the grain by forcing our Puppet manifests to act as imperative code rather than declarations about the desired state of our system. Puppet gives us much better tools to work with than Execs if we’re willing to think about things a little bit differently.

Refactoring More Declaratively

Here’s what a first pass at refactoring our manifests might look like:

We’ll start by creating a ‘wemux’ module[4].

/etc/puppet/modules/wemux/manifests/init.pp:

class wemux($wemux_hosts = 'foobar'){
  package{"epel-release":
    provider => rpm,
    source => "http://linux.mirrors.es.net/fedora-epel/6/i386/epel-release-6-7.noarch.rpm",
    ensure => installed,
  }
  package{"tmux":
    ensure => installed,
    require => Package["epel-release"],
  }
  exec{"wemux-clone":
    command => "/usr/bin/git clone git://github.com/zolrath/wemux.git /usr/local/share/wemux",
    creates => "/usr/local/share/wemux",
  }
  file{"/usr/local/bin/wemux":
    ensure => link,
    target => "/usr/local/share/wemux/wemux",
    require => Exec["wemux-clone"],
  }
  file{"/usr/local/etc/wemux.conf":
    ensure => present,
    content => template("wemux/wemux.conf.erb"),
  }
}

/etc/puppet/modules/wemux/templates/wemux.conf.erb:

(This is a templatized version of the wemux config file we were copying and editing before)

...
host_list=(<%= wemux_hosts %>)
...

Then in our site.pp we can include the wemux class to pull in all of the resources described in our module…

/etc/puppet/site.pp:

class{"wemux":
  wemux_hosts => "vagrant"
}
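
Applying the refactored manifest works just as before; if Puppet doesn’t pick the module up automatically, the module path can be passed explicitly (a sketch, assuming the layout above):

rvmsudo puppet apply --modulepath=/etc/puppet/modules /etc/puppet/site.pp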

Here I’ve moved most of the code into a self-contained wemux module. In this case our module consists of a single parameterized class containing all of the necessary resources to install and configure wemux. It is included in site.pp where a value for the wemux_hosts class parameter is also provided.

I have also used a couple of new resource types here: File and Package. Both of these are much better suited to the task at hand. You’ll notice that the File resource type also lets us use an ERB template for the configuration file. I simply modified the example configuration file and added it to our wemux module as a template. Then I used Puppet’s template() function to provide a value for the File resource’s content attribute.

Summary

There’s a lot more we could do to improve things even further. For example, we could extend Puppet with a custom type for our git repository rather than using an Exec to clone from github. I’ll leave that as an exercise for the reader (hint), but this is a good start.

We’ve now moved from specifying how to install and configure tmux and wemux to specifying what we want the state of our system to be. Our code is more readable, better expresses our intent, and will be easier to maintain. We’re working with the declarative Puppet DSL now, not against it.

Footnotes

[1] Why Puppet has its own configuration language

[2] Puppet Type Reference Documentation

[3] Idempotence Is Not a Medical Condition

[4] Puppet Learning: Modules and Classes

Additional Resources

Learning Puppet

Pro Puppet

Puppet Style Guide


Beyond Bundler: A Configuration Management Starter Kit

Configuration management or “infrastructure as code” can provide a common language for application developers and operations specialists alike to describe the infrastructure requirements of an application. By capturing these requirements in code, bootstrapping becomes a repeatable process, and insights from operations teams supporting the application in a production environment can be fed back to the developers in a virtuous cycle.

As an example of what this might look like with some current tools, I’ve created a starter kit for using Vagrant, Veewee, and a bit of Puppet to automate the building of virtualized infrastructure for a Rails 3 application. The end result is a VirtualBox virtual machine described in code (from a Veewee basebox definition of the basic virtual hardware to Puppet manifests describing the necessary packages and bootstrapping). This means that down the road, an environment in which your application will run can be repeatedly built, and all of the steps of that process are both visible and modifiable, with changes captured in source control.

Find it on GitHub, here.

There are a few things I haven’t finished wiring together as of this writing, but it should be enough to see how the main pieces fit together.

The project makes as few assumptions as possible about your environment. It assumes that you have a recent version of VirtualBox installed, RVM installed, and ruby-1.9.3-p125 installed via RVM; from there, the project .rvmrc and Bundler should take care of the remaining dependencies.

To run it (build a new VM from scratch and deploy your app to it), you’ll want to run the following commands:
vagrant basebox build demo-centos-box
vagrant basebox validate demo-centos-box
vagrant basebox export demo-centos-box
vagrant up
cap environment:vagrant deploy:setup
cap environment:vagrant deploy

As of this writing, I haven’t added the hooks to actually launch the application server, but you can start WEBrick by hand like so:
vagrant ssh
…and then from within the VM:
cd sites/demo-app/current && RAILS_ENV=production bundle exec rails s

Then, in your browser visit 33.33.33.10:3000. Tada!

This was a weekend project, and it’s likely that I’ve overlooked some things, but I plan to continue honing it. Let me know in the comments (or in pull requests) what’s still broken. One of the major motivations of this approach is getting past the issue of “works for me” – so if it doesn’t work for you, I want to know! Thanks.
