Women in Tech Starts in Infancy – Here’s How my Daddy Helped

I wrote in a previous article about how I tend not to notice immediately that, say, a tech conference has 300 men and six women in it, because I innately think of people as people, rather than dividing them by gender or race or age or whatever. And when I was thinking about why this is, I realised I have my father to thank for a hell of a lot.

I am the only child of a product-of-her-time motherly mother, and a quite extraordinary eccentric polymath father who neither notices nor cares about social conventions like gender or mainstream culture. Daddy is into all sorts of cool stuff. He loves engineering, computer science, zoology, ecology, art, history (including and especially industrial history), science of all kinds, dinosaurs, fossils, ... Oh all kinds of cool. When I was a little girl he bought a BBC B and we both learnt to program it in BASIC. And he read me some great stories when I was little too, so let's include literature.

Here let me mention the first books I recall him reading me, which were Arthur Ransome's glorious "Swallows and Amazons" series. This is relevant, because Ransome was way ahead of his time in terms of gender. "Swallows and Amazons" was published back in 1930, yet it - and the rest of the series - contain the most unstereotyped characters you could hope for. There are more girls than boys, they all do the same exciting, adventurous stuff, the dominant character is a girl, girls often take the lead as they work together as equal teams, and no-one ever suggests that a particular role or activity is unsuitable for a girl (or a boy). Compare this to Enid Blyton, who was all "you girls stay and look after the camp while we boys go and have a brave and exciting adventure". Ransome is just awesome. Go and read them, especially if you like sailing. Or pirates.

And I hold my father responsible for most of the toys I played with. My mother tried to interest me in dolls, but it didn't work. I preferred train sets, Lego, toy tractors of all sizes, Daddy's old Tonka Toy vehicles - including a really cool crane and some trucks - a model farm that he built for me, intelligent games, many many books and art materials. With this kind of material culture it's no wonder that it didn't occur to me to feel limited or shaped by my gender.

I doubt my father would say he's a feminist. He isn't political at all, and I've never heard him talk about isms of any kind. But nor does he judge or prejudge people on the basis of anything. I notice, though, that he does seem to note and be pleased when a woman makes a particular achievement or breaks a glass ceiling - say, being the first female something-or-other. And likewise with me: he never gave me any indication that I should or should not aspire to anything on account of my gender. He's always been terrifically proud of my achievements and interested in my endeavours, and he never seems to think anything is beyond me if I make the effort.

So, that's the early moulding of me - it hasn't stopped the world messing with my head since then, but it's been some defence against that - and I'm glad of it. But I'm not letting it blind me to the real and serious prejudice that women face when dealing with a lot of people who aren't my Daddy. However, I think we have a lot to learn from it about how to bring up new generations, especially how to increase inclusivity, particularly in STEM subjects and industries. And I've been reading some articles just recently in the UK national press and psychology journals that add academic weight to my anecdotal evidence.

In this article Christia Spears Brown, Associate Professor of Developmental Psychology at the University of Kentucky, shows how even the most innocent-seeming (in this case merely referring to boys and girls separately) gender stereotyping of small children narrows their sense of their own potential. And in this sequel she shows how easily meaningless prejudices and biases can be created in children and proves that gender stereotypes have nothing to do with biology, and everything to do with an innate tendency to segregate according to artificially created differences. In short, children develop "feminine" and "masculine" traits, preferences, aspirations only because adults give them the message that they should, that they're different, that their biology determines them.

And if you don't think that any of that matters, you ought to think about stereotype threat. This is where a stereotype becomes a self-fulfilling prophecy: a person fears their performance will be used to justify a stereotype about their group, and, as a result, underperforms. There is an interesting article about stereotype threat in school-age chess in the March 2014 edition of The Psychologist journal, entitled "Queens under threat from Kings". It's pretty widely, if erroneously, believed that males are innately better at chess than females. And with Judit Polgar the only woman ranked among the world's elite Grandmasters at present, there's little in the way of role-model encouragement for women and girl chess players. This research looked at under-12 girl chess players. They were all rated for ability, and then their performance, as a percentage of their rated ability, was observed when playing against boys and against girls. Against boys, they played markedly below their ability. The fear of being judged according to the prevailing gender stereotype actually inhibited their game, overriding their actual talent and ability.

So it really does matter what we tell our little children, about themselves, their potential and their abilities. It really does harm the girls when we steer them away from technology, science and maths and towards dolls, princesses, sparkles and pink. And it harms the boys when we discourage them from nurturing, playing gently, playing to learn social skills, or engaging with their emotions.

There's more to this than how gendering of toys affects girls' initial uptake of STEM careers. Once a woman is established in a career, with ambition and high aspirations for promotion and leadership, her ability to fulfil those aspirations mid-to-long-term is likely to be heavily influenced by how willing her partner is to take a 50% share in domestic and child-rearing duties. It doesn't take a lot of imagination to see how that probability is damaged by giving our little girls dolls, toy cookers, toy cleaning equipment, and shaming our little boys into avoiding these kinds of toys, "because they are toys for girls".

This week in the UK, the Internet, newspapers, TV and radio have been fizzing with debate about how we stereotype our children, sparked by a big campaign from the excellent @LetToysBeToys. They've had marked success in persuading toy retailers in the UK to stop labelling their toys as 'for girls' and 'for boys', and to stop pushing the domestic, pink, princessy stuff towards girls, and the construction, space ships, trains and such towards boys. Now they've turned their attention to children's book publishers, in an attempt to get rid of fatuously gendered books called things like "Adventure stories for boys" or "Colouring book for girls". Or, as Katy Guest, the Independent on Sunday's Literary Editor put it, "Girls’ Book of Boring Princesses …and... Great Big Book of Snot for Boys". And they're getting a lot of support, not just from literary editors, but high-profile authors, the main high-street book retailer Waterstones, and leading publishers including Usborne.

I'm delighted by their success. I've known this is a big problem for a long time, but until now it's been hard to find people who take it seriously. Too many parents - even intelligent ones who should know better - seem to like their "little pink princess" or their "rough tough little man", and they can't or won't see the harm they are doing. And the harm is this: too few women in tech and other STEM careers, too few women in leadership, in chess, in government. Too many women in low paid, dead end "feminine" jobs, in careers that fall by the wayside when children arrive, because they pay less than the childcare will cost and childrearing is still considered the responsibility of mothers, not parents. Too many women who are afraid to promote themselves, who lack the confidence to assert themselves, to push for what they deserve, to stand up to bullies, because of that early conditioning to be nice, be likeable, be pretty. And of course, too many men and male-designed cultures with a sense of entitlement to all the power and the best roles, just because they are male. Macho brogrammer cultures that alienate women, because both the men and the women were brought up to see the other as different, alien, and are thus unable to build and participate in cultures where people are just people and there is room for them all. And too many unhappy, alienated, messed-up men who can't sustain relationships, understand themselves or anyone else, because they were brought up to be tough and repress their emotions. It messes us all up. Let's make it stop.

The Science Behind Mental Overload and How to Avoid It (the overload, not the science…)

A subject that's been talked about increasingly over the last few months is burnout among IT workers. There was a large and positive response to Stephen Nelson-Smith's presentation on the subject at Devopsdays Tel Aviv in October 2013, and again to Mike Preston's ignite talk at Devopsdays London in November.

They were both roundly praised for their courage in speaking publicly about their own burnout, but more than that they encouraged other people to share their own experiences. And the more people speak up, the less the stigma, and the more clearly we see the magnitude of the problem.

I don't think it's an overreaction to say that burnout is not an occasional unfortunate event, but rather it is a serious and frequently-occurring occupational hazard amongst knowledge workers in the tech industry and elsewhere. Just as athletes have to take especial care of their bodies, and both accept and try to reduce the occupational risk of injury, so we as knowledge workers must take equally good care of our minds. We need to ensure that our minds are trained and fit for the work we give them. We need to design and adhere to sustainable working habits and procedures. And we need to be ever vigilant for signs of mental stress or exhaustion, because they are very very common.

Academic psychology has a lot to say about how we may create optimal working conditions for our mighty but fragile brains. This article explores how cognitive hard work and effortful self-control interact and combine to deplete our mental energy and leave us prone to impaired intellectual performance and unwise decision-making.

Self-control and deliberate thought are both types of mental work, and draw from the same limited budget of mental energy. In his book, "Thinking, Fast and Slow" (Penguin, 2011), Nobel laureate psychologist Daniel Kahneman gives an example illustrating this, in which self-control refers to exerting the effort to walk faster than one's natural pace, whilst trying to do some serious creative thinking. He found that walking at a leisurely pace, which can be maintained automatically with no self-control effort, aids thought; deliberately maintaining a faster pace, however, impacts on the ability to make serious cognitive effort.

This effect isn't only apparent when we overload with simultaneous mental exercises; it also holds true with successive tasks. Kahneman describes experiments conducted by Roy Baumeister, which showed that efforts of will are tiring. Following one mentally challenging task, of cognition or self-control, we are less able - or willing - to undertake and succeed in another challenge of intellect or self-control. This effect is known as "ego depletion". Ego-depleted people more readily acquiesce to the urge to give up on a demanding task. And it holds true for a range of combinations of tasks: tests of emotional self-control followed by tests of physical stamina, resisting temptation followed by hard cognitive exercises, among others. Moreover, the list of activities and situations shown to deplete self-control is even wider, including deliberately trying not to think of certain things, making difficult choices, being patient with the bad behaviour of others and overriding our prejudices. All these draw on the same budget of mental energy, and that budget is finite. However, it seems that it's motivation that gets reduced, more than actual mental energy. Unlike with cognitive load - which really does reach hard limits - with sufficient incentive, people can override the effects of ego-depletion.

Out of the lab and back in the real world of burnout, this is most readily applied to the self-control required to make wise choices and not give in to the urge to do dumb things. Experiments have shown that when the brain is heavily taxed by cognitive effort, it becomes much harder to resist temptation. In the work environment, maybe that temptation is to slack off, check email, play games, read something irrelevant (and less effortful), eat unhealthy foods, or turn to potentially addictive temptations like drink, drugs, smoking, porn and so on. And when a person is suffering from stress, anxiety or depression - all the uncomfortable mental states that go with burnout - they will often try to "self-medicate" with the same kinds of things, avoiding painful feelings through control strategies that typically involve distraction or numbing. These almost certainly make them feel worse, and they play straight into that depleted ability to resist temptation, creating a damaging feedback loop.

Interestingly, when we speak of "mental energy", the word energy isn't just a metaphor. It's been shown that in cognitive effort (and efforts of will), the brain consumes a substantial amount of glucose. Kahneman uses the analogy of "a runner who draws down glucose stored in her muscles during a sprint", and Baumeister's work has further confirmed that ego-depletion can be cured in the short term by ingesting glucose.

Unsurprisingly, in addition to cognitive effort and hunger, our mental energy is also depleted by fatigue, consumption of alcohol and a short-term memory full of anxious thoughts. There's a paradox here: some of the guidance around avoiding overload and burnout - eat well, sleep enough, don't work too long, don't drink too much, don't worry - is so damn obvious and well known that it's actually very easy to ignore. (There's another interesting issue going on here - akrasia: knowing the right thing to do and still not doing it - but I plan to write about that separately, so I won't go into it in this article.) And yet it's fundamental to our personal and professional wellbeing - our vital intellectual resources are finite and need to be stewarded wisely and replenished often. This is why I'm finding it so interesting and helpful to understand the science behind these things. It makes them feel more real, more serious, and less like something our parents told us years ago and we promptly ignored.

Incrementing Macros in Emacs

Emacs macros are amazing. I was editing a document today which started off as a list of sentences that I wanted to convert into numbered headlines, each followed by some boilerplate text. Let's suppose the list looked like this:

this is the first thing
this is the second thing
this is the third thing

What I wanted to do was convert this to:

1) this is the first thing

- why does it matter?
- who cares anyway?
- will anyone ever read this?

2) this is the second thing

- why does it matter?
- who cares anyway?
- will anyone ever read this?

3) this is the third thing

- why does it matter?
- who cares anyway?
- will anyone ever read this?

But there's a catch. I didn't want to start at 1... I wanted to start at 3. And the list was pretty long. Of course, I could have just done it with pointing and clicking and copying and pasting. But this is the sort of thing that emacs macros are perfect for.

An introduction to Emacs macros

Emacs macros are pretty simple. Think of them as a tape recorder: you record something, and then you can play it back as many times as you like. Let's do a really simple one. To start a macro, we use C-x (, then we do some stuff (which is recorded), and then we use C-x ) to signal that we've finished recording our macro. To use the macro, we simply press C-x e, which runs the function call-last-kbd-macro.

Let's begin with the text:

Stephen is awesome
Patrick is awesome
Lindsay is awesome

We'd like to insert the adjective 'really', and append an exclamation mark. We can record a macro for this:

C-x (
really <space> C-e !
C-x )

So we press C-x (, then type really, followed by a space, followed by C-e (end of line), followed by !, then C-x ). To use the macro, we move the point to just before 'awesome', and then press C-x e. We repeat for each line. The result is:

Stephen is really awesome!
Patrick is really awesome!
Lindsay is really awesome!

Applying a macro to a region

Running a macro a bunch of times is a bit of a bore. You can run it a set number of times by prefacing C-x e with C-u and a number. So C-u 3 C-x e will execute your macro three times. In my case I would have needed to count the number of lines, and then ensure my macro advanced to the next line, and handled the last case, and so on. Yuck. Thankfully Emacs allows a macro to be applied to all the lines in a region. Simply mark the region in the usual manner (C-SPC, then move point to the end of the region), and then run C-x C-k r (apply-macro-to-region-lines). Naturally this forces us to rethink how to create our macro. In the case of the really awesome line, one approach would be to use incremental search to find the place to start inserting really. So now our macro becomes:

C-x (
C-s is a <RET> really <space> C-e !
C-x )

Another would be to go to the end of the line with C-e, and then back a word with M-b. On reflection I prefer the second, but the use of incremental search is a handy trick when building macros, so I'll leave it here as an option.

At this stage you're probably realising that when you record your macro, you end up having to change some text. I tend to make a copy of the first line, so I can use that for recording the macro, and then remove it at the end.

Using counters

The final step in my macro was to work out how to increment numbers. There are a couple of ways to do this. Emacs macros support the notion of registers, on which you can perform functions. This proved to be tricky, because I couldn't work out how to handle the case where the counter started to take up two characters rather than one. Alternatively, we can always drop into Emacs Lisp to produce a counting sequence, but I wasn't sure how to apply that expression to a region. Eventually I settled on using the built-in counters in macros.

When a keyboard macro is recorded, Emacs maintains a counter which increments every time the macro is used. The counter starts at zero when you define the macro, so it will be at 1 on the first use. We can insert the value of the counter within the macro with C-x C-k C-i (kmacro-insert-counter). Let's show this in action with our awesome macro:

C-x (
C-s is a <RET> really <space>
C-e ! (counter is: C-x C-k C-i)
C-x )

Applying this to our list gives:

Stephen is really awesome! (counter is: 1)
Patrick is really awesome! (counter is: 2)
Lindsay is really awesome! (counter is: 3)

So we're nearly there. The only snag is that I wanted to start at 3. That's easily accommodated with the kmacro-set-counter function (bound to C-x C-k C-c). Prefacing this with M-2 sets the counter to 2 for the first go, i.e. the recording session; the next time the macro is run, the counter will be at 3. So my macro looked like this:

M-2 C-x C-k C-c ;; NB we need to set this outside of the macro, or it'll reset each time
C-x (
C-x C-k C-i ) <space>
- why does it matter? <enter>
- who cares anyway? <enter>
- will anyone ever read this? <enter>
C-x )

Let's try the same idea with our awesome text, numbering the lines and showing the counter. We want to end up with:

1) Stephen is really awesome! (counter is: 1)
2) Patrick is really awesome! (counter is: 2)
3) Lindsay is really awesome! (counter is: 3)

We don't need to worry about setting the counter - we're happy with zero. So:

C-x (
C-a C-x C-k C-i ) <space> 
C-e M-b really <space>
C-e (counter is: C-x C-k C-i)
C-x )

At this stage you've probably realised something's awry... as you recorded the macro, you probably saw something like:

0) Stephen is really awesome! (counter is: 1)

And if you persevered, you might have seen:

2) Stephen is really awesome! (counter is: 3)
4) Patrick is really awesome! (counter is: 5)
6) Lindsay is really awesome! (counter is: 7)

How do we fix this? The answer is to prefix the second use of the counter with C-u, which inserts the previous counter value without incrementing it again. So the final macro (for our awesome text) is:

C-x (
C-a C-x C-k C-i ) <space>
C-e M-b really <space>
C-e (counter is: C-u C-x C-k C-i)
C-x )

And our end result is:

1) Stephen is really awesome! (counter is: 1)
2) Patrick is really awesome! (counter is: 2)
3) Lindsay is really awesome! (counter is: 3)


Emacs macros are amazingly powerful. You can get benefit from them right away, and with a bit of creative thought you can accomplish some pretty remarkable things. Hopefully this has captured your imagination, and set you thinking about how you can make use of them.

Command-line cookbook dependency solving with knife exec

Note: This article was originally published in 2011. In response to demand, I've updated it for 2014! Enjoy! SNS

Imagine you have a fairly complicated infrastructure with a large number of nodes and roles. Suppose you have a requirement to take one of the nodes and rebuild it in an entirely new network, perhaps even for a completely different organization. This should be easy, right? We have our infrastructure in the form of code. However, our current infrastructure has hundreds of uploaded cookbooks - how do we know the minimum ones to download and move over? We need to find out from a node exactly what cookbooks are needed for that node to be built.

The obvious place to start is with the node itself:

$ knife node show controller
Node Name:   controller
Environment: _default
FQDN:        controller
Run List:    role[base], recipe[apt::cacher], role[pxe_server]
Roles:       pxe_server, base
Recipes:     apt::cacher, pxe_dust::server, dhcp, dhcp::config
Platform:    ubuntu 10.04

OK, this tells us we need the apt, pxe_dust and dhcp cookbooks. But what about them - do they have any dependencies? How could we find out? Well, dependencies are specified in two places - in the cookbook metadata, and in the individual recipes. Here's a primitive way to illustrate this:

bash-3.2$ for c in apt pxe_dust dhcp
> do
> grep -iER 'include_recipe|^depends' $c/* | cut -d '"' -f 2 | sort | uniq
> done

As I said - primitive. However, the problem doesn't end there: to be sure, we now need to repeat this for each dependency, recursively. And of course it would be nice to present the results more attractively. Thinking about it, it would be rather useful to know what cookbook versions are in use too. This is definitely not a job for a shell one-liner - is there a better way?
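For what it's worth, the recursion itself is simple enough to sketch in a few lines of Ruby. This is a hypothetical sketch, not part of the original tooling: it assumes the cookbooks sit side-by-side in the current directory and declare their dependencies as depends lines in metadata.rb, and it still ignores versions and include_recipe - which is exactly why we'd rather have the server do this for us:

```ruby
# A naive recursive walk over local cookbook directories, collecting
# 'depends' declarations from each cookbook's metadata.rb.
def deps_of(cookbook)
  metadata = File.join(cookbook, "metadata.rb")
  return [] unless File.exist?(metadata)
  File.readlines(metadata).grep(/^depends/).map do |line|
    line[/depends\s+['"]([^'"]+)['"]/, 1]
  end.compact
end

def solve(cookbooks, seen = [])
  cookbooks.each do |cb|
    next if seen.include?(cb)
    seen << cb                 # record this cookbook once
    solve(deps_of(cb), seen)   # then recurse into its dependencies
  end
  seen
end

puts solve(%w[apt pxe_dust dhcp])
```

Missing directories simply contribute no dependencies, so the walk terminates even when a dependency isn't checked out locally.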

As it happens, there is. Think about it - the Chef server already needs to solve these dependencies to know what cookbooks to push to API clients. Can we access this logic? Of course we can - clients carry out all their interactions with the Chef server via the API. This means we can let the server solve the dependencies and query it via the API ourselves.

Chef provides two powerful ways to access the API without having to write a RESTful client. The first, Shef, is an interactive REPL based on IRB, which when launched gives access to the Chef server. This isn't trivial to use. The second, much simpler way is the knife exec subcommand. This allows you to write Ruby scripts or simple one-liners that are executed in the context of a fully configured Chef API Client using the knife configuration file.

Now, since I wrote this article, back in summer 2011, the API has changed, which means that my original method no longer works. Additionally, we are now served by at least two local dependency solvers, in the form of Berkshelf (whose dependency solver, 'solve' is now available as an individual Gem), and Librarian-chef. In this updated version, I'll show how to use the new Chef server API to perform the same function. Berkshelf and Librarian solve a slightly different problem, in that in this instance we're trying to solve dependencies for a node, so for the purposes of this article I'll consider them out of scope.

For historical purposes, here's the original solution:

knife exec -E '(api.get "nodes/controller/cookbooks").each { |cb| pp cb[0] => cb[1].version }'

The /nodes/NODE_NAME/cookbooks endpoint returns the cookbook attributes, definitions, libraries and recipes that are required for this node. The response is a hash of cookbook name and Chef::CookbookVersion object. We simply iterate over each one, and pretty print the cookbook name and the version.

The current way to solve dependencies using the Chef server API resides under the environments endpoint. This makes sense if you think of environments as a way to define and constrain version numbers for a given set of nodes. Constructing the API call and handling the results is slightly more than can comfortably fit in a one-liner, which gives us the opportunity to demonstrate the use of knife exec with a script on the filesystem.

First let's create the script:

USAGE = "knife exec script.rb NODE_NAME"

def usage_and_exit
  $stderr.puts USAGE
  exit 1
end

node_name = ARGV[2]

usage_and_exit unless node_name

node = api.get("nodes/#{node_name}")
run_list_expansion = node.expand!("server")

cookbook_solution = api.post("environments/#{node.chef_environment}/cookbook_versions",
                             :run_list => run_list_expansion.recipes)

cookbook_solution.each do |name, cb|
  puts name + " => " + cb.version
end


The way knife exec scripts work is to pass the arguments following knife to Ruby as the ARGV special variable, which is an array of each space-separated argument. This allows us to produce a slightly more general solution, to which we can pass the name of the node for which we want to solve. The usage handling is obvious - we print the usage to stderr and exit if the command is called without a node name. The meat of the script is the API call. First we get the node object (from ARGV[2], i.e. the node name we passed to the script) from the Chef server. Next we expand the run list - that is, we check for and expand any run lists contained in roles. Then we call the API to solve cookbook versions for the node in the environment in which it currently resides, passing in the recipes from the expanded run list. Finally we iterate over the cookbooks we get back, and print each name and version. Note that this script could easily be modified to solve for a different environment, which would be handy if we wanted to confirm what versions we'd get were we to move the node to a different environment. Let's give it a whirl:

$ knife exec src/knife-cookbook-solve/solve.rb asl-dev-1
chef_handler => 1.1.4
minitest-handler => 0.1.3
base => 0.0.2
hosts => 0.0.1
yum => 2.3.0
tmux => 1.1.1
ssh => 0.0.6
fail2ban => 1.2.2
users => 2.0.6
security => 0.1.0
sudo => 2.0.4
atalanta-users => 0.0.2
community_users => 1.5.1
sudoersd => 0.0.2
build-essential => 1.4.2
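One detail worth making concrete is why the node name lives at ARGV[2]: knife passes its own arguments straight through to the script, so inside knife exec the argument vector looks like ["exec", "script.rb", "NODE_NAME"]. A standalone sketch, in plain Ruby with no Chef server required (the array below is a stand-in for the real ARGV):

```ruby
# Simulated argument vector, as a knife exec script would see it when
# invoked as: knife exec solve.rb controller
argv = ["exec", "solve.rb", "controller"]

node_name = argv[2]   # indices 0 and 1 are knife's own arguments
abort("usage: knife exec solve.rb NODE_NAME") unless node_name

puts "solving for node: #{node_name}"
```

This is why the usage check in the script above guards on ARGV[2] rather than ARGV[0].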

To conclude as the original article did... Nifty! :)

Acquiring a Modern Ruby (Part One)

Last month marked the 21st anniversary of the programming language Ruby. I use Ruby pretty much all the time. If I need to write a command line tool, a web app, pretty much anything, I'll probably start with Ruby. This means I've always got Ruby to hand on whatever machine I'm using. However, in practice, getting a recent Ruby on any platform isn't actually as simple as it sounds. Ruby is a fast-moving language, with frequent releases. Mainstream distributions often lag behind the current release. Let's have a quick look at the history of the releases of Ruby over the last year:

require 'nokogiri'
require 'open-uri'

news = Nokogiri::HTML(open("https://www.ruby-lang.org/en/news/2013/"))
news.xpath("//div[@class='post']/following-sibling::*").each do |item|
  match = item.text.match /Ruby (\S+) is released.*Posted by.*on ((?:\d{1,2} [a-zA-Z]{3} \d{4}))/m
  if match
    puts "Ruby #{match[1]} was announced on #{match[2]}"
  end
end

Ruby 2.1.0-rc1 was announced on 20 Dec 2013
Ruby 2.1.0-preview2 was announced on 22 Nov 2013
Ruby 1.9.3-p484 was announced on 22 Nov 2013
Ruby 2.0.0-p353 was announced on 22 Nov 2013
Ruby 2.1.0-preview1 was announced on 23 Sep 2013
Ruby 2.0.0-p247 was announced on 27 Jun 2013
Ruby 1.9.3-p448 was announced on 27 Jun 2013
Ruby 1.8.7-p374 was announced on 27 Jun 2013
Ruby 1.9.3-p429 was announced on 14 May 2013
Ruby 2.0.0-p195 was announced on 14 May 2013
Ruby 2.0.0-p0 was announced on 24 Feb 2013
Ruby 1.9.3-p392 was announced on 22 Feb 2013
Ruby 2.0.0-rc2 was announced on 8 Feb 2013
Ruby 1.9.3-p385 was announced on 6 Feb 2013
Ruby 1.9.3-p374 was announced on 17 Jan 2013

So 15 releases in 2013, including a major version (2.0.0) in February, and a release candidate of 2.1 shortly before Christmas. For a Ruby developer today, the current releases are:

  • Ruby 2.1.1
  • Ruby 2.0.0-p451
  • Ruby 1.9.3-p545

Let's compare to what's available in Debian stable:

$ apt-cache show ruby
Package: ruby
Source: ruby-defaults
Version: 1:1.9.3
Installed-Size: 31
Maintainer: akira yamada <akira@debian.org>
Architecture: all
Replaces: irb, rdoc
Provides: irb, rdoc
Depends: ruby1.9.1 (>=

So the current version in Debian is older than January 2013? What about Ubuntu? The latest, 'saucy', offers us 2.0.0. That's fractionally better, but really, it's pretty old. What about my trusty Fedora? At the time of writing, that gives me 2.0.0p353. Still not exactly current, and not a sniff of a 2.1 package. Arch Linux offers a Ruby 2.1, but even that's not right up to date:

$ ruby --version
ruby 2.1.0p0 (2013-12-25 revision 44422) [x86_64-linux]
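If you want to check programmatically how far behind a packaged Ruby is, RubyGems' Gem::Version class does sane version comparison. A quick illustration, using the distro and current versions mentioned above (with the patchlevel rewritten in dotted form, since "p353" isn't a dotted segment):

```ruby
# Compare a distro-packaged Ruby against the current release using
# Gem::Version, which ships with RubyGems (bundled with Ruby since 1.9).
require 'rubygems'

distro  = Gem::Version.new('2.0.0.353')  # Fedora's 2.0.0p353, dotted for comparison
current = Gem::Version.new('2.1.1')

puts "your distro Ruby is behind the current release" if current > distro
```

Gem::Version compares segment by segment, so it gets cases like 2.0.0.353 vs 2.1.1 right where naive string comparison would not.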

Now, I will grant that there might be third-party repositories, or backports or other providers of packages which might be more up-to-date, but my experience of this hasn't always been that positive. On the whole, it looks like we need to look somewhere else. On a Unix-derived system, this typically means building from source.

Building From Source

Building Ruby from source isn't difficult, provided you have some idea what you're doing and can read documentation. Equally, though, it's not entirely trivial. It requires you to know how to provide libraries for things like ffi, ncurses, readline, openssl, yaml and so on. And if you're using different systems, those libraries have different names, and you might be working with different compilers and versions of make. In recognition of this, several tools have emerged to hide the complexity and make it easier to get a modern Ruby on your system. The most popular are RVM, ruby-build, and ruby-install. Let's review each of them briefly. I'm going to use a CentOS 6 machine as my example Linux machine, but each of these tools will work on all popular Linux distributions and Mac OS X. I've had success on FreeBSD and Solaris too, but I've not tested this recently, so YMMV.


RVM

RVM, the Ruby Version Manager, is the father of the tools designed to make managing modern Ruby versions easier. It's what one might call a monolithic tool - it does a huge range of things all in one place. We'll discuss it several times in this series, as it provides functionality beyond that of simply installing a modern Ruby, but for the purposes of this series, we're only going to use it to install Ruby.

RVM is a shell script. The most popular way to install it is via the fashionable 'curl pipe through bash' approach:

$ curl -sSL https://get.rvm.io | bash
Downloading https://github.com/wayneeseguin/rvm/archive/master.tar.gz
Creating group 'rvm'

Installing RVM to /usr/local/rvm/
Installation of RVM in /usr/local/rvm/ is almost complete:

  * First you need to add all users that will be using rvm to 'rvm' group,
    and logout - login again, anyone using rvm will be operating with `umask u=rwx,g=rwx,o=rx`.

  * To start using RVM you need to run `source /etc/profile.d/rvm.sh`
    in all your open shell windows, in rare cases you need to reopen all shell windows.

# Administrator,
#   Thank you for using RVM!
#   We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
# ~Wayne, Michal & team.

In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

Hmm, ok, let's try that.

# gpasswd -a sns rvm
Adding user sns to group rvm

I also added:

source /etc/profile.d/rvm.sh

to the end of ~/.bash_profile

$ rvm --version

rvm 1.25.19 (master) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]

First let's see what Rubies it knows about:

$ rvm list known
# MRI Rubies

# GoRuby

# Topaz

# TheCodeShop - MRI experimental patches

# jamesgolick - All around gangster

# Minimalistic ruby implementation - ISO 30170:2012

# JRuby

# Rubinius

# Ruby Enterprise Edition

# Kiji

# MagLev

# Mac OS X Snow Leopard Or Newer

# Opal

# IronRuby

Wow - that's a lot of Rubies. Let's just constrain ourselves to MRI, and install the latest stable 2.1:

$ rvm install ruby
Searching for binary rubies, this might take some time.
No binary rubies available for: centos/6/x86_64/ruby-2.1.1.
Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
Checking requirements for centos.
Installing requirements for centos.
Updating system.
Installing required packages: patch, libyaml-devel, libffi-devel, glibc-headers, gcc-c++, glibc-devel, patch, readline-devel, zlib-devel, openssl-devel, autoconf, automake, libtool, bison
sns password required for 'yum install -y patch libyaml-devel libffi-devel glibc-headers gcc-c++ glibc-devel patch readline-devel zlib-devel openssl-devel autoconf automake libtool bison':

The first thing to notice, and this is pretty cool, is that RVM will try to locate a precompiled binary. Unfortunately there isn't one for our platform, so it's going to build from source instead. It will install the various required packages, and then crack on.

This assumes that I've got sudo set up for my user. As it happens, I don't, but we can fix that. On a Mac or an Ubuntu machine this would be in place already. Please hold, caller... ...OK, done. My sns user now has sudo privileges, and we can continue:

$ rvm install ruby
Searching for binary rubies, this might take some time.
No binary rubies available for: centos/6/x86_64/ruby-2.1.1.
Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
Checking requirements for centos.
Installing requirements for centos.
Updating system.
Installing required packages: patch, libyaml-devel, libffi-devel, glibc-headers, gcc-c++, glibc-devel, patch, readline-devel, zlib-devel, openssl-devel, autoconf, automake, libtool, bison
sns password required for 'yum install -y patch libyaml-devel libffi-devel glibc-headers gcc-c++ glibc-devel patch readline-devel zlib-devel openssl-devel autoconf automake libtool bison':
Requirements installation successful.
Installing Ruby from source to: /usr/local/rvm/rubies/ruby-2.1.1, this may take a while depending on your cpu(s)...
ruby-2.1.1 - #downloading ruby-2.1.1, this may take a while depending on your connection...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.4M  100 11.4M    0     0  61.0M      0 --:--:-- --:--:-- --:--:-- 64.9M
ruby-2.1.1 - #extracting ruby-2.1.1 to /usr/local/rvm/src/ruby-2.1.1.
ruby-2.1.1 - #configuring....................................................
ruby-2.1.1 - #post-configuration.
ruby-2.1.1 - #compiling...................................................................................
ruby-2.1.1 - #installing................................
ruby-2.1.1 - #making binaries executable.
Rubygems 2.2.2 already installed, skipping installation, use --force to reinstall.
ruby-2.1.1 - #gemset created /usr/local/rvm/gems/ruby-2.1.1@global
ruby-2.1.1 - #importing gemset /usr/local/rvm/gemsets/global.gems.....
ruby-2.1.1 - #generating global wrappers.
ruby-2.1.1 - #gemset created /usr/local/rvm/gems/ruby-2.1.1
ruby-2.1.1 - #importing gemsetfile /usr/local/rvm/gemsets/default.gems evaluated to empty gem list
ruby-2.1.1 - #generating default wrappers.
ruby-2.1.1 - #adjusting #shebangs for (gem irb erb ri rdoc testrb rake).
Install of ruby-2.1.1 - #complete
Ruby was built without documentation, to build it run: rvm docs generate-ri

Great - let's see what we have:

$ ruby --version
ruby 2.1.1p76 (2014-02-24 revision 45161) [x86_64-linux]

In order to verify our installation, we're going to test OpenSSL and Nokogiri. If these two work, we can be pretty confident that we have a functional Ruby:

$ gem install nokogiri --no-ri --no-rdoc
Fetching: mini_portile-0.5.2.gem (100%)
Successfully installed mini_portile-0.5.2
Fetching: nokogiri-1.6.1.gem (100%)
Building native extensions.  This could take a while...
Successfully installed nokogiri-1.6.1
2 gems installed

Let's test this now. Here's a simple test to prove that both SSL and Nokogiri are working:

require 'nokogiri'
require 'open-uri'
require 'openssl'

# Prove OpenSSL works: generate a throwaway RSA key and print it as PEM
puts OpenSSL::PKey::RSA.new(512).to_pem

# Prove Nokogiri (and SSL-enabled open-uri) work: fetch and parse an HTTPS page
https_url = 'https://google.com'
puts Nokogiri::HTML(open(https_url)).css('input')

If this works, we should see a PEM-encoded key printed to the screen, followed by the HTML of the various input elements on the Google homepage:

$ ruby ruby_test.rb
<input name="ie" value="ISO-8859-1" type="hidden">
<input value="en-GB" name="hl" type="hidden">
<input name="source" type="hidden" value="hp">
<input autocomplete="off" class="lst" value="" title="Google Search" maxlength="2048" name="q" size="57" style="color:#000;margin:0;padding:5px 8px 0 6px;vertical-align:top">
<input class="lsb" value="Google Search" name="btnG" type="submit">
<input class="lsb" value="I'm Feeling Lucky" name="btnI" type="submit" onclick="if(this.form.q.value)this.checked=1; else top.location='/doodles/'">
<input type="hidden" id="gbv" name="gbv" value="1">

So, RVM was fairly painless. We had to fiddle about with users and sudo, and add a line to our shell profile, but once that was done, we were easily able to install Ruby.

Ruby Build

Ruby-build is a dedicated tool, also written in shell, designed specifically to install Ruby, and provide fine-grained control over the configuration, build and installation. In order to get it we need to install Git. As with RVM, I'll set up a user with sudo access in order to permit this. Once we have Git installed, we can clone the project and install it.

$ git clone https://github.com/sstephenson/ruby-build.git
Initialized empty Git repository in /home/sns/ruby-build/.git/
remote: Reusing existing pack: 3077, done.
remote: Total 3077 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3077/3077), 510.82 KiB | 386 KiB/s, done.
Resolving deltas: 100% (1404/1404), done.
$ cd ruby-build
$ sudo ./install.sh

Like RVM, we can find out what Rubies ruby-build knows about:

$ ruby-build --definitions

Let's opt for Ruby 2.1 again. Now, unlike RVM, ruby-build makes no attempt to install development tools suitable to allow Ruby to be built from source. It does provide a wiki with information on what is needed, but if you expect ruby-build to magically install your toolchain, you're going to be disappointed. Additionally, ruby-build requires you to state exactly where you want Ruby to be installed. It doesn't have a default, so you need to work out where it should go. Realistically this boils down to two choices: do you want to install it somewhere global, where everyone can see it, or just locally for your own use? The former adds some complications around permissions and so forth, so in this instance we'll install to a local directory.

First, let's check the wiki for the build dependencies:

$ sudo yum install gcc-c++ glibc-headers openssl-devel readline libyaml-devel readline-devel zlib zlib-devel

Now that's all installed, we can go ahead with building Ruby:

$ ruby-build 2.1.1 ~/local/mri-2.1.1
Downloading ruby-2.1.1.tar.gz...
-> http://dqw8nmjcqpjn7.cloudfront.net/e57fdbb8ed56e70c43f39c79da1654b2
Installing ruby-2.1.1...
Installed ruby-2.1.1 to /home/sns/local/mri-2.1.1

Not a very verbose output... in fact one might be forgiven for wondering at times if anything's happening at all! But now that we have a Ruby, let's add it to our shell path and run our Nokogiri/OpenSSL test.

$ export PATH=$PATH:/home/sns/local/mri-2.1.1/bin/
$ ruby --version
ruby 2.1.1p76 (2014-02-24 revision 45161) [x86_64-linux]

$ gem install nokogiri --no-rdoc --no-ri
$ vi ruby_test.rb
$ ruby ruby_test.rb
<input name="ie" value="ISO-8859-1" type="hidden">
<input value="en-GB" name="hl" type="hidden">
<input name="source" type="hidden" value="hp">
<input autocomplete="off" class="lst" value="" title="Google Search" maxlength="2048" name="q" size="57" style="color:#000;margin:0;padding:5px 8px 0 6px;vertical-align:top">
<input class="lsb" value="Google Search" name="btnG" type="submit">
<input class="lsb" value="I'm Feeling Lucky" name="btnI" type="submit" onclick="if(this.form.q.value)this.checked=1; else top.location='/doodles/'">
<input type="hidden" id="gbv" name="gbv" value="1">

So, ruby-build did what it said on the tin. It didn't need us to create a group, or source any files in our profile. All we needed to do was update our shell path, which, of course, we'd add to our shell profile to make permanent. We did have to install the build dependencies manually, but this is documented on the wiki. It's a lighter-weight solution than RVM, optimised for local users.

Ruby Install

The third of our 'install from source' options is 'Ruby Install'. Another shell utility, it is more aligned with 'Ruby Build' than RVM, as it only really has one purpose in life - to install Ruby. It doesn't have all the extra features of RVM, but it does install build dependencies for most platforms.

To install, we obtain a tarball of the latest release:

$ wget -O ruby-install-0.4.0.tar.gz https://github.com/postmodern/ruby-install/archive/v0.4.0.tar.gz
$ tar xzvf ruby-install-0.4.0.tar.gz
$ cd ruby-install-0.4.0
$ sudo make install
[sudo] password for sns:
for dir in `find etc lib bin sbin share -type d 2>/dev/null`; do mkdir -p /usr/local/$dir; done
for file in `find etc lib bin sbin share -type f 2>/dev/null`; do cp $file /usr/local/$file; done
mkdir -p /usr/local/share/doc/ruby-install-0.4.0
cp -r *.md *.txt /usr/local/share/doc/ruby-install-0.4.0/

This will make the ruby-install tool available. Again, we can see which Rubies are available:

$ ruby-install
Known ruby versions:
  ruby:
    1:      1.9.3-p484
    1.9:    1.9.3-p484
    1.9.1:  1.9.1-p431
    1.9.2:  1.9.2-p320
    1.9.3:  1.9.3-p484
    2.0:    2.0.0-p353
    2.0.0:  2.0.0-p353
    2:      2.1.0
    2.1:    2.1.0
    stable: 2.1.0
  jruby:
    1.7:    1.7.10
    stable: 1.7.10
  rubinius:
    2.1:    2.1.1
    2.2:    2.2.5
    stable: 2.2.5
  maglev:
    1.0:    1.0.0
    1.1:    1.1RC1
    stable: 1.0.0
  mruby:
    1.0:    1.0.0
    stable: 1.0.0

Because we obtained the stable release of the ruby-install tool, we don't have the very latest version available. We could simply do as we did with ruby-build, and get the latest version straight from Git, but the process would be the same. For now, we're going to opt for the latest stable release the tool offers us, which is 2.1.0:

$ ruby-install ruby
>>> Installing ruby 2.1.0 into /home/sns/.rubies/ruby-2.1.0 ...
>>> Installing dependencies for ruby 2.1.0 ...
[sudo] password for sns:
... grisly details ...

The install is substantially more verbose than either ruby-build or RVM, showing the grisly details of the compiling and linking. Once installed, let's check the version and run our test:

$ export PATH=$PATH:/home/sns/.rubies/ruby-2.1.0/bin/
$ ruby --version
ruby 2.1.0p0 (2013-12-25 revision 44422) [x86_64-linux]

$ gem install nokogiri --no-ri --no-rdoc
Building native extensions.  This could take a while...
Successfully installed nokogiri-1.6.1
1 gem installed
$ vi ruby_test.rb
$ ruby ruby_test.rb
<input name="ie" value="ISO-8859-1" type="hidden">
<input value="en-GB" name="hl" type="hidden">
<input name="source" type="hidden" value="hp">
<input autocomplete="off" class="lst" value="" title="Google Search" maxlength="2048" name="q" size="57" style="color:#000;margin:0;padding:5px 8px 0 6px;vertical-align:top">
<input class="lsb" value="Google Search" name="btnG" type="submit">
<input class="lsb" value="I'm Feeling Lucky" name="btnI" type="submit" onclick="if(this.form.q.value)this.checked=1; else top.location='/doodles/'">
<input type="hidden" id="gbv" name="gbv" value="1">

So, ruby-install was pretty easy to use. We didn't need to mess about with groups or sourcing extra files in our shell profile. There was a default, local path for the Rubies, the dependencies were identified and installed for us, and everything worked. The only downside was that, on account of using the stable version of the tool, we didn't get access to the very latest Ruby. That's easily fixed by simply getting the tool straight from Git - which we'll look at when we come to discuss managing multiple versions of Ruby.


So at this stage in our journey, we've explored three approaches to simplifying the process of installing a Ruby from source - RVM, ruby-build and ruby-install. Given that the stated objective was to install a recent version of Ruby, from source, as simply as possible, in my view the order of preference goes:

  1. Ruby Install: This does the simplest thing that could possibly work. It has sane defaults, resolves and installs dependencies, and requires nothing more than a path setting.

  2. RVM: This requires only slightly more setup than Ruby Install, and certainly meets the requirements. It's much more heavyweight, as it does many more things than just install Ruby, and these capabilities will come into consideration later in the series.

  3. Ruby Build: This brings up the rear. It works, but it doesn't have sane defaults, and needs extra manual steps to install build dependencies.

As we continue the series, we'll look into ways to manage multiple versions of Ruby on a workstation, strategies for getting the relevant versions of Ruby onto servers, running Ruby under Microsoft Windows, and automating the whole process. Next time we'll talk about Ruby on Windows. Until then, bye for now.

Girls at Devopsdays (see what I did there?)

Ben Hughes (@benjammingh) of Etsy, visiting from SF to speak on security, tells me he is horrified by the pitifully small number of women at the conference. He says he thought New York was bad 'til he saw London. I saw, maybe, six female persons here, among several hundred male ones. Personally I don't notice this much - and if I don't experience any direct discrimination or signs of overt sexism I tend to see people as people, regardless of gender. I don't see me as part of a minority, so maybe I forget to see other women as such. Talking with Ben has made me think this isn't too good.

As someone who's done a fair bit of hiring, and never had a single female applicant, I shrug and say, "What can I do? They aren't applying." But after talking with Ben I'm thinking that's a lame cop-out too. Something needs to be done, and just because it isn't my fault as some evil misogynist hirer doesn't mean I shouldn't be trying to make improvements. I think it's a slow burn. It probably requires a culture shift, with girls moving nearer to the model of the classic teenage bedroom programmer, and boys moving further away from it, until they all meet in the middle in some kind of more social, sharing, cooperative way of being. Hackspaces for teenagers? With quotas? It probably also requires more, and more interesting, tech in schools; programming for all. Meanwhile, I'm going to start with my small daughters (and sons), and to set them a good example I'm going to upskill myself. I'm going to get around to learning Ruby, as I've been meaning to for years.

Oh but wait... did I say there was no overt sexism here at Devopsdays? As I'm writing this there's an ignite talk going on. It's a humorous piece using Game of Thrones as a metaphor for IT company life. It starts OK: Joffrey, Ned, Robb and Tyrion get mentions as corporate characters. It rambles a bit, but it gets some laughs, including from me. But then suddenly this happens...

The presenter shows a picture of Shae, saying "...and if you do it right you get a girl." Oh, you GET a "girl" do you, Mister Presenter? That well known ownable commodity, the female human. Oh and apparently you get one who's under eighteen... that's a little, er, inappropriate, isn't it?

And then, as if that wasn't enough, the presenter shows a photo of Daenerys Targaryen, saying "And here's a gratuitous picture of Daenerys". Daenerys is gratuitous, apparently! Well, who knew! Not anyone who's read or seen Game of Thrones. Oh, and wait, it gets better - the guy now tells us, "if you do it really well you get two girls."

So, firstly, in the mind of this man, women are present in fiction and in IT only as prizes for successful men to win. Secondly, women can reasonably be described as girls. Thirdly, he assumes that his audience is exclusively male (I was sitting in the front row - he's got no excuse for that one).

So I spoke to the guy afterwards, and he didn't see a problem with it at all, happily defending the piece, though he did finish by thanking me for the feedback. I guess the proof will be in whether he gives the talk again in a modified form.

Such a lost opportunity though - GoT is packed with powerful and interesting women, good, evil and morally ambiguous. He could so easily have picked half and half. It's like he read it or watched it without even seeing the non-male protagonists.

Hmm, I just used the word guy... We say "guys" a lot round here, and I mostly assume it's meant in a gender-neutral, inclusive way. That's the way I take it, and use it. But now I'm wondering how personally contextualised our understanding of terms like "guys" really is. Atalanta management once upset a female colleague by sending out a group email beginning, "Chaps...". She felt excluded, whereas we had assumed everyone used the word as we do, in a completely neutral way. But both terms of address are originally male-specific, so it should be no surprise that some people will understand them to be either deliberately or carelessly excluding.

I don't know... This is the kind of small detail that tends to get written off as irrelevant, over-reacting, being "touchy", political correctness gone mad, and so on. But in the absence of really overt sexism - leaving aside Game of Thrones guy's show - maybe we do need to look to the small details to explain why tech still seems to be an unattractive environment for most women.

I nearly proposed an open space on this subject, but a few things held me back. Firstly, and I'm not proud of this, I thought: "What if nobody cares enough to come along? a) I'd feel a little silly, and b) I'd be so disappointed in my peers for not seeing it as important." I should've been braver and taken the risk.

More importantly though, I think it's really important that this isn't seen as a "women's issue", to be debated and dealt with by women, but rather as a people's issue, something that affects us all negatively, reflects badly on us; something we should all be responsible for fixing. And for me, the only female person in the room at the time, to propose this talk felt like ghettoising it as a women-only concern. I may be wrong. Maybe lots of people would've seized the chance to work towards some solutions. I wish Ben had been around.

So, I didn't propose a talk. But what I did do was put on my organisers' t-shirt and silly hat, so at least it was visible and clear that there are female persons involved in creating a tech conference in London. Even if there weren't enough attending.

About the Author

Helena Nelson-Smith is CEO of Atalanta Systems. She's also, by some people's definition, a "girl".

The Return of the Hipster PDA

Years ago, when I first started using GTD, the work/life management system pioneered by David Allen, my first cut at a GTD implementation was the Hipster PDA. The HPDA (called hipster more because it fits in one's hip pocket, rather than because it's beloved of node.js developers in Shoreditch) is the ultimate in lo-fi technology. It's nothing more than a bunch of 5x3 index cards, a binder clip, a biro and optionally a highlighter.

I stopped using the hipster when I bought my first MacBook. Beguiled as I was by shiny new technology, tags, search, and multi-device synchronisation, I abandoned my index cards and went digital! I've been digital more or less ever since. My main tools have been Things and Omnifocus, although, as an avid Emacs user, I also had a brief flirtation with using org-mode for GTD. Here's the thing: on reflection, I'm no better organised, no less stressed, and no more productive with these digital tools than I was with my trusty hipster. If anything, I would say I've been less productive, on the whole. This isn't a criticism of GTD itself - I'm much, much less stressed, better organised, and more productive than I was before I put GTD into practice. It's just that these days I'm not so sure that going digital was really such a smart move, for me.

Let's back up a few steps and review the core required components of a GTD system. What do we need?

  • A way to reassure our brains that each incomplete item has been captured in a trusted system somewhere where we will see it again
  • Somewhere to write lists
  • Somewhere to capture appointments (what GTD calls the 'hard landscape')
  • Erm... that's it

So hang on: we don't need tags, reminders, multi-device synchronisation, search, pretty formatting, a beautiful UI, or any of the frippery that electronic tools offer us. Note – I'm not saying that these additional features don't offer value. What I am saying is that they most definitely aren't required. That is, we can be perfectly, or even optimally, productive with nothing more than a pen, paper, and diary. What's more, I have reason to believe there may be significant advantages to the lo-fi approach.

One of the great temptations with a digital tool is to fill it up with data. Unchecked, what really needs to be no more than a one line definition of next action, or project title, soon becomes a dumping ground for ideas, actions, notes, and other gubbins. Now, one might argue that this is a great place for what GTD would call project support material. I'm unconvinced. Most projects, under GTD terms, are pretty simple. If they need more planning or project support, there's no reason not to create either a physical folder for them, or an electronic one, with supporting documents, spreadsheets, pictures etc. To my mind this is a separate thing from the core infrastructure of simple lists. It's all too easy to dump stuff in the project or action and never see it again.

This brings me to my second disadvantage of a digital tool. I haven't settled on an appropriate word for this phenomenon, so I'll just try to describe its characteristics. When using an analogue, lo-fi, manual process - for example pen and paper, or indeed (and it's noteworthy that this phenomenon applies equally to the world of kanban/scrum) a wall packed with sticky notes - we are gifted with the very tactile benefit of being forced to both see and feel the extent of our commitments, internal or otherwise. I have a friend with more than 1000 open items in Things. I think I can fit about 15 items on one side of a 5x3 index card, if I keep my handwriting very very small. So that's 30 if I use both sides. I think by the time I had 33 cards in my pocket, I'd be doing some pretty brutal pruning and someday/maybe populating. The thing is, digital backlogs just don't feel that burdensome. You don't get the pressure-valve effect, the feedback that tells you you need to stop and rethink. Besides, there's just something joyously present about a pocket full of cards - I tend to review them, shuffle them and become intimately familiar with their contents. And there's something about the weekly review process of spreading a bunch of cards out on a desk, or the floor, that's just undeniably both rewarding and tremendously fun!

OK, so I'm romanticising - of course I am. There are disadvantages to a manual approach. Disadvantages that are often the very reason that computers became so popular, and which account for the attractiveness and pervasive use of tools like Things or Omnifocus. The most pressing one is backup. One's GTD system is vitally important. It's the epicentre of one's ability to be productive. If one were to lose it, one would be, so to speak, 'screwed'. Now, with a modern, digital tool, of course one could lose one's phone, iPad, or even the bag containing one's computer, but the data is often automatically synced between devices or to a 'cloud' service. Basically, one's data is pretty safe.

Now, in days of yore, my backup procedure was a photocopier. Once a week, at review time, I'd photocopy all my cards. It was primitive, clumsy even, but it worked. Furthermore, because I knew that if I lost my HPDA I'd be screwed, I was very, very assiduous in my weekly backup, and thus my weekly review. Additionally, I made absolutely sure I never, ever, ever lost the HPDA. For all my careful backups, I never needed to use them. By contrast, I've lost my phone more than once, and have friends who have been unlucky enough to be the victims of theft. An iPhone 5 is somewhat more desirable, and therefore rather more stealable, than a wadge of index cards and a binder clip! Furthermore, there's no rule that states that a lo-fi, analogue hipster PDA user can't use modern digital technology as well. With applications such as Evernote, and hardware such as a Doxie scanner, making regular backups of index cards is now remarkably easy. So while there does seem to be at least a superficial data-security issue, I'm not convinced it's actually a deal-breaker.

Another obvious advantage of a digital GTD system, or if you prefer, disadvantage of the HPDA, is the speed of capture. With a single keypress, I can be typing and entering an open loop into my system. Obviously this is an order of magnitude faster than getting out my pile of cards, finding the right ones, finding my pen, and writing down the thought I had. The speed of capture is even greater if the trigger was something I was reading online, or an email I received. A simple highlight of relevant text, and a copy and paste is enough to capture your incomplete. But... this speed of capture is deceptive, in a couple of ways. First, the capture is so quick and easy, it disables any cognitive filtering. If I have to get out a pen and card, and write something down, that 1 to 2 second delay (and let's be realistic, we're not really talking about minutes versus milliseconds) is often enough for me to think: do I really have or want a commitment to this? Am I really going to action this? Can I just let this thought go? Should it go into a someday/maybe with no immediate action? Or can I just wait and see if this thought returns with more intention?

Relatedly, ease of capture with an electronic system can lead to some bad habits. I don't know whether it's because my formative GTD habits were built on lo-fi technology, but if I capture an incomplete using my hipster, I'm very, very likely, nearly certain, to think about and capture the next action at the same time. I tend to find that with an electronic system, the combination of the removed cognitive filter, the lack of a context switch, and the sheer ease of capture results in me creating several dozen open loops, without an associated action, and suddenly, by the end of the day, I have a list of 30 ill-defined partial thoughts on a list called inbox. This is starting to look a lot like David Allen's amorphous blob of undoability. I thought that was exactly what we were trying to escape!

Another interesting side effect of the relatively slow speed of capture is that when I have to go through the tactile, manual step of finding the two cards (open loops and context) and capturing the project and next action, I am far, far more likely to exercise the two-minute rule. There just seems to be something about the process which kicks my brain into thinking: you're going to write this down in two places, and spend a few seconds working out what the next action is, when the next action is probably not much more of an effort than you'll spend tracking this action. How about you just do it now?

So, from my perspective, as a seasoned GTD practitioner, I truly think there is a significant amount of experiential evidence, and theoretical justification, to support the return of the HPDA. Please understand – this is in no way an anti-digital tirade. My experiences will not match yours, my mind doesn't work like yours, and quite likely you have your own hacks, workflows, processes, and disciplines to help you make your digital systems work perfectly for you. As for me, I'm going to try to switch back to lo-fi, and analogue, and I'll report back in about a month and tell you how things went. Bye for now!

Using Test Doubles in ChefSpec

One of the improvements in the ChefSpec 3.0 release is the ability to extend test coverage to execute blocks. Doing so requires the infrastructure developer to stub out the shell commands run as part of the idempotence check. This is pretty simple, as ChefSpec provides a macro to stub shell commands. However, doing so where the idempotence check uses a Ruby block is slightly more involved. In this article I explain how to do both.

Quick overview of ChefSpec

ChefSpec is a powerful and flexible unit testing utility for Chef recipes. Extending the popular Ruby testing tool, RSpec, it allows the developer to make assertions about the Chef resources declared and used in recipe code. The key concept here is that of Chef resources. When we write Chef code to build infrastructure, we're using Chef's domain-specific language to declare resources - abstractions representing the components we need to build and configure. I'm not going to provide a from-the-basics tutorial here, but dive straight in with a simple example. Here's a test that asserts that our default recipe will use the package resource to install OpenJDK.

require 'chefspec'

RSpec.configure do |config|
  config.platform = 'centos'
  config.version = '6.4'
  config.color = true

  describe 'stubs-and-doubles' do

    let(:chef_run) { ChefSpec::Runner.new.converge(described_recipe) }

    it 'installs OpenJDK' do
      expect(chef_run).to install_package 'java-1.7.0-openjdk'
    end
  end
end

If we run this first, we'll see something like this:

$ rspec -fd spec/default_spec.rb 


Recipe Compile Error

could not find recipe default for cookbook stubs-and-doubles

  installs OpenJDK (FAILED - 1)


  1) stubs-and-doubles installs OpenJDK
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
       could not find recipe default for cookbook stubs-and-doubles
     # ./spec/default_spec.rb:10:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:13:in `block (3 levels) in <top (required)>'

Finished in 0.01163 seconds
1 example, 1 failure

Failed examples:

rspec ./spec/default_spec.rb:12 # stubs-and-doubles installs OpenJDK

This is reasonable - I've not even written the recipe yet. So let's add the default recipe:

package 'java-1.7.0-openjdk'

Now the test passes:

$ rspec -fd spec/default_spec.rb 

  installs OpenJDK

Finished in 0.01215 seconds
1 example, 0 failures

Test Doubles

ChefSpec works by performing a fake Chef run, and checking that the resources were called with the correct parameters. Behind the scenes, your cookbooks are loaded, but instead of performing real actions on the system, the Chef Resource class is modified such that messages are sent to ChefSpec instead. One of the key principles of Chef is that resources should be idempotent - the action should only be taken if required, and it should be safe to rerun the resource. In most cases, the Chef provider knows how to guarantee this - it knows how to check that a package was installed, or that a directory was created. However, if we use an execute resource - a resource where we're calling directly to the underlying operating system - Chef has no way to tell whether the command we called did the right thing. Unless we explicitly tell Chef how to check, it will just run the command again and again. This causes a headache for ChefSpec, because it doesn't have a built-in mechanism for faking operating system calls - so when it comes across a guard, it requires us to help it out, by stubbing the command.
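To make guards concrete before we go on, here's a sketch of a guarded execute resource. The command and guard here are hypothetical, not taken from any cookbook discussed in this article:

```ruby
# Chef recipe DSL sketch: without the not_if guard, Chef would run this
# command on every converge. The guard string is run as a shell command,
# and a zero exit status means "skip the action" - this is what makes
# the resource idempotent.
execute 'create-app-database' do
  command 'mysql -e "CREATE DATABASE app"'
  not_if  'mysql -e "SHOW DATABASES" | grep -q app'
end
```

It's exactly this guard command that ChefSpec, having no real operating system to call, asks us to stub.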

This introduces some testing vocabulary - vocabulary that is worth stating explicitly, for the avoidance of confusion. I'm a fan of the approach described in Gerard Meszaros' 2007 book xUnit Test Patterns: Refactoring Test Code, and this is the terminology used by RSpec. Let's itemise a quick glossary:

  • System Under Test (SUT) - this is the thing we're actually testing. In our case, we're testing the resources in a Chef recipe. Note we're explicitly not testing the operating system.
  • Depended-on Component (DOC) - usually our SUT has some external dependency: a database, a third-party API, or, in our case, the operating system. Each of these is an example of a DOC.
  • Test Double - when unit testing, we don't want to make real calls to the DOC. It's slow, can introduce unwanted variables into our tests, and if the DOC becomes unavailable our tests won't run, or will fail. Instead we want to interact with something that stands in for the DOC. The family of approaches that implement this abstraction is commonly referred to as Test Doubles.
  • Stubbing - when our SUT depends on some input from the DOC, we need to be able to control that input. A typical approach is to stub the method that makes the call to the DOC, typically returning some canned data.
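The glossary above can be illustrated in a few lines of plain Ruby, without any framework. This is a sketch with hypothetical names (FakeShell, PackageChecker): the checker is our SUT, the shell it calls is its DOC, and FakeShell is a hand-rolled test double whose run method is, in effect, a stub returning canned data:

```ruby
# A hand-rolled test double: an object that responds to the one method
# the SUT calls, returning canned output, and never touching the real OS.
class FakeShell
  def run(_cmd)
    'runit-2.1.1'   # canned data - the stubbed response
  end
end

# The SUT: it depends on a shell (the DOC), injected at construction time.
class PackageChecker
  def initialize(shell)
    @shell = shell
  end

  def installed?(name)
    @shell.run("rpm -q #{name}").include?(name)
  end
end

# In a test, we pass the double in place of the real shell.
checker = PackageChecker.new(FakeShell.new)
puts checker.installed?('runit')   # => true
```

RSpec's double and allow/receive machinery, which we'll see shortly, just automates this pattern.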

Let's look at a real example. The community runit cookbook, when run on a RHEL derivative, will, by default, build an RPM and install it. The code to accomplish this looks like this:

bash 'rhel_build_install' do
  user 'root'
  cwd Chef::Config[:file_cache_path]
  code <<-EOH
tar xzf runit-2.1.1.tar.gz
cd runit-2.1.1
rpm_root_dir=`rpm --eval '%{_rpmdir}'`
rpm -ivh '/root/rpmbuild/RPMS/runit-2.1.1.rpm'
  EOH
  action :run
  not_if rpm_installed
end

Observe the guard - not_if rpm_installed. Earlier in the recipe, that variable is defined as:

rpm_installed = "rpm -qa | grep -q '^runit'"

ChefSpec can't handle direct OS calls, and so if we include the runit cookbook in our recipe, we'll get an error. Let's start by writing a simple test that asserts that we include the runit recipe. I'm going to use Berkshelf as my dependency solver, which means I need to add a dependency in my cookbook metadata, and supply a Berksfile that tells Berkshelf to check the metadata for dependencies. I also need to add Berkshelf support to my test. My test now looks like this:

require 'chefspec'
require 'chefspec/berkshelf'

RSpec.configure do |config|
  config.platform = 'centos'
  config.version = '6.4'
  config.color = true

  describe 'stubs-and-doubles' do

    let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }

    it 'includes the runit recipe' do
      expect(chef_run).to include_recipe 'runit'
    end

    it 'installs OpenJDK' do
      expect(chef_run).to install_package 'java-1.7.0-openjdk'
    end
  end
end

And my recipe like this:

include_recipe 'runit'
package 'java-1.7.0-openjdk'
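For completeness, the Berkshelf wiring mentioned above amounts to two small files. Here's a sketch - the cookbook name is taken from the describe block, and depending on your Berkshelf version the Berksfile may also need a source line:

```ruby
# metadata.rb - declare the cookbook's dependency on the runit cookbook
name    'stubs-and-doubles'
depends 'runit'

# Berksfile - tell Berkshelf to resolve dependencies from the metadata
metadata
```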

Now, when I run the test, ChefSpec complains:

1) stubs-and-doubles includes the runit recipe
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
       Executing a real command is disabled. Unregistered command: `command("rpm -qa | grep -q '^runit'")`

       You can stub this command with:

         stub_command("rpm -qa | grep -q '^runit'").and_return(...)
     # ./spec/default_spec.rb:11:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:14:in `block (3 levels) in <top (required)>'

ChefSpec tells us exactly what we need to do, but let's unpack it a little, using the vocabulary from above. The SUT, our stubs-and-doubles cookbook, has a dependency on the operating system - the DOC. This means we need to be able to insert a test double of the operating system, specifically a test stub, which will provide a canned answer to our rpm command. ChefSpec makes it very easy for us by providing a macro that does exactly this. We need to run this before every example, so we can put it in a before block. The new test now looks like this:

require 'chefspec'
require 'chefspec/berkshelf'

RSpec.configure do |config|
  config.platform = 'centos'
  config.version = '6.4'
  config.color = true

  describe 'stubs-and-doubles' do

    before(:each) do
      stub_command("rpm -qa | grep -q '^runit'").and_return(true)
    end

    let(:chef_run) { ChefSpec::Runner.new.converge(described_recipe) }

    it 'includes the runit recipe' do
      expect(chef_run).to include_recipe 'runit'
    end

    it 'installs OpenJDK' do
      expect(chef_run).to install_package 'java-1.7.0-openjdk'
    end
  end
end

Now when we run the test, it passes:

$ rspec -fd spec/default_spec.rb 

  includes the runit recipe
  installs OpenJDK

Finished in 0.57793 seconds
2 examples, 0 failures

That's all fine and dandy, but suppose we execute some Ruby for our guard instead of a shell command. Here's an example from one of my cookbooks, in which I set the correct SELinux policy to allow Apache to proxy to a locally running Netty server:

unless (node['platform'] == 'Amazon' or node['web_proxy']['selinux'] == 'Disabled')
  execute 'Allow Apache Network Connection in SELinux' do
    command '/usr/sbin/setsebool -P httpd_can_network_connect 1'
    not_if { Mixlib::ShellOut.new('getsebool httpd_can_network_connect').run_command.stdout.match(/--> on/) }
    notifies :restart, 'service[httpd]'
  end
end

Now, OK, I could have used grep, but I prefer this approach, and it's a good enough example to illustrate how we handle this kind of case in ChefSpec. First, let's write a test:

it 'sets the Selinux policy to allow proxying to localhost' do
  expect(chef_run).to run_execute('Allow Apache Network Connection in SELinux')
  resource = chef_run.execute('Allow Apache Network Connection in SELinux')
  expect(resource).to notify('service[httpd]').to(:restart)
end

If we were to run this, ChefSpec would complain that we didn't have an execute resource with a :run action in our resource collection. So we then add the execute block from above to the default recipe. I'm going to omit the platform check for simplicity, and just include the execute resource. We're also going to need to define an httpd service. Of course we're never actually going to run this code, so I'm not fussed that the service exists despite us never installing Apache. My concern in this article is to teach you about testing, not to write a trivial and pointless cookbook.

Now our recipe looks like this:

include_recipe 'runit'
package 'java-1.7.0-openjdk'

service 'httpd'

execute 'Allow Apache Network Connection in SELinux' do
  command '/usr/sbin/setsebool -P httpd_can_network_connect 1'
  not_if { Mixlib::ShellOut.new('getsebool httpd_can_network_connect').run_command.stdout.match(/--> on/) }
  notifies :restart, 'service[httpd]'
end

When we run the test, we'd expect all to be fine. We're asserting that there's an execute resource, that runs, and that it notifies the httpd service to restart. However, this is what we see:


  1) stubs-and-doubles includes the runit recipe
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
       No such file or directory - getsebool httpd_can_network_connect
     # /tmp/d20140208-30704-g1s3d4/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:23:in `block (3 levels) in <top (required)>'

  2) stubs-and-doubles installs OpenJDK
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
       No such file or directory - getsebool httpd_can_network_connect
     # /tmp/d20140208-30704-g1s3d4/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:27:in `block (3 levels) in <top (required)>'

  3) stubs-and-doubles sets the Selinux policy to allow proxying to localhost
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
       No such file or directory - getsebool httpd_can_network_connect
     # /tmp/d20140208-30704-g1s3d4/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:31:in `block (3 levels) in <top (required)>'

Finished in 1.11 seconds
3 examples, 3 failures

Boom! What's wrong? Well, ChefSpec isn't smart enough to warn us about the guard we tried to run, and actually tries to run the Ruby block. I'm (deliberately) running this on a machine without the getsebool command, to trigger this response, but on my usual workstation running Fedora, this will silently pass. This is what prompted me to write this article: my colleague who runs these tests on his Mac kept getting this No such file or directory - getsebool httpd_can_network_connect error, despite the Jenkins box (running CentOS) and my workstation working just fine.

So - what's the solution? We need to do something similar to what ChefSpec did for us earlier. We need to create a test double, only this time it's Mixlib::ShellOut that we need to stub. There are three steps to follow. We need to capture the :new method that is called on Mixlib::ShellOut; then, instead of returning canned data as we did when we called stub_command, we want to return the test double, standing in for the real instance of Mixlib::ShellOut; and finally we want to control the behaviour of the test double, making it return the output we want for our test. So, first we need to create the test double. We do that with the double method in RSpec:

shellout = double

This just gives us a blank test double - we can do anything we like with it. Now we need to stub the constructor, and return the double:

  allow(Mixlib::ShellOut).to receive(:new).and_return(shellout)

Finally, we specify how the shellout double should respond when it receives the :run_command method.

  allow(shellout).to receive(:run_command).and_return('--> off')

We want the double to return a string that won't cause the guard to be triggered, because we want to assert that the execute method is called. We can add these three lines to the before block:

before(:each) do
  stub_command("rpm -qa | grep -q '^runit'").and_return(true)
  shellout = double
  allow(Mixlib::ShellOut).to receive(:new).and_return(shellout)
  allow(shellout).to receive(:run_command).and_return('--> off')
end

Now when we run the test, we'd expect the Mixlib::ShellOut constructor to be stubbed, the test double returned, and the double to respond to :run_command with a string that doesn't match the guard - and thus the execute should run! Let's give it a try:


  1) stubs-and-doubles includes the runit recipe
     Failure/Error: let(:chef_run) {  ChefSpec::Runner.new.converge(described_recipe) }
       undefined method `stdout' for "--> off":String
     # /tmp/d20140208-30741-eynz5u/stubs-and-doubles/recipes/default.rb:8:in `block (2 levels) in from_file'
     # ./spec/default_spec.rb:20:in `block (3 levels) in <top (required)>'
     # ./spec/default_spec.rb:23:in `block (3 levels) in <top (required)>'

Alas! What have we done wrong? Look closely at the error. Ruby tried to call :stdout on a String. Why did it do that? Look at the guard again:

not_if { Mixlib::ShellOut.new('getsebool httpd_can_network_connect').run_command.stdout.match(/--> on/) }

Aha... we need another double. When run_command is called on the first double, we need to return something that can accept a stdout call, which in turn will return the string. Let's add that in:

before(:each) do
  stub_command("rpm -qa | grep -q '^runit'").and_return(true)
  shellout = double
  getsebool = double
  allow(Mixlib::ShellOut).to receive(:new).and_return(shellout)
  allow(shellout).to receive(:run_command).and_return(getsebool)
  allow(getsebool).to receive(:stdout).and_return('--> off')
end

Once more with feeling:

$ bundle exec rspec -fd spec/default_spec.rb 

  includes the runit recipe
  installs OpenJDK
  sets the Selinux policy to allow proxying to localhost

Finished in 0.7313 seconds
3 examples, 0 failures

Just to illustrate how the double interacts with the test, let's quickly change what getsebool returns:

allow(getsebool).to receive(:stdout).and_return('--> on')

Now when we rerun the test, it fails:


  1) stubs-and-doubles sets the Selinux policy to allow proxying to localhost
     Failure/Error: expect(chef_run).to run_execute('Allow Apache Network Connection in SELinux')
       expected "execute[Allow Apache Network Connection in SELinux]" actions [] to include :run
     # ./spec/default_spec.rb:31:in `block (3 levels) in <top (required)>'

This time the guard prevented the execute from running, and as such the resource collection didn't contain this resource, and so the test failed.
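The guard's decision boils down to a line of plain Ruby, which is worth seeing on its own. This is a sketch - should_run? is a hypothetical helper name, not part of Chef or ChefSpec:

```ruby
# The not_if block suppresses the action when its result is truthy.
# Here we mimic that logic: the execute resource runs only when the
# (canned) stdout does NOT match /--> on/.
def should_run?(stdout)
  !stdout.match(/--> on/)
end

puts should_run?('httpd_can_network_connect --> off')  # true  - execute runs
puts should_run?('httpd_can_network_connect --> on')   # false - guard blocks it
```

Swapping the double's canned string between '--> off' and '--> on' flips this result, which is exactly why the test above went from passing to failing.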


One of the great beauties of ChefSpec (and of course Chef) is that at its heart it's just Ruby. This means that at almost any point you can reach into the standard Ruby development toolkit for your testing or infrastructure development needs. Hopefully this little example will be helpful to you. If it inspires you to read more about mocking, Gerard Meszaros' xUnit Test Patterns, mentioned above, is an excellent place to start.

Some Thoughts on Reading "The Phoenix Project"

Like a whole bunch of people, I expect, I've just finished reading The Phoenix Project, and I thought it was pretty good. Readable and engaging, and full of good ideas.

If you haven't read it, the basic premise is that a middle-grade IT ops manager is unwillingly and abruptly catapulted into the role of acting VP of IT Operations in a big, traditional, but failing, parts manufacturing firm. The firm's on the brink of collapse, losing market share and full of problems that everyone can see but no-one knows how to solve. With the help of a rather mysterious mentor figure - who may or may not be on the board - our hero studies the manufacturing side of the company, heals the rifts between dev and ops, integrates IT throughout the whole company and learns to love Devops.

Something I didn't expect from reading it was that I was struck by how much of the 'reform' presented in it sounded like basic common sense and normal practice to me. I mean, what's so radical about continuous deployment?! I guess that just shows I've "grown up" professionally in the world of Devops, infrastructure as code, automation and all that. And when I last worked in a large or traditional company, IT was the guy who came in and fixed the printer.

So I wonder how many readers of The Phoenix Project found it radical and new, and for how many it was just preaching to the choir? I guess that depends on how well it's being marketed outside the Devops space - and on how fast new people are starting to explore the Devops idea space.

I was at DevopsDays London earlier this month, as my company was one of the sponsors, and the book got a lot of mentions and a sponsored free Kindle download. In the pub on Monday night I got talking to Nick Stacey about the book, Devopsdays and Devops - among other things - and his perspective was interesting, and relevant to my thoughts on Phoenix.

So, Nick is new to Devops - Devopsdays was his first exploration of the concept - yet much of what he saw and heard seemed familiar to him. Some companies, he told me, have in his experience always been this way. Digital agencies and startups - which form his background - have to be that way to survive. "Anyone trying to empire build has to be stopped, because they'd impact profit. Agencies are constantly battling to make money, working on a project basis, and have to constantly drive the projects through".

This brought me to another observation I had which was how much of the book was specific to a very big, long-established firm. That's very interesting of course, but now I'd really like to read a similar book, only about a small but expanding startup. Any recommendations for further reading along these lines?

Chef: The Definitive Guide

Writing books is hard. I used to think it was a lot like writing a blog, only scaled up a bit. But it's much more involved than that. Once you're a published author, suddenly you have obligations and responsibilities to your publisher, and to the paying, reading public. The ante has been well and truly upped.

A few years ago I wrote a slim volume - Test-driven infrastructure with Chef. At the time I wrote it, I was pretty much the only person trying to do infrastructure code in a fashion inspired by the TDD and BDD schools of thought which I practiced as a software developer. When it was published, it was remarkably popular, and despite really being a book about test-driven development, and infrastructure code, it was widely read as a book 'about' Chef.

The problem was, this was the only published book on Chef. Chef as a tool was growing in popularity, and regular complaints were heard about the quality of the public documentation on the Opscode wiki, and about the failure of my volume to be a comprehensive introduction to Chef. Notwithstanding the observation that my first book was never intended to be a comprehensive introduction to Chef, both O'Reilly and I took this on board, and began work on a "Definitive Guide" - a full-length, comprehensive book on Chef. And that's where the problems started.

Chef is a large project, with a very active community. It's also a young project, moving very quickly indeed. Any attempt to capture the 'recommended approach' at a point in time was quickly rendered obsolete. New features were being added at breakneck speed. New community tools were being developed, and 'best practice' was in a constant state of flux. It was clear that writing a 'definitive' guide was going to be a significant undertaking, and quite possibly a flawed one.

At the same time, interest in test-driven infrastructure was exploding. New testing approaches and tools were blossoming, dozens of talks and discussions were being had, and my little introduction to the idea was getting slammed by its readership for covering only one approach, for being too short, and for not being a definitive guide to Chef. Did I mention that writing books is hard?

I had to make a decision. With limited time available, with a busy speaking, training and consulting schedule, and a large family, where should I focus my attention? After discussions with O'Reilly, and friends and colleagues, I decided that I should work on updating the initial Test-driven Infrastructure book. Now, in microcosm, we had the very same problem of a rapidly growing toolset and community to deal with. Tools came and went, underwent complete rewrites, and best practices continued to evolve. I also felt it was necessary to try to give a more thorough Chef overview, in response to feedback on the first volume. I worked incredibly hard on the 2nd edition. Frankly, too hard. I made myself ill, got entirely burned out, and let down family, friends, colleagues and readers. But, by the time of Velocity, this summer, we were pretty much finished. I met with my editor at the conference, and we surveyed the landscape.

The much-hated wiki had been replaced by a greatly improved documentation website. Opscode had hired a technical writer to work on documentation, and community engagement in improving and maintaining that resource was growing. We remained convinced that trying to write a "definitive guide" at this stage was not a wise choice.

Additionally, Seth Vargo had begun an excellent project: learnchef. Aimed at being the ultimate hands-on quickstart guide, this was immediately well-received, and together with my second edition, filled the requirement for an introduction to Chef. The intermediate, reference-level specification of the core Chef functionality was adequately covered by docs.opscode.com. What we felt was missing was deep-dive, subject-specific discussions. How do I build infrastructure on Windows? How can I build a continuous delivery pipeline using Chef? How do I do advanced programming in the Chef environment? That sort of thing.

We agreed to cancel the definitive guide project, with a view to working on these subject-specific guides. I tweeted about this, more than once, and shared our intention with friends in the community. What we didn't do, either O'Reilly, or me, was make a formal, public announcement. That was a mistake. I can't apologise on behalf of O'Reilly, but I can apologise personally. That was unprofessional of me: I'm sorry.

So, it's now summer, and I'm utterly exhausted. But in the spirit of the invincible superhero, I continued to take on work, travel, speak, and generally over-commit. My physical and mental health deteriorated further, until late August when I had to accept I was at the point of complete breakdown. I took about 6 weeks off, recovered my perspective, got some good rest, and left the editorial process in the safe hands of O'Reilly, and Helena.

Fast-forward to now. The 2nd edition of Test-Driven Infrastructure is out, and early reviews are positive. I'm rested, healthy, and hopefully wiser. I've learned a lot about Chef, and about writing, and am ready to start on my next project... this time with co-authors from day one. I have ideas on what we should cover first, but I'm open to suggestions and requests.

In the meantime, we have good resources available. Use learnchef, join the IRC channels, participate in the mailing list, read my book. Matthias Marschall has just finished his book on Chef, which is also excellent. People who lament the quality of the official documentation - please: give specific examples of where you feel information is missing, the writing is poor, the material is misleading. Remember: this is a community project - if you think you can improve the documentation, submit a pull request, and make it better for everyone. Opscode is committed to great documentation, and the decision not to try to write a definitive guide forces us as a community to build this reference ourselves, openly.

To conclude - I acknowledge that I've let down the people who were so eagerly anticipating "The Definitive Guide". I also accept that we handled the communication of our decision badly. But I think it's the right decision. And I think we're in a strong position to move forward and build on the resources we already have as a community. Will you help?