How I make interesting technical presentations

Whenever I talk at conferences, I am routinely asked how I go about preparing and making my presentations.

There are no hard and fast rules, but these are some things I have learnt:

Start analog

The most limiting thing you can do when you start putting together a presentation is to reach for slideware. I use a paper notebook to brainstorm my ideas with multicoloured pens, then scan it so I can refer back to it quickly when putting the slides together.

mindmapping a talk

Don't create slides linearly

I focus on the idea in the brainstorm that surprised me the most when I wrote it down, and use it as a jumping-off point for creating slides. I've found exploring that initial idea helps set the tone for the rest of the presentation.

Weave a story

Kathy Sierra used to bang on about this heaps. We're wired as a species to find stories interesting, so use this to your advantage.

But don't concoct a story just for the talk - try to relate the content back to your own experiences. Nobody wants to hear about Alice and Bob; they want to hear how you and your co-workers rose above adversity, and about the setbacks you had along the way.

Chris Fegan's NBNCo talk at Puppet Camp Sydney 2013 was a good example of how to weave technical detail into an organisational growth story.

Use slides appropriately

They are a visual aid, and a visual aid alone. People's attention should be on you - you are the speaker after all! Use lots of supporting visuals, and minimal text. No bullet point lists! Put each point on a separate slide.

I use Flickr's Creative Commons search to find relevant images, and favourite them when I want to use them again across multiple presentations. Sometimes they even provide a visual trigger that moves the presentation in a direction I wasn't expecting.

If I post the slides after the presentation, it's always nice to comment on the picture on Flickr to let the photographer know I appreciate their contributions to Open Culture.

Don't rely on the slides

Ideally if your laptop died 5 minutes before the talk, you should know your material well enough that you could deliver it by voice alone.

Be thorough

Shortcuts are obvious to your audience. I spend at least 20 hours preparing each presentation.

A lot of that time is research (I spent 10 hours alone doing research on AF447 before I created a single slide, and that research was probably too little given the depth of subject matter), and a lot of it is finding images on Flickr. :-)

Maybe 20 hours is a lot, but every minute you put into preparation pays off.

Tailor your content

It's ok to give the same talk at multiple conferences, but make sure you alter the content so it's relevant to your audience.

I gave my cucumber-nagios talk tens of times over an 18 month period, but the talk was different every time.

If I was at a developer conference, I would talk about how to reuse your existing tests as monitoring checks. If I was at a sysadmin conference, I would talk about testing systems infrastructure. If I was at a DevOps conference, I would talk about encoding & communicating business processes in your monitoring.

Practice, practice, practice

Know the timing of your talk. Work out the average time you should spend on each slide - a 40 minute slot with 80 slides gives you 30 seconds a slide. I generally rehearse each talk at least 3-5 times before I give it for the first time, and will revise and rehearse at least 1-2 times before subsequent presentations.

Don't wait until you've finished the presentation before you start practicing. I'll often practice the 20% I've put together and discover it feels mechanical, or the ideas don't flow well into one another. Refactor.

Test your equipment

Plug your laptop into the projector at least once, preferably twice, before your talk. I carry multiple adapters for every conceivable display type out there, some display cables, a power board, and a clicker. Test everything, then test it again.

Mirror your display

It's tempting to use your laptop screen for presenter notes and stopwatch widgets. Don't. Know your material. Use a physical stopwatch. Split displays will break unexpectedly, and you'll lose your flow. Besides, mirroring is always easier than craning your neck to see what your audience is seeing.

Watch yourself

If you're lucky enough to talk at a conference where your talk is recorded, go back and watch it. This is vitally important for working out which bits flowed well and which bits were stilted.

--

The most important thing is to speak at as many events as possible. You're only going to get better at presenting by presenting. Start working towards those 10,000 hours of mastery!

DevOps Down Under 2012 – what happened?

A couple of days ago, Patrick kicked off a discussion about organising another Australian DevOps conference in 2013 amongst a small group of passionate people who are actively involved in the Australian DevOps community.

While the discussion was trundling on without me, I felt I owed everyone involved an explanation of what happened with this year's unrealised conference, and why the conference fell flat.

Let's start at the beginning.

Having come back from a year of backpacking around Europe and attending the first DevOpsDays conference, I took it upon myself to try and replicate the success by organising the first DevOps Down Under conference in 2010.

It was a relatively small affair held downstairs at Atlassian's Corn Exchange offices in Sydney, and I put the thing together on a shoestring budget in my spare time with some on-the-ground help from Atlassian's Nicholas Muldoon.

The event was successful, with people from all across Australia and New Zealand attending. At the end of the conference, each attendee was asked to write down one thing they loved, and one thing they hated, about the conference.

Stacks of love and hate

This gave me a great starting point to build another conference on, and in early 2011 I started getting the itch to do another. At the same time, Evan Bottcher pinged me about ThoughtWorks lending a hand to organise another DevOps Down Under in Melbourne later in 2011.

The most consistent feedback we got from the 2010 conference was that the coffee was "a little bit shit", so we fixed that by moving the whole conference to Melbourne.

After an initial planning meeting, ThoughtWorks kindly lent Chris Bushell and Natalie Drucker to assist with organising.

I was just starting a new position at work, and wasn't able to dedicate nearly as much time to organising as I had in 2010. I provided the initial vision and direction, but without Chris and Natalie's tireless efforts and persistent pestering of me to get my arse into gear, the conference would have been but a shadow of itself.

Attendees at #dodu2011

By the time DevOps Down Under 2011 wrapped up in July, I was tired and wasn't feeling fired up about putting on another conference just yet. I decided to wait and see how I felt in the new year.

Around March this year I started thinking about doing another conference, but the spark wasn't there like in previous years. I decided to press on regardless, motivated by the perceived expectation that people wanted another conference.

The vision for DevOps Down Under 2012 was to build a quiet, intimate, and safe atmosphere that was removed from the rat race. To achieve this, the plan was to cap the number of attendees at 140, find a venue outside a major capital city, and source high quality talks.

Venue shot for #dodu2012

The venue and budget were in place, and we got a really great collection of talks submitted. I simply failed to execute on anything beyond that.

The main reasons why execution failed were:

  • I had lost the passion for organising the conference, and was motivated by the wrong reasons.
  • I had even less time to commit.
  • Everyone involved was similarly time poor.
  • There was no organisational cadence.
  • I didn't lean enough on other people to help me do the grunt work.
  • I didn't have the time to fix any of these problems.

With the benefit of hindsight, I simply shouldn't have tried to put it on.

Seeing people putting their hands up to organise a 2013 conference takes a huge mental weight off my shoulders.

Through my own actions and inactions, I have felt that the responsibility of leading the conference organisation has fallen to me year-on-year. In 2012 that pressure became paralysing, and my eventual coping mechanism was to ignore the conference entirely.

As for my future involvement: I am still burnt out, and it would simply be unfair to myself, the organisers, speakers, and attendees to commit to taking an active role in organising a 2013 conference.

I have provided the current crop of potential organisers with a collection of resources to get them started, and I am extremely confident they will manage to pull off something spectacular.

Drawing on my battered experience of organising several conferences, these are the key actionable things I believe you need to make an event like DevOps Down Under happen:

  • Have at least 3 people who can each dedicate 2+ hours a week to doing the grunt work. Anyone who tells you organising a conference is anything but a hard slog is either lying to you, or doesn't know what they are talking about.
  • Do weekly catchup meetings to keep things on track. Increase the frequency of these closer to the conference date.
  • Use a mailing list for asynchronous organisation.
  • Nominate someone to lead & own the conference vision & organisation.

I hope the above arms you with enough information to avoid falling into the same traps I did.

Ript: quick, reliable, and painless firewalling

Running your own servers? Hate managing firewall rules?

For the last year at Bulletproof Networks I've been working on a little tool called Ript to make writing firewall rules a joy, and applying them quick, reliable, and painless.

Ript is a clean and opinionated Domain Specific Language for describing firewall rules, and a tool with database-migration-like functionality for applying those rules with zero downtime.

The DSL

At Ript's core is an easy to use Ruby DSL for describing both simple and complex sets of iptables firewall rules. After defining the hosts and networks you care about:

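Something along these lines - a rough sketch only, with illustrative names and addresses; the DSL documentation shipped with Ript has the authoritative syntax:

partition "joeblogsco" do
  host "www.joeblogsco.com" do
    address "10.60.2.13"
  end
end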

...you use Ript's helpers for accepting, dropping, & rejecting packets, as well as for performing DNAT and SNAT:

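Again a sketch (helper names approximated; the examples shipped with the tests show the real syntax):

partition "joeblogsco" do
  label "public web traffic" do
    accept "http" do
      protocols "tcp"
      ports     80
      from      { host "any" }
      to        { host "www.joeblogsco.com" }
    end
  end
end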

The DSL provides many helpful shortcuts for DRYing up your firewall rules, and tries to do as much of the heavy lifting for you as possible.

Part of Ript being opinionated is that it doesn't expose all the underlying features of iptables. This was done for several reasons:

  • The DSL would become complex, and thus harder to use.
  • Not all features within iptables map cleanly to Ript's DSL.
  • Ript caters for the simple-to-moderately complex use cases that 80% of users have. If you need to use iptables features documented deep within the man pages, Ript is almost certainly not the tool for you.

Rule application

While the DSL is pretty, we didn't write Ript because of it - we wrote it because we're working with tens of thousands of iptables rules & making several changes a day to those rules, and the traditional way of applying changes doesn't cut it at scale.

Most tools apply firewall rules by flushing all the loaded rules and loading in new ones. This works fine if you only have a few hundred rules, but as soon as you start scaling into thousands of rules, the load time becomes very noticeable.

The effects of this are fairly simple: the rule load time manifests itself as downtime.

Because the ruleset has to be applied serially, rules at the end of the set are held up by rules still being applied at the beginning of the set. From a service provider's perspective, this means that a rule change for one customer can end up causing downtime for other completely unrelated customers. Not cool.

iptables-save and iptables-restore help with this, but you still end up writing + applying rules by hand - a tedious task if you're making lots of firewall changes every day.

Ript's killer feature is incrementally applying rules.

Ript generates firewall chains in a very specific way that allows it to apply new rules incrementally, and to clean out old rules intelligently.

Getting started

Ript has been Open Sourced under an MIT license, and is available on GitHub. To get you going, Ript ships with extensive DSL usage documentation, and a boatload of examples used by the tests.

I'll also be giving a talk about Ript at linux.conf.au in Canberra in January 2013.

Happy Ripting!

Incentivising automated changes

Matthias Marschall wrote a great piece last week on the pitfalls of making manual changes to production systems. TL;DR: making manual changes in the heat of the moment will bite you at the most inopportune times.

The article finishes with this suggestion:

You should have your configuration management tool (like Puppet or Chef) setup so that you can try out possible solutions without having to go in and do it manually.

In my experience, this is the key to solving the problem.

Rather than coercing people to follow a "no manual changes" policy, you make the incentives for making changes with automation better than for making changes manually.

Specifically:

  • Make it simple. Reduce the number of steps to make the change with automation. It should be quicker to find the place in your Chef or Puppet code and deploy than logging into the box, editing a file, and restarting a service.
  • Make it fast. The time from thinking about the change to the change being applied should be shorter with automation than by doing it manually.
  • Make it safe. Provide a rollback mechanism for changes. A safety harness can be as simple as a thin process around "git revert" + deploy, as sketched below.

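As a minimal sketch of that last point - hypothetical, assuming your configuration management repository deploys via a deploy.sh script, and that reverting the offending commit is enough to roll back:

#!/usr/bin/env ruby
# rollback.rb - a thin safety harness around "git revert" + deploy.
# Usage: ruby rollback.rb <bad-commit-sha>
sha = ARGV.fetch(0) { abort "usage: rollback.rb <bad-commit-sha>" }

# Revert the offending commit without opening an editor.
system("git", "revert", "--no-edit", sha) or abort "git revert failed; resolve manually"

# Deploy the reverted state with whatever mechanism you already use.
system("./deploy.sh") or abort "deploy failed; retry or fix forward"

puts "Reverted #{sha} and redeployed."
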
It's a perfect example of how tools should complement culture.

OS X Workstation Management with Chef

I have written twice before about managing my OS X workstations with Chef. The first post has one of the highest hit counts of any of my blog posts, so it is certainly a topic of interest to people.

This post is a rewrite of the original, and now has an accompanying Chef Repository where all the code I talk about is available.

Background

The current incarnation of this repository lives in the more general private Chef Repository I use for my home network, since I manage more than just workstations with Chef. I have used the main recipe (workstation::default) with great success on several Mac OS X systems: MacBook Pro, iMac, and MacBook Air, running various versions between 10.6 and 10.8. I recently used it to configure a replacement MacBook Pro for work, and then again after upgrading to Mountain Lion.

The Setup

Also known as “bootstrapping Chef”, this is the process of setting the system up so it can run Chef.

  • Opscode Hosted Chef is my Chef Server
  • I run everything as a non-privileged user

Installation

I use the Opscode full-stack installer on all my systems, including OS X, because it includes everything Chef needs, including Ruby.

Whether you’re unpacking a brand new Mac, or using an existing system, use this command:

curl -L https://opscode.com/chef/install.sh | sudo bash

Symbolic links are created for Chef’s binaries in /usr/bin.

Mountain Lion Note: I did this on Lion before upgrading to Mountain Lion. Apple removed X11 from Mountain Lion, and the installer opens an xterm, so I don’t know how/if this works the same on a brand new Mountain Lion system.

Configuration

Next, create a configuration file for the Chef Server, and copy the validation key into place. If this is a new Mac, you’ll need to get your validation key copied to the system.

sudo mkdir -p /etc/chef
sudo vi /etc/chef/client.rb
sudo cp ~/Downloads/ORGNAME-validation.pem /etc/chef/validation.pem

I am using Opscode Hosted Chef, and this is my /etc/chef/client.rb. The path options are there so Chef writes its files to a location my user has write access to.

base_dir = "/Users/USERNAME/.chef"
chef_server_url         'https://api.opscode.com/organizations/ORGNAME'
validation_client_name  'ORGNAME-validator'
checksum_path           "#{base_dir}/checksum"
file_cache_path         "#{base_dir}/cache"
file_backup_path        "#{base_dir}/backup"
cache_options({:path => "#{base_dir}/cache/checksums", :skip_expires => true})

The Repository

I have made the repository available on GitHub.

git clone git://github.com/jtimberman/workstation-chef-repo.git

Normally, systems that are configured with Chef wouldn't have the Chef Repository on them. For the purposes of this post, clone the repository to the local system. Presumably, one might do further development on it.

Before we upload it and run Chef, let’s explore what is included.

Data Bags

The repository contains two data bags with a single item each. One is for the local user, the other is for the workstation setup.

The USERNAME should be changed to the local user that is being configured on the workstation. To ensure that the correct value is used, run the following and use the value returned.

% ruby -retc -e 'puts Etc.getlogin'
jtimberman

Thus, I use jtimberman for my systems.

The user data bag item is used in two cookbooks, users and workstation. This is described below under Cookbooks.

The workstation data bag item contains various data about the workstation itself, software that should be installed, property list files dropped off, etc. The JSON file in the repository contains several examples. Modify this as required for your own system.

Roles

There are three roles.

base

This is the role I apply on all my systems, not just workstations. Aside from the contents of the role file in the repository, I also set attributes across my systems for a variety of other purposes like postfix, munin, ntp and so forth. For the workstation setup purposes, it contains the attributes I use for installing Ruby under Rbenv, and the gems I want available on all my systems that aren’t project specific (I use bundler for those).

name "base"
description "Base role for all nodes"
override_attributes(
  "ruby_build" => {
    "git_ref" => "v20120524",
    "upgrade" => true,
    "install_pkgs" => []
  },
  "rbenv" => {
    "install_pkgs" => [],
    "user_installs" => [
      {
        "user" => "USERNAME",
        "rubies" => ["1.9.3-p194"],
        "global" => "1.9.3-p194",
        "gems" => {
          "1.9.3-p194" => [
            {"name" => "bundler", "version" => "1.1.1"},
            {"name" => "git-up"},
          ]
        }
      }
    ]
  }
)

Edit the list of gems as required for your preferences. The ones included in the role are what I find useful or required for my day to day work on Chef and Chef-related projects (like opscode-cookbooks).

The base role does not have a run list. It is included instead by OS specific roles that I apply to Ubuntu or OS X systems respectively. As this is a post for my OS X workstations, let’s look at that role next.

mac_os_x

The mac_os_x role is applied on all my OS X systems. Of note, it includes the base role and the homebrew recipe. The homebrew cookbook includes a package resource provider that replaces Chef's default provider for OS X, MacPorts, with Homebrew.

name "mac_os_x"
description "Role applied to all Mac OS X systems."
run_list(
  "role[base]",
  "recipe[build-essential]",
  "recipe[homebrew]"
)

This role probably doesn’t need to be edited.

workstation

This is the role of interest, which contains the workstation specific run list and attributes.

The role itself is very long, so I won’t include it here. You can view it in the repository.

Do note that recipe[mac_os_x::firewall] requires root access, and will prompt for the sudo password (and pause the whole run until entered).

Edit the mac_os_x settings as required for your own preferences. Edit other attributes as required for software you wish to use or install.

Cookbooks

The repository uses a number of cookbooks, most of which are published on the Chef Community site as well. I’m not going to describe all the cookbooks in this post, just the ones that are most relevant for workstation setup.

Development Essentials

These are:

  • build-essential
  • homebrew
  • git
  • ruby_build
  • rbenv

On OS X, build-essential will install Kenneth Reitz's OS X GCC Installer - Xcode is not required. Of course, you may have other reasons why you want to have Xcode, and that is outside the scope of this repository. If so, remove the build-essential recipe from the roles.

The homebrew cookbook uses Homebrew as the default package provider on OS X. The default recipe will install Homebrew itself, install Git from Homebrew, and ensure that the formulae are updated.

IMPORTANT NOTE Homebrew recently “broke” (in my opinion) the output of brew info. I manually patched my local copy of Library/Homebrew/cmd/info.rb after discovering this halfway through setup of my new system.

The git cookbook installs git using the Git OS X installer, and the git binary will be /usr/bin/git. This is redundant with git installed from homebrew, but at some point I had issues and I don’t remember if they were resolved. If you wish to use git from homebrew, use /usr/local/bin/git instead.

The ruby_build and rbenv cookbooks are by Fletcher Nichol, and are quite excellent for installing per-user Rubies of a specific version, and gems using the rbenv_gem LWRP. The base role has the attributes set up for how I like this, YMMV.

users

I use an older, modified version of Opscode's users cookbook, from before the users_manage LWRP. The recipe adds the capability to distribute arbitrary files, such as dotfiles, for users. To use this, create a “files” section in the users data bag item. The USERNAME.json item includes examples of this. Each file needs to be copied to cookbooks/users/files/default/USERNAME/ as the source file name used in the data bag item.

workstation

The workstation cookbook has a recipe that does all the work of reading the workstation data bag item and setting up the system per the data available.

The README.md in the cookbook contains detailed information about its use, and the data bag item already has the structure to get started.

If the plists array is used, then each plist file should be copied into the files/default/ directory.

mac_os_x

My mac_os_x cookbook has two LWRPs that I use elsewhere in this repository:

  • mac_os_x_plist - drops off property list (plist) files in ~/Library/Preferences
  • mac_os_x_userdefaults - writes OS X user settings with the defaults(1) system
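
For example, a recipe can write a Dock preference with the user defaults LWRP - a sketch in the style of the cookbook's README, which has the full attribute list:

mac_os_x_userdefaults "autohide the dock" do
  domain "com.apple.dock"
  key    "autohide"
  value  "1"
end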

The plist files used by mac_os_x_plist should be added in the files/default directory of the cookbook where the resource is used in a recipe.

The mac_os_x::settings recipe will read the node['mac_os_x']['settings'] attribute for user defaults to apply.

See the mac_os_x cookbook’s README for more information.

Applications

The following are application specific cookbooks that I use:

  • iterm2
  • virtualbox
  • ghmac
  • 1password
  • xquartz

The iTerm2 cookbook will set up iTerm 2 and optionally add tmux integration. I wrote about this a while back.

We use Vagrant extensively at Opscode, which requires VirtualBox. This recipe will install it per the attributes set in the workstation role.

Install GitHub for Mac with the ghmac cookbook. Local setup for it is on your own.

I have 1Password in here because I used to install it from the zip file, but I may remove this at some point since I install it from the Mac App Store now.

Note that the versions of these apps may be old, but they have Sparkle.framework or can otherwise update themselves to newer versions easily. Click the buttons, it’s cool.

I don’t have a recipe for managing application installation through the Mac App Store. It’s really not that hard to fire up the app and click the “Install” button next to the apps you want though. Seriously, it would take longer to figure out a command-line or API way to do this, if it’s even possible. Just click the button.

Others

The other cookbooks in the repository are there as dependencies and may or may not be used specifically.

Upload Repository, Run Chef

Once it is cloned, all the components need to be uploaded to the Chef Server with Knife. As that is installed with Chef, it will be available. The knife config file and user key do need to be copied to .chef in the chef-repo. If necessary, download them from Opscode Hosted Chef (or your Chef Server).

cd workstation-chef-repo
mkdir .chef
cp ~/Downloads/knife.rb .chef
cp ~/Downloads/USERNAME.pem .chef

Make your changes to the data bags and roles. I’ll wait here.

Then, upload everything.

knife data bag create users
knife data bag create apps
knife data bag from file users USERNAME.json
knife data bag from file apps workstation.json
knife role from file base.rb mac_os_x.rb workstation.rb
knife cookbook upload -a

Finally, run Chef!

% whoami
jtimberman
% chef-client
INFO: *** Chef 10.12.0 ***
… loads of output, hooray …
INFO: Chef Run complete in 45.116912 seconds

FAQ

These aren’t necessarily questions anyone asked, but a more preemptive FAQ :).

This seems heavyweight, why all this effort?

As a sysadmin, I want to do something once and automate it afterward. That includes all the stuff I need to do to have a useful, usable work environment. This means that when I get a new computer, or have to wipe and reinstall (rare, but happens), I can get back to a productive environment very quickly.

I have three OS X systems I use regularly (work laptop, personal laptop, family iMac). Having them in a Chef Server gives me access to information about these systems easily with knife.

Also, this post is focused specifically on OS X, however this setup works pretty much as is on Linux. I simply don’t use Linux as a desktop OS, but I do have “workstation-like” systems that I SSH into, and this is generally fine for those.

Why Chef Server? Why not Chef Solo?

Honestly, I don’t actually use Chef Solo except as a way to setup a Chef Server. Since I use Chef Client/Chef Server so often, it is second nature for me to do it. You’re free to adapt this to work with Solo.

Will you support Windows with this repository?

No. I don’t use Windows as a workstation/desktop anymore.

It might just work on Windows though. It did once, but I haven’t tried in a few months.

I want to make this moar awesome, will you merge my pull request?

Thank you. I appreciate that you want to help me, or other members of the community. However I consider this pretty much “feature complete”, as it meets all my needs, and I don’t plan to merge any pull requests.

For individual cookbooks, they have their own repositories linked from their pages on the Chef Community site.

Why do you have redundancy or inconsistent use?

Such as plist file location, dmg installation, etc.

Because: Reasons. This codebase has been developed over ~2 years. It works for me.

How can I get help?

You can email me. However, as I said before this is a free time project, so I might not respond right away. If you’re an Opscode Hosted or Private Chef customer, please contact Opscode support. Finally, community based support is available through our community resources.

Further resources

If this is a topic of interest to you, I'd also like to point out a few similar projects that may be interesting. They have inspired me and some of the things I have implemented in my own setup, so thank you Ben, Corey, and Matthew and Brian at Pivotal!

Mountain Lion Upgrade

I upgraded my work laptop to Mountain Lion today. It was not as smooth as previous OS X upgrades have been for me, despite my efforts in managing my workstation(s) with Chef.

I received a replacement laptop for the one that was damaged at ChefConf a couple weeks ago (champagne spill - long story, maybe for another blog post). As this is a new laptop, it is eligible for the free Mountain Lion upgrade. So on ML release day, I submitted my information to Apple for the redemption code, which I received this morning. As I’m actually on vacation this week, I thought there was no better time to upgrade.

You may recall that I have managed my workstations with Chef for quite some time. This has been all well and good so far, though the Mountain Lion installation didn’t seem to like something in my preferences along the way.

After the installation finished and the system rebooted, I logged in, expecting to be greeted with my already configured system. This was not the case, however! My desktop was that light grey of the OS X boot screen, and the Dock was not running at all. I put on my sysadmin hat (like I ever take it off?!), and started debugging. I found the issue pretty quickly.

Jul 27 08:42:35 champagne.local Dock[423]: -[__NSCFBoolean isEqualToString:]: unrecognized selector sent to instance 0x7fff75d0fab0
Jul 27 08:42:35 champagne.local Dock[423]: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFBoolean isEqualToString:]: unrecognized selector sent to instance 0x7fff75d0fab0'
        *** First throw call stack:
        (
                0   CoreFoundation                      0x00007fff8fca9716 __exceptionPreprocess + 198
                1   libobjc.A.dylib                     0x00007fff8dc63470 objc_exception_throw + 43
                2   CoreFoundation                      0x00007fff8fd3fd5a -[NSObject(NSObject) doesNotRecognizeSelector:] + 186
                3   CoreFoundation                      0x00007fff8fc97c3e ___forwarding___ + 414
                4   CoreFoundation                      0x00007fff8fc97a28 _CF_forwarding_prep_0 + 232
                5   Dock                                0x000000010da92786 Dock + 681862
                6   Dock                                0x000000010d9f10b2 Dock + 20658
                7   Dock                                0x000000010dab9aed Dock + 842477
                8   libdyld.dylib                       0x00007fff852c17e1 start + 0
        )
Jul 27 08:42:35 champagne com.apple.launchd.peruser.501[267] (com.apple.Dock.agent[423]): Job appears to have crashed: Abort trap: 6
Jul 27 08:42:35 champagne com.apple.launchd.peruser.501[267] (com.apple.Dock.agent): Throttling respawn: Will start in 1 seconds
Jul 27 08:42:35 champagne.local ReportCrash[302]: Saved crash report for Dock[423] version 1.8 (1168) to /Users/jtimberman/Library/Logs/DiagnosticReports/Dock_2012-07-27-084235_champagne.crash

This happened every second, constantly crashing and restarting, and generating a new crash report. What is the problem? Well, let’s look at one of the reports:

Process:         Dock [21816]
Path:            /System/Library/CoreServices/Dock.app/Contents/MacOS/Dock
Identifier:      Dock
Version:         1.8 (1168)
Code Type:       X86-64 (Native)
Parent Process:  launchd [1039]
User ID:         501

Date/Time:       2012-07-27 11:26:58.819 -0600
OS Version:      Mac OS X 10.8 (12A269)
Report Version:  10

Crashed Thread:  0  Dispatch queue: com.apple.main-thread

Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000

Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFBoolean isEqualToString:]: unrecognized selector sent to instance 0x7fff7b2a2ab0'
abort() called
terminate called throwing an exception

Application Specific Backtrace 1:
0   CoreFoundation                      0x00007fff8cd77716 __exceptionPreprocess + 198
1   libobjc.A.dylib                     0x00007fff8e6df470 objc_exception_throw + 43
2   CoreFoundation                      0x00007fff8ce0dd5a -[NSObject(NSObject) doesNotRecognizeSelector:] + 186
3   CoreFoundation                      0x00007fff8cd65c3e ___forwarding___ + 414
4   CoreFoundation                      0x00007fff8cd65a28 _CF_forwarding_prep_0 + 232
5   Dock                                0x000000010d1b6786 Dock + 681862
6   Dock                                0x000000010d1150b2 Dock + 20658
7   Dock                                0x000000010d1ddaed Dock + 842477
8   libdyld.dylib                       0x00007fff8d7827e1 start + 0

I’ll spare you the detail of the rest of the file. Suffice to say, it is less informative than one might want for troubleshooting the issue. I (foolishly) spent about an hour and a half trying to get to the root of the problem. I found out that the problem was isolated to my own user, and something between ~/Library/Preferences and ~/Library/Application Support. I’m not sure what the problem was - I eventually decided to just do this:

sudo rm -rf ~/Library/Preferences ~/Library/Application\ Support

Then I logged back in, and everything was well. I have a backup from before the upgrade (yay, Time Machine!), so I wasn't concerned with losing anything important, and I knew that Chef would bring back most of my settings anyway.

The first thing I did was, of course, run /usr/bin/chef-client. This worked great - until the Dock was restarted as part of my recipes. The symptom was the same as before: the desktop went light grey, the Dock wasn't running, and launchd was respawning it continually, with the same errors as above.

I decided to have some quality time with my configuration and get to the bottom of the problem. I went through all the settings I modify through recipe[mac_os_x::settings] attributes, and did a careful comparison of manual settings through the OS X system preferences, and the files changed in ~/Library/Preferences.

Side note: This handy hint comes from Ben Bleything:

cd ~/Library/Preferences
git init
git add .
git commit -m 'initial commit'

Then, after you make a change in System Preferences, you can use git status to see which plist files were updated. Of course, most of the plist files are binary, so you can't really diff them, but the filename will indicate which domain to use with the defaults(1) command. Handy, thanks Ben! End side note.

Anyway, it took some time, but I tuned all my configuration for the things I wanted to ensure happened on any new system, and nothing else. I believe what happened is that some setting I had is no longer supported on Mountain Lion and its presence makes Dock.app grumpy, but that is purely speculation. The end result now, though, is that I have a pretty sane set of configuration that is automatically applied in a more data driven way, and I’m not using settings that I don’t know 100% what they do.

That aside, Mountain Lion is nice so far. The GCC Installer for 10.7 seems to be working just fine, though I haven’t tried installing a new Ruby under it yet. I am looking forward to wider use and adoption of the Notification Center as a replacement for Growl.

I hope this post is helpful in some way. Unfortunately I don’t have an answer to the Dock crash problem itself, but I have now remedied the issue for my own use. I’m going to write a new post about how I’m managing my workstations, to bring the information posted previously up to date, so stay tuned.

Bootstrapping the Infrastructure Coders Meetup

Earlier this year, David Lutz and I were discussing the lack of an infrastructure as code meetup in Melbourne. We sat down and mapped out our vision for the meetup:

  • Regular meetup - Monthly meetups, held in the second week of every month.
  • Technology agnostic - No preference on tools. We want all conversations, from concept to implementation.
  • Fresh, relevant content - Being technology agnostic keeps the content fresh, but we also had to ensure it stays relevant to the meetup.
  • Interesting venues - Bad venues can break a meetup. Ensure that the location is comfortable and central to the members.
  • Minimal sponsorship - Sponsorship is great, but it doesn’t mean editorial control. We will accept sponsors, however no sales or marketing talks.

Having established our vision, we needed to prove that Melburnians wanted such a meetup, so we created an Infrastructure Coders meetup and promoted it via Twitter. We had a great response, but it wasn't enough. We approached Evan Bottcher, organiser of DevOps Melbourne, for a short speaker slot to promote Infrastructure Coders, and within a few days we had doubled our membership.

David and I soon realised we had reached critical mass, so we discussed how to host the meetup. We needed a hosting strategy that kept the meetup free for members; since we were bootstrapping the meetup ourselves, we decided to find a host for each meetup. The host would be an organisation in Melbourne that recognised the relevance of infrastructure as code. The value exchange is simple: we organise the meetup and the speakers; they provide food, drinks, and a space. Hosts are given a speaking spot, provided the talk is on topic, and it is an opportunity to promote their company. We needed to test this concept, so I approached realestate.com.au to host the inaugural meetup.

The date was set, drinks were purchased, and food was ordered. The first meetup was small and informal, so we took the opportunity to have everyone introduce themselves and say what they wanted out of Infrastructure Coders. Afterwards we retired to the kitchen for dinner, where discussions on infrastructure as code continued. We marked the meetup a success, and David immediately organised the second host, 99designs.

So what did we learn from this experience?

  • Have a vision - What is the goal of the meetup? Where do you want to take it?
  • Know your audience - What does your audience want? What will they take away from your meetup?
  • Validate your meetup - Create an online space where people can register their interest. We used Meetup.com (pricing available here), but other online event tools, such as Eventbrite would work too.
  • Market your meetup - Twitter is a great way of getting the word out. Register an account for your meetup and decide on a hashtag. Go to other meetups and promote your meetup.
  • Gather feedback - Feedback will allow the meetup to improve organically with your audience. We have had some great feedback; our members really enjoyed going into the workplaces of Melbourne companies. This also allowed the employees of those organisations to stay back and listen to a few talks before heading home.

From the initial concept to now, we have hosted four meetups, with two more scheduled for the upcoming months, and I am in discussions with organisations for meetups that will book us out until early next year. In addition, I have had discussions with Scott Lowe about starting Infrastructure Coders Denver.

If you are interested in hosting Infrastructure Coders or starting a new meetup, please get in touch.

Autostarted Services

It is quite common in Debian and Ubuntu that when installing a package that provides a daemon, said daemon is started by the init script(s) included in the package. This is a matter of Debian Policy, though I don’t interpret that section to literally mean it is required. However, it is common enough practice that several people have asked (or ranted) about the topic.

The main issue of course is that the default configuration for the software being installed may not be appropriate before starting up the service and making it available on the network. Users of other Linux distributions may be smugly smirking as their distribution doesn’t start the service on package installation.

This post isn’t about that.

Instead, this post describes how this problem is resolved using configuration management, specifically Chef. I'm also going to discuss a couple of nuances about service management, so watch carefully.

For the example service I’m going to use memcached, from the memcached package. It is started on package installation as demonstrated:

vagrant@precise-housepub:~$ sudo apt-get install memcached
Setting up memcached (1.4.13-0ubuntu2) ...
Starting memcached: memcached.
vagrant@precise-housepub:~$ service memcached status
 * memcached is running
vagrant@precise-housepub:~$ ps awux | grep memcached
 memcache 15176  0.0  0.3 323212  1180 ?        Sl   04:32   0:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1

As we can see, the memcached service is started. Of course, it is using the default configuration, which means that it has a very small memory size, and listens on localhost. While the recipe would be very simple:

package "memcached"

This wouldn’t be very useful for discussion, or practical use purposes. For now, I’m going to post the entire recipe I’m going to discuss, and then break it down.

node.set['memcached']['memory_max'] = node['memory']['total'].to_i / 1024 * 0.65

package "memcached"

service "memcached" do
  supports :restart => true, :status => true
  action :enable
end

template "/etc/memcached.conf" do
  source "memcached.conf.erb"
  owner "root"
  group "root"
  mode 00644
  notifies :restart, "service[memcached]"
  variables(
    :memory_max => node['memcached']['memory_max'],
    :ip_addr => node['ipaddress']
  )
end

service "memcached" do
  action :start
end

This recipe is fairly straightforward. First, it sets a node attribute based on a calculation of the amount of memory installed in the system. Then, it will install the memcached package. This of course will start up the service with the unsuitable defaults already discussed.

The first service resource occurrence makes sure that the service is enabled. This is the default behavior of the package manager, but it also gives a clear indication of the recipe's intention. Next, the configuration file is managed. The exact content of the memcached.conf.erb file isn't particularly important. Let us presume that the variables passed in are what we care about - that we want to use 65% of the system's total memory for memcached, and listen on the default IP address. Maybe other tuning is happening, maybe not. Of course, when the configuration is updated, we need to notify the memcached service to restart.
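
For illustration, a minimal memcached.conf.erb might look like the following - a sketch only, using Debian's one-option-per-line config format; the @memory_max and @ip_addr instance variables come from the template resource's variables above:

# memcached.conf.erb - rendered by the template resource in the recipe
-d
logfile /var/log/memcached.log
-m <%= @memory_max.to_i %>
-p 11211
-u memcache
-l <%= @ip_addr %>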

Finally, the memcached service is started if it is not already running. This occurs at the end to help remedy an issue where the service might have been halted, or the configuration file was rendered incorrectly (a typo?), so that we can correct such configuration problems with Chef in a single subsequent run. This uses a feature of Chef where resources can be declared multiple times with different actions (or if desired, parameters).

The first time this recipe is run on a node, memcached will be installed, started, configured, restarted. We’re not aiming to prevent the auto-start from occurring at all, but we do automate the additional steps required for handling that easily.