Respect can be a local currency

In the IT industry we are reputed to be serial job hoppers. While this may seem a little unfair, if it applies to you then you should consider where you’re spending your limited additional time and effort. First, a disclaimer: you need to invest enough time and effort into your current job to stay employed.

Now that’s out of the way, let’s look at our normal days. All those extra hours and hard work you put in every day? That’s all local currency. In the best case your current employer and co-workers will appreciate it, you’ll be recognised for your outcomes and ability, and hopefully be considered an integral part of the team. But that’s almost as far as it goes. No one outside your employer, and depending on its size and structure maybe not even everyone within it, will ever know that you pulled a 70-hour week to get that one release done and dusted or stepped up and handled a Sunday emergency. A small part may transfer into the wider world, typically as references, LinkedIn praise and the like, but most of it won’t go with you when you change roles.

If you’re someone who likes to change jobs, whether between short permanent roles or contracts, you should carefully consider the balance between local and remote respect. Writing blog posts and articles, releasing open source projects, and giving presentations all have value in the wider world as well as, hopefully, at work, and may serve you better in reaching your career goals. Some companies are wonderful at unifying these two threads, but at the end of the day it’s your career and you need to deliberately weigh the options.

All of those possible career value-adds eat into your time, and not everyone is in a position to do all or even some of them, but where you can, it helps to build up a portfolio of subjects larger than your day job and makes future interviews more about culture than demonstrating technical minutiae. Nothing beats a pre-warmed audience, especially one that already uses your code or reads your blog.

This has been on my mind recently as my working hours creep up and my personal projects wither, and I think it’s something worth taking a moment to deliberately consider every few quarters.

Accessing an iPad’s file system from Linux

Despite using Linux on pretty much every computer I’ve owned for the last 20 years, I’ve made an exception when it comes to tablet devices and adopted an iPad into my life as a commute-friendly “source of all books.” Over time it’s been occasionally pressed into service as a camera, and I recently realised I’ve never backed any of those photos up. “That’s something easy to remedy,” I naively thought, as I plugged my iPad into a laptop and watched as it failed to appear as a block device.

While there are many pages on the internet that explain parts of the process of accessing your iPad’s file system from Linux, it was awkward enough to piece together that I decided to summarise my own commands in this post for future me. I used the following commands on a Fedora 28 install to access an iPad Air 2.

First add the software needed to make the connection work:

    # install the required packages (on fedora)
    sudo dnf install ifuse libimobiledevice-utils

Once this is installed, unlock the iPad and run idevicepair to pair the iPad with your Linux host:
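    # pair the device; unlock the iPad and tap "Trust" if prompted
    idevicepair pair

You should see a message saying that pairing was successful. Now that we have access to the device, let’s get at its file system. Create the mount point and make the current user its owner: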

    sudo install -d /mnt/ipad -o $USER

Finally, mount the iPad so we can access its file system:

    ifuse /mnt/ipad

    ls -alh /mnt/ipad/

If this fails, ensure the fuse kernel module is loaded by running lsmod, and run modprobe fuse if it isn’t. Once you’ve finished exploring, don’t forget to umount /mnt/ipad to release the iPad.
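Put together, the module check and the clean-up look something like this:

    # confirm the fuse kernel module is loaded, loading it if needed
    lsmod | grep fuse || sudo modprobe fuse

    # release the iPad once you're done exploring
    umount /mnt/ipad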

Slightly Shorter Meetings

A few jobs ago, as the number of daily meetings increased, I picked up a tiny meeting tweak that I’ve carried with me and deployed at each place I’ve worked since: end all meetings five minutes early. Instead of half past, end at 25 past; instead of on the hour (complex maths ahead), end at 55.

My reasoning is simple and selfish: I hate being late for things. This approach gives people time to get to their next meeting.

Rediscovering Age of Kings

About a year ago I decided it’d been long enough since I last wasted significant amounts of time playing computer games that I could buy a gaming machine, play for a sensible amount of time, and not impact the other demands on my time. I looked at all of the current generation consoles and, to be honest, I was put off by the price of the games. I’m aware of the Steam sale, and considering it’s been a decade since I played anything seriously (I still miss you, Left 4 Dead 2), my plan was to quickly recoup the extra cost of a gaming PC by sticking to the best games of a few years ago.

Other than obsessively 100-percenting a handful of Lego games (Lego Spider-Man! Lego Quasar!) I’ve not really played anything new. Instead I have an overly powerful, at least for its current usage, machine that is now essentially for Age of Empires II. I had fond memories of the game from when I was a kid, and after looking up a few things about it I discovered that there’s actually a seriously skilled community keeping the game alive.

I’m only a passive observer (I played one online game and my god was it embarrassing) but there’s an amazing amount of Twitch content from a number of community games and even sponsored competitions. Most of the material I’ve seen has been cast by Zero Empires or T90Official, and there is currently the Escape Champions League tournament (with a 60k USD prize) showcasing some amazing team play. It’s great to see such an awesome old game still going strong.

The 4PM stand-up

I’m not a morning person. I never have been, and I doubt it’ll suddenly become one of my defining characteristics. In light of this I’ve always disliked having the daily stand-up first thing in the morning; instead, over the years, I’ve come to much prefer having it at about 4PM.

A late afternoon stand-up isn’t a common thing. Some people absolutely hate the idea, and with no scientific studies to back me up I’m essentially just stating an opinion, but I do have a few reasons.

People are sometimes late in the mornings. Having the stand-up at the very start of the day means that anyone having issues getting to work, dropping the kids off at school or dealing with tube delays for example, will probably miss it. When things are slightly off, having the added pressure of your team all standing there with Trello up as you stumble in, soaked and stressed, doesn’t exactly give the best opening to the day.

My second main point is the lack of situational awareness first thing in the morning. A lot of my day will change based on what the rest of the departments are doing. To understand what impact that will have, it helps to allow some time for other people to start disseminating information. Did we have a small on-call issue last night? Is anyone off sick on my team, or on the ones they deal with? Is there a security alert for Nginx?

By having my stand-ups at a much later point, such as 4PM, all the urgent issues have normally been raised at the start of the day and, hopefully, dealt with. People know about unusual circumstances and who’s not in. At the stand-up itself people are less aspirational and actually get to talk about what they’ve done, not what they intended to do, and I’ve still got some time to try and get anything blocked sorted before the next day. Later sessions can also work better if you’re dealing with Americans: it’s not too early to have to deal with them, and the time zones sync up better. You could rightly say, “They’re having their stand-ups first thing in the morning!” and you’d be right, but they have bear claws available (thanks ckolos!), so it all balances out in the end.

There are downsides to a later time. Team-wide issues might stay hidden for longer in the morning, people might leave early to pick the kids up, and some people will find the later slot more disruptive to their afternoon flow. It’s not going to be for everyone, but if a morning slot isn’t working for the team then maybe it’s time to shake things up a little and try a later time. Maybe 4PM.

Some talk submission thoughts

The summer conference submission season is slowly subsiding, and after reading through a combined total of a few thousand submissions I’ve got some hastily compiled thoughts. But before we get started, a disclaimer: I don’t publicly present. My views on this are from the perspective of a submission reviewer and audience member. And remember, we want to say yes. We have slots to fill and there’s nothing more satisfying than giving a new speaker a chance and seeing the feedback consist of nothing but 10s. Hopefully some of these points will help me say yes to you in the future.

Firstly I’ll address one of the most common issues, even if it’s not a completely fair one: people submitting on the subject their employer focuses on. As an organiser you want your speakers to have solid and wide experience of their chosen topic, and it’s often easier to find that depth in people who live and breathe a certain thing day in and day out. However, it’s also easy to submit what appears to be a vendor sales pitch. With non-anonymised submissions there will always be a moment of “This company has a product / service in that area. Is this a sales pitch?” Audiences have paid for their tickets, and being trapped for a 45-minute white paper spiel is a sure way to fill the Twitter stream with complaints.

To balance that, I’m much more careful when dealing with those kinds of submissions. Despite my defensive watchfulness there are things you can do to make it easier to say yes. If the talk is unrelated to your paid work but in the same industry, say so. You should also state how the talk relates to your product. Is it a feature overview for enterprise customers or all generic theory anyone can use? Be explicit about how much of the talk is product specific: “20 minutes on the principles, 10 on the open source offerings and 10 on the enterprise product additions” might not be exactly what I want to see, but it’s better than my assumption. I should also note that no matter how much it hurts your chances, you should be honest. Event organisers chat. A lot of the Velocity program chairs know each other outside of work, there’s a lot of crossover between DevOpsDays events, and London isn’t that big. If you’re given the benefit of the doubt and you were less than honest, then good luck in the future. As an aside, this also applies to sponsors. We know who’s a joy to deal with and who’s going to keep us dangling for 8 months.

On to my next bugbear: submissions that include things like “8 solutions to solving pipeline problems.” If you have a number in your submission title or introduction and don’t tell me what those items are in the body of the submission, I’ll assume you don’t know either. Context and content are immensely important in submissions, and it’s very hard to rate a talk highly with no actual explanation of what it’s covering. If you say “The six deadly secrets of the monkey king” then you’d better list those six secrets, with a little context on each, or expect to be dropped a point or three. The organisers probably won’t be in the session, and without enough context to know what the audience will be seeing, neither will you.

My third personal catch is introducing a new tool at a big event. Unless you’re someone like HashiCorp or AWS, you need to be realistic about your impact. I will google technologies and programs I don’t recognise in submissions, and if the entire result set is your GitHub page and some Google Groups issues then it’s probably not ready for one of the bigger events. Instead, start at a user group or two, write a couple of blog posts, and then maybe do something bigger at a site like DZone or The New Stack. Build some buzz and presence so I can tell that people are adopting it and finding merit. There’s often an inadvertent benefit to this: a lot of user groups record and upload their sessions, and this is a great help after the anonymised stage of the reviews. Being able to see someone present, and know that they can manage an audience and don’t look like they’re about to break into tears for 25 minutes, is reassuring, and a great presentation style can help boost your submission.

Other than my personal idiosyncrasies, there are a few things you should always consider. What’s the audience getting out of this? Why are you the person to give it to them? What’s the actionable outcome from the session? You don’t have to be a senior Google employee, but you do need to have an angle on the material. This is especially true on subjects like career paths or health issues, where it’s easy to confuse personal anecdotes with data. Does your employer have evangelists or advocates who spend a large amount of their time presenting or reviewing submissions? If so, reach out and ask them for a read-through. It’s in their interests not to see 10 submissions from the same company all get rejected for not having enough information to be progressed. I wouldn’t normally single someone out, but if, as an example, you work for Microsoft and are submitting to a conference, especially a DevOpsDays, and you’ve not asked Bridget Kromhout to review your submission, then you’re missing a massive opportunity. She’s seen everything get submitted at least once and can nearly always find something constructive to improve. There’s probably a similar person at many large tech companies, and getting their opinion will almost always help the process.

In general it’s a pleasure to read so many thoughtful submissions, but with just a little more effort in the right places it becomes a lot easier to get the reviewers to say yes. And then comes the really difficult part for us.

pre-commit hooks and terraform - a safety net for your repositories

I’m the only infrastructure person on a number of my projects and it’s sometimes difficult to find someone to review pull requests. So, in self-defence, I’ve adopted git pre-commit hooks as a way to ensure I don’t make certain tedious mistakes and burn through people’s time and goodwill. In this post we’ll look at how pre-commit and terraform can be combined.

pre-commit is “A framework for managing and maintaining multi-language pre-commit hooks” that has a comprehensive selection of community-written extensions. The extension at the core of this post is pre-commit-terraform, which provides all the basic functionality you’ll need.

Before we start you’ll need to install pre-commit itself. You can do this via your package manager of choice; I like to run all my Python tools inside a virtualenv to help keep the versions isolated.

$ pip install pre-commit --upgrade
Successfully installed pre-commit-1.10.4
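If you want to follow the virtualenv route too, it only takes a couple of extra commands before the install (a sketch; the path is just a habit of mine):

# create and activate an isolated environment first
$ python3 -m venv ~/.venvs/pre-commit
$ source ~/.venvs/pre-commit/bin/activate
$ pip install pre-commit --upgrade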

To keep the examples realistic I’m going to add the pre-commit hook to my Terraform SNS topic module, mostly because I need it on a new project and I want to resolve the issue raised against it.

# repo cloning preamble
git clone git@github.com:deanwilson/tf_sns_email.git
cd tf_sns_email/
git checkout -b add_precommit

With all the preamble done we’ll start with the simplest thing possible and build from there. First we add a basic .pre-commit-config.yaml file to the root of the repository and enable the terraform fmt hook. This hook ensures all our terraform code matches what would be produced by running terraform fmt over the codebase.

cat <<EOF > .pre-commit-config.yaml
- repo: git://github.com/antonbabenko/pre-commit-terraform
  rev: v1.7.3
  hooks:
    - id: terraform_fmt
EOF

We then install the pre-commit hook within this repo so it can start to provide our safety net.

$ pre-commit install
pre-commit installed at /tmp/tf_sns_email/.git/hooks/pre-commit

Let the pain commence! We can now run pre-commit over the repository and see what’s wrong.

$ pre-commit run --all-files
[INFO] Initializing environment for git://github.com/antonbabenko/pre-commit-terraform.
Terraform fmt............................................................Failed
hookid: terraform_fmt

Files were modified by this hook. Additional output:

main.tf
outputs.tf
variables.tf

So, what’s wrong? Only everything. A quick git diff shows that it’s not actually terrible: my indentation doesn’t match that expected by terraform fmt, so we accept the changes and commit them. It’s worth adding .pre-commit-config.yaml too, to ensure anyone else working on this branch gets the same pre-commit checks. Once the config file is committed you should never again be able to commit incorrectly formatted code, as the pre-commit hook will prevent it from getting that far.
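In practice that’s just a normal commit, something like:

$ git add .pre-commit-config.yaml main.tf outputs.tf variables.tf
$ git commit -m 'Run terraform fmt and add the pre-commit config'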

A second run of the hook and we’re back in a good state.

$ pre-commit run --all-files
Terraform fmt..............Passed

The first base is covered, so let’s get a little more daring and ensure our terraform is valid as well as nicely formatted. This functionality is only a single line of config away, as the pre-commit extension does all of the work for us:

cat <<EOF >> .pre-commit-config.yaml
    - id: terraform_validate_with_variables
EOF

This line of config enables another of the hooks, one that ensures all terraform files are valid and that all variables are set. If you have more of a module than a project and aren’t supplying all the possible variables, you can change terraform_validate_with_variables to terraform_validate_no_variables and it will be much more lenient.

With the new config in place we rerun the hooks and prepare to be disappointed.

> pre-commit run --all-files
Terraform fmt..................................Passed
Terraform validate with variables..............Failed
hookid: terraform_validate_with_variables


Error: 2 error(s) occurred:

* provider.template: no suitable version installed
  version requirements: "(any version)"
  versions installed: none
* provider.aws: no suitable version installed
  version requirements: "(any version)"
  versions installed: none

And that shows how long it’s been since I’ve used this module; it predates the provider extraction work. Fixing these issues requires adding the providers and a new variable (aws_region) to allow specification of the AWS region, along with some defaults.
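The additions themselves look roughly like this (a sketch of the shape of the fix; the region default is an assumption, and the real change is in the pull request linked at the end):

# declare the providers the module uses
provider "aws" {
  region = "${var.aws_region}"
}

provider "template" {
  version = "1.0.0"
}

# let callers specify the AWS region rather than assuming one
variable "aws_region" {
  description = "The AWS region to create resources in"
  default     = "eu-west-1"
}

With those in place the pre-commit hook still fails, this time because the providers haven’t been downloaded, but that’s an easy one to resolve: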

...
* provider.template: no suitable version installed
  version requirements: "1.0.0"
  versions installed: none
...

> terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "template" (1.0.0)...
- Downloading plugin for provider "aws" (1.30.0)...

One more pre-commit run and we’re in a solid starting state.

Terraform fmt.............................Passed
Terraform validate with variables.........Passed

With all the basics covered we can go a little further and mix in the magic of terraform-docs too, by adding another line to the pre-commit config -

cat <<EOF >> .pre-commit-config.yaml
    - id: terraform_docs
EOF

And adding a placeholder anywhere in the README.md -

+### Module inputs and outputs
+
+<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
+<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

terraform-docs will be invoked and will add generated documentation for all of the variables and outputs to the README. If they ever change you’ll need to review and commit the differences, but the hooks will stop you from ever going out of sync. Now this happens automatically I can remove the manually added, and error-prone, documentation for variables and outputs. And be shamed into adding some useful descriptions.

pre-commit hooks will never replace a competent pull request reviewer, but they help ensure basic mistakes are never made and allow your peers to focus on the important parts of the code, like structure and intent, rather than formatting and documentation consistency. All of the code changes made in this post can be seen in the Add precommit pull request.

Managing AWS Default VPC Security Groups with Terraform

When it comes to Amazon Web Services support, Terraform has coverage that’s second to none. It includes most of Amazon’s current services, rapidly adds newly released ones, and even helps granularise existing resources by adding terraform-specific extensions for things like individual rules with aws_security_group_rule. This awesome coverage makes it even more jarring when you encounter one of the rare edge cases, such as VPC default security groups.

It’s worth taking a step back and thinking about how Terraform normally works. When you write code to manage a resource, terraform expects to fully own that resource’s life cycle. It will create it, ensure that changes made are correctly reflected (and remove those made manually), and destroy it when the resource’s code is removed from the .tf files. While this is fine for 99% of the supported Amazon resources, the VPC default security group is a little different.

Each Amazon Virtual Private Cloud (VPC) comes with a default security group. This is created by Amazon itself and cannot be deleted. Rather than leaving it unmanaged, which happens all too often, we can instead bring it under terraform’s control with the special aws_default_security_group resource. This resource works a little differently from most others: Terraform doesn’t attempt to create the group, instead adopting it under its management umbrella. This allows you to control which rules are placed in the default group and stops the “security group already exists” errors that will happen if you try to manage it as a normal group.

The terraform code to add the default VPC security group looks surprisingly normal:

resource "aws_vpc" "myvpc" {
  cidr_block = "10.2.0.0/16"
}

resource "aws_default_security_group" "default" {
  vpc_id = "${aws_vpc.myvpc.id}"

  # ... snip ...
  # security group rules can go here
}

One nice little tweak I’ve found useful is to customise the default security group to only allow inbound access on port 22 from my current (very static) IP address.

# use the swiss army knife http data source to get your IP
data "http" "my_local_ip" {
  url = "https://ipv4.icanhazip.com"
}

resource "aws_security_group_rule" "ssh_from_me" {
  type            = "ingress"
  from_port       = 22
  to_port         = 22
  protocol        = "tcp"
  cidr_blocks     = ["${chomp(data.http.my_local_ip.body)}/32"]

  security_group_id = "${aws_default_security_group.default.id}"
}

Automatic Terraform documentation with terraform-docs

Terraform code reuse leads to modules. Modules lead to variables and outputs. Variables and outputs lead to massive amounts of boilerplate documentation. terraform-docs lets you shortcut some of these steps and jump straight to consistent, easy to use, automatically generated documentation instead.

terraform-docs, a self-contained Go binary released by Segment, provides an efficient way to add documentation to your terraform code without requiring large changes to your workflow or massive amounts of additional boilerplate. In its simplest invocation it reads the descriptions provided in your variables and outputs and displays them on the command line:

/**
 *
 * A sample terraform file with a variable and output
 *
 */

variable "greeting" {
  type        = "string"
  description = "The string used as a greeting"
  default     = "hello"
}

output "introduction" {
  description = "The full, polite, introduction"
  value       = "${var.greeting} from terraform"
}

Running terraform-docs against this code produces:

A sample terraform file with a variable and output

  var.greeting (hello)
  The string used as a greeting

  output.introduction
  The full, polite, introduction

This basic usage makes it simpler to use existing code by presenting the official interface without over-burdening you with implementation details. Once you’ve added descriptions to your variables and outputs, something you should really already be doing, you can start to expose the documentation in other ways. By adding the markdown option -

terraform-docs markdown .

you can generate the docs in a GitHub-friendly way that provides an easy, web-based introduction to what your code accepts and returns. We used this quite heavily in the GOV.UK AWS repo and it’s been invaluable. The ability to browse an overview of the terraform code makes it simpler to determine if a specific module does what you actually need without requiring you to read all of the implementation.

(Screenshot: a collection of terraform variables and their defaults.)
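For the example above, the markdown output is roughly this shape (a sketch; the exact columns vary between terraform-docs versions):

## Inputs

| Name | Description | Type | Default |
|------|-------------|------|---------|
| greeting | The string used as a greeting | string | `hello` |

## Outputs

| Name | Description |
|------|-------------|
| introduction | The full, polite, introduction |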

When we first adopted terraform-docs we hit issues with the code being updated without the documentation changing to match it. We soon settled on using git pre-commit hooks, such as the terraform-docs githook script by Laura Martin or the heavy-handed GOV.UK update-docs script. Once we had these in place the little discrepancies stopped slipping through and the reference documentation became a lot more trusted.

As an aside, if you plan on using terraform-docs as part of your automated continuous integration pipeline, you’ll probably want to create a terraform-docs package. I personally use FPM Cookery for this and it’s been an easy win so far.

I’ve become a big fan of terraform-docs and it’s great to see such a self-contained tool making such a positive impact on the terraform ecosystem. If you’re writing tf code for consumption by more than just yourself (and even then) it’s well worth a second look.

Automatic datasource configuration with Grafana 5

When I first started my Prometheus experiments with docker-compose, one of the most awkward parts of the process, especially to document, was the manual steps required to click around the Grafana dashboard in order to add the Prometheus datasource. Thanks to the wonderful people behind Grafana, there has been a push in the newest major version, 5 at the time of writing, to make Grafana easier to automate. And it really does pay off.

Instead of forcing you to load the UI and play clicky clicky games with vague instructions to go here, and then the tab on the left, no, the other left, down a bit… you can now configure the data source with a YAML file that’s loaded on startup.

# from datasource.yaml
apiVersion: 1

datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  isDefault: true
  url: http://prometheus:9090
  # don't set this to true in production
  editable: true

Because I’m using this code base in a tinkering lab I set editable to true, which allows me to make ad hoc changes. In production you’d want to set this to false so people can’t accidentally break your backing store.

It only takes a little code to link everything together: add the config file and expose it to the container. You can see all the changes required in the Upgrade grafana and configure datasource via a YAML file pull request. Getting the exact YAML syntax right, and confusing myself over access proxy vs direct, was the hardest part.
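In my case the wiring is essentially one volume mount in the docker-compose file. A sketch (the image tag and local file path are assumptions):

# fragment of docker-compose.yml
services:
  grafana:
    image: grafana/grafana:5.2.4
    volumes:
      # Grafana 5 reads datasource provisioning files from this directory
      - ./datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml

It’s only a single step along the way to a more automation-friendly Grafana, but it is an important one, and a positive example that they’re heading in the right direction.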