AWS security audits with Scout2

Inspired by a link in the always excellent Last Week in AWS I decided to investigate Scout2, a “Security auditing tool for AWS environments”. Scout2 is a command line program, written in Python, that runs against your AWS account, queries your configuration data and presents common issues and misconfigurations via a set of local HTML files.

The dashboard itself is simple, but effective, and displays a nice overview of all the checks Scout2 ran.

Screen shot of the Scout2 dashboard

Installing the program and generating a report against your own infrastructure is remarkably easy and has no external requirements. In my experiments I decided to run it locally under a virtualenv against AWS using an existing profile.

cd /tmp
virtualenv scout
cd scout/
source bin/activate
pip install awsscout2

# set up your access here
Scout2 --profile <your profile name> --regions eu-west-1

In the above example I use a named profile from ~/.aws/credentials rather than specifying the values in environment variables. As an aside: I have two profiles defined for each of my AWS accounts, one with permissions to use all the list, read and describe functions but nothing that allows changes (which I used for this experiment), and another with more admin powers. If you’re running Scout2 in AWS you can use an IAM profile with the default Scout2 IAM policy.
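
For reference, a named profile is nothing more than a block in ~/.aws/credentials. A minimal sketch, with made up profile names and placeholder keys, looks like this:

[scout-readonly]
aws_access_key_id     = AKIA...EXAMPLE
aws_secret_access_key = <secret access key>

[admin]
aws_access_key_id     = AKIA...EXAMPLE
aws_secret_access_key = <secret access key>

You’d then point Scout2 at the read-only profile with --profile scout-readonly.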

Once you’ve run the tool there’s a pleasant little trick where the report is opened in your local web browser, unless you’re running under something like Jenkins, in which case you should specify --no-browser. Behind the dashboard there are per-service pages with the configs that require attention; here’s a peek at the IAM service in my experimentation VPC.

Scout2 IAM service dashboard

Although I’ve not tried to extend Scout2 yet, the default reports highlighted a couple of configuration details that I’ll have to think about, which shows it provides some immediate value. It’s been quite an easy tool to set up and run and I highly recommend taking it for a spin.

A Terraform equivalent to CloudFormation's AWS::NoValue?

Sometimes, when using an infrastructure as code tool like Terraform or CloudFormation, you only want to include a property on a resource under certain conditions, while always including the resource itself. In AWS CloudFormation there are a few CloudFormation Conditional Patterns that let you do this but, and this is the central point of this post, what’s the Terraform equivalent of using AWS::NoValue to remove a property?

Here’s an example of doing this in CloudFormation. If InProd is false the Iops property is completely removed from the resource. Not set to undef, no NULLs, simply not included at all.

    "MySQL" : {
      "Type" : "AWS::RDS::DBInstance",
      "DeletionPolicy" : "Snapshot",
      "Properties" : {
        ... snip ...
        "Iops" : {
          "Fn::If" : [ "InProd",
            "1000",
            { "Ref" : "AWS::NoValue" }
          ]
        }
        ... snip ...
      }
    }

While Terraform allows you to use the, um, ‘inventive’, count meta-parameter to control if an entire resource is present or not -

resource "aws_security_group_rule" "example" {
    count = "${var.create_rule}"
    ... snip ...
}

It doesn’t seem to have anything more fine-grained.

One example of when I’d want to use this is writing an RDS module. I want nearly all the resource properties to be present every time I use the module, but not all of them. I’d only want replicate_source_db or snapshot_identifier to be present when a certain variable was passed in. Here’s a horrific example of what I mean:

resource "aws_db_instance" "default" {
    ... snip ...
    # these properties are always present
    storage_type         = "gp2"
    parameter_group_name = "default.mysql5.6"
    # and then the optional one
    replicate_source_db = "${var.replication_primary | absent_if_null}"
    ... snip ...
}

But with a nicer syntax than that horrible made-up one above. Does anyone know how to do this? Do I need to write two nearly identical modules with one parameter different, or, slightly better, have two database resources, one with the extra parameter present, and use a count to choose between them? Help me Obi-internet! Is there a better way to do this?
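
The least bad workaround I’ve found so far is the second option: two near-identical resources with a conditional count deciding which one actually gets created. A rough sketch, with made up variable names and Terraform 0.8+ conditional syntax, looks something like this:

# only created when no replication primary is supplied
resource "aws_db_instance" "standalone" {
    count                = "${var.replication_primary == "" ? 1 : 0}"
    storage_type         = "gp2"
    parameter_group_name = "default.mysql5.6"
    # ... snip ...
}

# only created when a replication primary is supplied
resource "aws_db_instance" "replica" {
    count                = "${var.replication_primary == "" ? 0 : 1}"
    storage_type         = "gp2"
    parameter_group_name = "default.mysql5.6"
    replicate_source_db  = "${var.replication_primary}"
    # ... snip ...
}

It works, but every reference to the instance elsewhere then has to cope with two possible resource names, which is part of why I’m hoping someone knows a better answer.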

Refreshing a keyboard and mouse – 2017

After having some work done at home I recently found myself in need of both a new keyboard and a new mouse on very short notice. Also, wallpaper paste and electronics: not good friends. I’m very set in my ways when it comes to peripherals and over the years I’ve grown very fond of the combination of a Das Keyboard and, as a left-handed mouse user, a Microsoft IntelliMouse Optical.

The keyboard should’ve been an easy replacement; unfortunately a Das takes a few weeks to be delivered, and these days it’s inching closer and closer to the 200 GBP price point. The cheap plastic, dead-flesh-feeling standby I was stuck with was starting to annoy me, so I went for a browse through Amazon Prime’s next day delivery section and settled on a Cooler Master MasterKeys. You can see the two keyboards together here:

Photo of a Das Keyboard and a Cooler Master Masterkeys

The Cooler Master has a number of fancy features that I’ll probably never investigate but it does have nice Cherry Brown switches. They are comfortable to type on and make about as much noise as my old Das, which I think has Cherry Blue switches. I did start to investigate other options in a little more depth before I placed the order, but when keyboard reviews start talking about on-board CPU specs I zone out a little. It’s also half the price of the Das.

I’ve been using it for a week or so and currently have no complaints. Other than one evening coding with the keyboard backlight on full, which was bright enough to work by and should make on-call a little more pleasant for everyone else in the house, I’m using it as a solid, dumb keyboard.

Selecting a new mouse was more of an issue. In a nearly unforgivable move Microsoft stopped selling the IntelliMouse Optical quite a few years ago. I’ve always considered it to be the pinnacle of mouse technology (although I also consider all UIs after Windows 2000 to be superfluous, so I’m not to be trusted) and so I spent a chunk of time trying to hunt one down. The second hand market has stupidly high markups and the idea of using a second hand mouse was a little unsettling, so I had to find an alternative that could be used comfortably in the left hand.

The first attempt was a Logitech M220, which I bought on the recommendation of a left-handed friend. Who apparently has tiny, tiny hands. And bad taste in mice. I like a sharp click and the accompanying noise when I click; the M220’s button presses are very soft and squidgy with no real click sensation, and I found myself second-guessing whether the click had taken. It was also way too small for me to use comfortably. It felt like I was dragging most of my hand over the desk when I was using it. I very nearly surrendered and bought a Razer DeathAdder, the mouse I used to play games with quite a lot a few years ago, but the left-handed model seems to have a lot fewer features than the right-handed one, so I hesitated and asked a few groups of techies for recommendations. A couple of people, who were kind enough to measure their hands for me, suggested a Roccat Kova, which should be fine for either hand and has very good, community supplied drivers and config software for Linux.

I’ve put all three mice in one photo here. If you can’t see the Logitech one it’s because Ghost Rider is holding it.

Photo of an IntelliMouse and a Roccat Kova

The Roccat is a little smaller, has quite a few more buttons and has been very comfortable to use for the few weeks I’ve had it. I’ve tried to avoid getting too tweaky with it but I’ve remapped a few of the extra buttons to run certain commands and it’s been very solid, on or off a mouse mat. Some left handed mice are very uncomfortable for right handed users but I’ve had no complaints about the Roccat yet. I don’t know if it’ll last as long as the Intellimouse, which has seen nearly a decade of daily use, but it wasn’t too expensive, feels comfortable in use and means I can buy another one for the office.

I know this post might seem like a lot of words over something very trivial but if you’re going to use a few tools for 6-12 hours a day it can pay dividends to find decent ones, even if they cost a little more than the default plastic ones you get free with every PC.

I did consider ordering the newest model Das Keyboard for use in the office but then I noticed the ‘The Cloud Connected Keyboard’ tag line and removed it from my basket. I don’t even use a wireless keyboard, a cloud connected one… Really?

Testing multiple Puppet versions with TravisCI

When it comes to running automated tests of my public Puppet code TravisCI has long been my favourite solution. It’s essentially a zero infrastructure, second pair of eyes on all my changes. It also doesn’t have any of my local environment oddities and so provides a more realistic view of how my changes will impact users. I’ve had two Puppet testing scenarios pop up recently that turn out to be the same technical issue once you start exploring them: running tests against the Puppet version I use and support, and against other versions I’m not so worried about.

This use case came up because I have code written for Puppet 3 that I need to start migrating to Puppet 4 (and probably to Puppet 5 soon), and on the other hand I have code on Puppet 4 that I’d like to continue supporting on Puppet 3 until it becomes too much of a burden. While I can do the testing locally with overrides, rvm and gemfiles, I wanted the same behaviour on TravisCI.
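
For the local runs the same trick works via an environment variable and bundler; a rough sketch, assuming the Gemfile shown later in this post:

# install and test against a specific Puppet version locally
PUPPET_GEM_VERSION="~> 3.8.0" bundle install --path vendor/bundle
PUPPET_GEM_VERSION="~> 3.8.0" bundle exec rake spec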

It’s very easy to get started with TravisCI. Once you’ve signed up (probably with github auth) it only requires two quick steps to get going. The first step is to enable your repo on the TravisCI site.

Enable repo UI

You should then add a .travis.yml file to the repo itself. This contains the what and how of building and testing your code. You can see a very minimal example, that just runs rake spec with a specific ruby version, below:

---
language: ruby
rvm:
  - 2.1.0
script: "bundle exec rake spec"

This provides our basic safety net, but now we want to allow multiple versions of Puppet to be specified for testing. To support this we’ll modify our Gemfile to install a specific version of the puppet gem if an environment variable is passed in via the TravisCI build config; if it’s missing we’ll just install the newest and run our tests using that. The code that implements this, the last five lines in our sample, is the important part to note.

source 'https://rubygems.org'

group :development, :test do
  gem 'json'
  gem 'puppetlabs_spec_helper', '~> 1.1.1'
  gem 'rake', '~> 11.2.0'
  gem 'rspec', '~> 3.5.0'
  gem 'rubocop', '~> 0.47.1', require: false
end

if puppetversion = ENV['PUPPET_GEM_VERSION']
  gem 'puppet', puppetversion, :require => false
else
  gem 'puppet', :require => false
end

Now we’ve added this capability to the Gemfile we’ll modify our .travis.yml file to take advantage of it. Add an env array, with a version from each of the two major versions we want to test under, with the same variable name as we use in our Gemfile.

---
language: ruby
rvm:
  - 2.1.0
bundler_args: --without development
script: "bundle exec rake spec SPEC_OPTS='--format documentation'"
env:
  - PUPPET_GEM_VERSION="~> 3.8.0"
  - PUPPET_GEM_VERSION="~> 4.10.0"
notifications:
  email: dean.wilson@gmail.com

Now that our .travis.yml is getting a little more complicated you might want to lint it to confirm it’s valid. You can use the online TravisCI linter or install the TravisCI YAML gem and work offline. The example file above will trigger two separate builds when TravisCI receives the trigger from our change. If you want to explicitly test under two versions of Puppet, and fail the tests if anything breaks under either version, you are done. Congratulations!
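
As a sketch of the offline route, using the travis CLI gem (rather than the YAML gem mentioned above) and its lint subcommand:

gem install travis
travis lint .travis.yml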

If however you’d like to test against an older, best-effort but unsupported version, or to start testing a newer version you’re willing to accept failures from while you migrate (as long as the main version still passes), you’ll need to add another config option to your .travis.yml file - matrix.

matrix:
  allow_failures:
    - env: PUPPET_GEM_VERSION="~> 3.8.0"

In this case (in combination with the config file above) failures under Puppet 4 fail the build, but we allow, and essentially ignore, failures against Puppet 3 as we no longer explicitly support it. If we were planning a move to Puppet 5 we’d add its version here and even on builds that failed we’d start to collect information on what needs to be investigated and fixed while still ensuring our code passes tests under Puppet 4.
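
As a sketch of what that forward-looking config might look like (the Puppet 5 version constraint here is illustrative), only failures against Puppet 4 would break the build:

env:
  - PUPPET_GEM_VERSION="~> 3.8.0"
  - PUPPET_GEM_VERSION="~> 4.10.0"
  - PUPPET_GEM_VERSION="~> 5.0.0"
matrix:
  allow_failures:
    - env: PUPPET_GEM_VERSION="~> 3.8.0"
    - env: PUPPET_GEM_VERSION="~> 5.0.0"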

I’d also recommend adding an explicit fast_finish to your matrix config if you use allow_failures. This allows TravisCI to signal that the required tests have finished, even if the results of those allowed to fail are not yet known, as you don’t need them to know whether a run has been successful.

matrix:
  fast_finish: true
  allow_failures:
    - env: PUPPET_GEM_VERSION="~> 3.8.0"

Here’s an example of a build with Allowed Failures in the UI:

Build with allowed failures

Little ruby libraries – Testing with Timecop

When it comes to little known rubygems that help with my testing I’m a massive fan of the relatively unknown Timecop. It’s a well written, highly focused, gem that lets you control and manipulate the date and time returned by a number of ruby methods. In specs where testing requires certainty of ‘now’ it’s become my favoured first stop.

The puppet deprecate function is a good example of when I’ve needed this functionality. The spec scenarios should exercise a resource with the time set to before and after the deprecation time in separate tests. The two obvious options are to hard code the dates, which won’t work here as we’re black box testing the function, or to mock the calls, which is something Timecop excels at and saves you writing yourself.

require 'timecop'

# explicitly set the date.
Timecop.freeze(Time.local(2015, 1, 24))

...
  # success: we've explicitly set the date above to be before 2015-01-25
  # so this resource hasn't been deprecated
  should run.with_params('2015-01-25', 'Remove Foo at the end of the contract.')
...
  # failure: we're using a date older than that set in the freeze above
  # so we now deprecate the resource
  should run.with_params('2015-01-20', 'Trigger expiry')
...

# reset the time to the real now
Timecop.return

This allows us to pick an absolute point in time and use literal strings in our tests that relate to the point we’ve picked. No more intermediate variables with manually manipulated date objects to ensure we’re 7 days in the future or 30 days in the past. Removing this boilerplate code itself was a win for me. If you need to ensure all your specs run with the same time set you can call the freeze and return in the before and after methods.

before do
  # all tests will have this as their time
  Timecop.freeze(Time.local(1990))
end

after do
  # return to normal time after the tests have run
  Timecop.return
end

I’ve shown the basic, and for me most commonly used, functionality above, but there are a few helper methods that elevate Timecop from “I could quickly write that myself” to “this deserves a place in my Gemfile”. The ability to freeze time in the future with a simple Timecop.freeze(Date.today + 7) is handy, and the auto-returning block syntax is pure user experience refinement. The Timecop.scale function, which lets you define how much time passes for every real second, isn’t something you need every day, but when you do you’ll be very glad you don’t have to write it yourself.
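
A minimal sketch of those two helpers, with a made up script body:

require 'date'
require 'timecop'

# freeze seven days in the future just for this block; time snaps
# back to normal as soon as the block exits
Timecop.freeze(Date.today + 7) do
  puts Time.now
end

# an hour of 'time' now passes for every real second
Timecop.scale(3600)
puts Time.now

# put everything back to normal
Timecop.return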

Announcing multi_epp – Puppet function

As part of refreshing my old puppet modules I’ve started to convert some of my Puppet templates from the older ERB format to the newer, and hopefully safer, Embedded Puppet (EPP).

While it’s been a simple conversion in most cases, I did quickly find myself lacking the ability to select a template based on a hierarchy of facts, which I’ve previously used multitemplate to address. So I wrote a Puppet 4 version of multitemplate that wraps the native EPP function, adds matching lookup logic and then imaginatively called it multi_epp. You can see an example of it in use here:

class ssh::config {

  file { '/etc/ssh/sshd_config':
    ensure  => present,
    mode    => '0600',
    # note the array of files.
    content => multi_epp( [
                            "ssh/${::fqdn}.epp",
                            "ssh/${::domain}.epp",
                            'ssh/default_sshdconfig.epp',
                          ], {
                                'port'          => 22222,
                                'ListenAddress' => '0.0.0.0',
                          }),
  }

}

This was the first function I’ve written using the new Puppet 4 function API and in general it feels like an improvement over the previous one. The dispatch blocks and related functions encourage you to keep the individual sections of code quite small and isolated, but they will require some diligence to ensure you don’t duplicate a lot of nearly similar code between signatures. I also couldn’t quite do what I wanted (a repeating set of params followed by one optional) in the API, but I’ve worked around that by requiring all the files to check to be given as an array, which works but is a little icky. I’ve not gone full “all the shiny” yet and included things like function return values and types, but I can see myself converting some of my other functions over to gain the benefit of easier parameter checking and basic types.
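
For anyone who hasn’t seen the Puppet 4 function API yet, the dispatch style described above looks roughly like this. It’s a simplified, illustrative sketch rather than the actual multi_epp source, with the template lookup elided:

Puppet::Functions.create_function(:multi_epp) do
  # a single signature: an array of candidate templates plus an optional
  # hash of template variables - the array is the workaround for not being
  # able to express 'repeated params followed by one optional'
  dispatch :multi_epp do
    param 'Array[String]', :templates
    optional_param 'Hash', :variables
  end

  def multi_epp(templates, variables = {})
    # find the first template that resolves and render it with the
    # built in epp() function
    # ... snip ...
  end
end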

So what’s next on the path to EPP? For me it’ll be getting my “no ERB templates” puppet-lint check running cleanly over a few local modules and double checking I don’t slip back into old habits.

Non-intuitive downtime and possibly not lost sales

One of the things you’ll often read in web operation books is the idea that while you’re experiencing downtime your customers are fleeing in droves and taking their orders to your competitors out of frustration. However this isn’t always the truism that people take it for.

If your outages are rare, and your site is normally performant and easy to use (or has a monopoly), you’ll find this behaviour a lot less common than you’ve been told. Most people have a small set of sites they are comfortable using and have gradually built up trust and an order history with. This is especially true if you operate in certain niches, such as being the fashion site, or have a very strongly defined brand.

After a period of a few months of short but recurring outages we went back over our traffic logs and ran some queries to see how badly we’d been impacted and help us create our business case for more resources. The results were a little surprising for the more ‘conventional wisdom’ trusting members of the team.

Expected behaviour

Instead of seeing a reverse hockey stick graph of our customers deserting us in our hours of need before stabilising at a lower-than-before constant, we saw that while orders did drop off during production outages, as you’d expect from a dead system, as long as recovery times stayed in the range of minutes, and very rarely a small number of hours, the daily order volume and sales totals always bounced back to within a few percentage points of a normal day. In some cases we even saw brief periods of higher than usual levels as everyone finished their pending transactions as soon as we returned.

Actual behaviour

After witnessing this we had a few discussions and made some minor changes while waiting for the larger issues to be resolved. For example, one aspect to consider is that if you can architect your failures to help users preserve even some of their effort you heavily increase the odds of them finishing. Keeping services like baskets and wishlists active makes it increasingly likely they’ll return to complete their transaction with you. Once they’ve gone to the effort of finding their newest ‘must have’ you have a small number of grace points to spend while you’re getting everything back to normal, before they’ll discard their own time investment and move on.

It seems that as an industry we’ve managed to train our users to accept small amounts of failure, especially if your customers favour mobile devices on cellular networks. While I don’t want to try and convince you that downtime has no impact, I do think it’s worth going over the numbers after your incidents to see what the slightly longer term impact was and how far away from a normal day your recovery curve gets you.

I should also note that this doesn’t cover security issues. Those have very different knock on effects and are typically orders of magnitude worse.

Smaller Debian Docker tips – apt lists

One of the hidden gems of GitHub is Jess Frazelle’s Dockerfiles Repo, a collection of Dockerfiles for applications she runs in containers to keep her desktop clean and minimal. While reading the NMap Dockerfile I noticed a little bit of shell I’d not seen before.

I’ve included the file itself below. The line in question is && rm -rf /var/lib/apt/lists/*, a tiny bit of shell that does some additional cleanup once apt has installed the required packages.

FROM debian:stretch
LABEL maintainer "Jessie Frazelle <jess@linux.com>"

RUN apt-get update && apt-get install -y \
    nmap \
    --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

ENTRYPOINT [ "nmap" ]

Curiosity got the best of me and I decided to see how much of a saving that line provides. First I built the Docker image as Jess intended:

sudo docker build -t nmap-rm-lists -f Dockerfile-rm-lists .

> sudo docker images
REPOSITORY           TAG      IMAGE ID       CREATED             SIZE
nmap-rm-lists        latest   9a4a697649f9   10 seconds ago      131.1 MB

As you can see in the output this creates an image 131.1 MB in size. If we remove the rm line (and the continuation character from the line above) and rebuild the image we should see a larger image.

sudo docker build -t nmap-with-apt-lists -f Dockerfile-with-apt-lists .

...

> sudo docker images
REPOSITORY           TAG      IMAGE ID       CREATED              SIZE
nmap-with-apt-lists  latest   d8459f6f2b93   About a minute ago   146.6 MB

And indeed we do; the image is just over 10% larger without that little optimisation. That’s going to add up to quite a nice saving over a few dozen container images. While looking through some of the other code in that repo I saw mention of a debian:stretch-slim image, so I thought it was worth running an additional experiment with it as the base. Making the small change from FROM debian:stretch to FROM debian:stretch-slim in our Dockerfile, with the rm -rf /var/lib/apt/lists/* command also present, results in a much smaller image at just under 87 MB.
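
For reference, the Dockerfile for that run is identical apart from the base image; something like:

FROM debian:stretch-slim
LABEL maintainer "Jessie Frazelle <jess@linux.com>"

RUN apt-get update && apt-get install -y \
    nmap \
    --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

ENTRYPOINT [ "nmap" ]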

> sudo docker images
REPOSITORY           TAG      IMAGE ID       CREATED             SIZE
nmap-rm-lists-slim   latest   8fa72fad3929   About a minute ago  86.78 MB

For completeness (Hi Wes!) if we leave the lists in and use the debian:stretch-slim image we have a significantly larger image at 102 MB. This helps show that even with a smaller base image the removal of the apt list files is still well worth it.


REPOSITORY             TAG      IMAGE ID      CREATED        SIZE
nmap-with-lists-slim   latest   26e65d974ae6  8 seconds ago  102.2 MB

While an Alpine image would be even smaller it’s nice to see this kind of size saving on Debian based images that look a lot closer to what I’d normally run in my VMs.

Nicer Jenkins Views – Build Monitor Plugin

While migrating and upgrading an old install of Jenkins to version 2, the topic of adding some new views came up in conversation and the quite shiny Jenkins CI Build Monitor Plugin was suggested as a pretty, and quick to deploy, option.

Using some canned test jobs we did a manual deploy of the plugin, configured a view on our testing machine, and I have to say it looks as good, and as easily readable from a few desks away, as we’d hoped.

Screen shot of the Jenkins Build Monitor Plugin. Lots of green and red boxes

The next step is to apply the true utility test, leave it in place for a week or so and then remove it and see if anyone notices. If they do we’ll add some puppet scaffolding and roll it out to all the environments.
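
If it does survive the week the scaffolding should be tiny; assuming the community jenkins Puppet module, something along these lines would do it (the plugin name is the plugin’s ID, so worth double checking):

# assumes the jenkins module is already managing the master
jenkins::plugin { 'build-monitor-plugin': }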

Tales from the Script

A number of roles ago the operations and developer folk were blessed with a relatively inexperienced quality assurance department that were, to put it kindly, focused on manual exploratory testing. They were, to a person, incapable of writing any kind of automated testing, running third party tools or doing anything in a reproducible way. While we’ve all worked with people lacking in certain skills what makes this story one of my favourites is that none of us knew they couldn’t handle the work.

The manager of the QA team, someone I’d never have credited with the sheer audacity to pull off this long con, always came to our meetings with an earnest face and excuses about the failure of “The Script”. We, being insanely busy modern technical people, took this at face value; how would you run all the regression tests without a script? “There was a problem running the script”, “the newest changes to the script had caused regressions” and similar were always on the tip of their tongue and because the developers were under a lot of time pressure no deep investigations were done. Everyone was assumed to be doing their best and what a great QA manager they were in protecting their people from any fallout from the failures. On it went, all testing was done via “the script” and everything was again good. Or so we assumed.

In one of our recurring nightmare incident reviews, this one after something we’d previously covered had come back for the third time, a few of us began to get suspicious. We decided to build our own little response team and do some digging for the sake of everyone’s sanity. Now, this was before the days of GitHub and everyone being in one big team of sharing and mutual bonding; we knew we’d have to go rooting around other departments’ infrastructure to see what was going on. Over the course of the next few days the group targeted one of the more helpless QA engineers and began to help him with everything technical he needed. He had the most amazing, fully hand-held, on-boarding the department had ever seen and we, in little bits and pieces, began to pierce the veil of secrecy that was the QA team’s process.

One day, just before lunch, one of the senior developers involved in our investigation hit the mother lode. The QA engineer had paired with them on adding testing to “the script” for a new feature the developer had written, and suddenly they had a full understanding of the script and its architecture.

It was an Excel spreadsheet.

It was a massive, colour coded, interlinked Excel spreadsheet. Each row was a separate test path or page journey. Some rows were 40 fields of references to other rows to form one complete journey. Every time we did a release to staging they’d load up the Excel document from the network share and arrow key their way through row upon row of explicit instructions. Seeing it in use was like watching an insane cross between snake and minesweeper. Some of the cells were links to screen grabs of expected outputs and page layouts. Some of them had a red background to show steps that had recently failed. It was a horrific moment of shared confusion. A team of nearly forty testers had ended up building this monstrosity and running it for months. It was like opening up a word doc and having Cthulhu glare back at you. So we did the only thing we could think of, went to lunch and mostly sat in stunned silence.

And I almost forgot the best part of the story, the Excel spreadsheet? It was named “The_Script.old.xls”