AWS IoT Help Button

We recently moved into our new building at 1034 Wealthy in Grand Rapids. The new building is much larger than our old one, and I find myself running around much more and stationing myself in different areas, depending on what I am actively doing.

To help ensure I can provide timely hands-on help—particularly for our printer (which is a common source of problems)—I procured one of the new AWS IoT buttons and programmed it to page me when pushed.

AWS IoT Button Overview

The AWS IoT Button is a simple device consisting primarily of a push button, an LED indicator, and a WiFi card. It can be configured to connect to the AWS IoT (Internet of Things) service in order to deliver data about button pushes. The data consists primarily of the type of button push (single, double, or long), but it also includes the button’s serial number and the battery voltage. This data can then be acted upon to do any number of interesting things using AWS or calling out to other services.

AWS IoT Button


The AWS IoT Button requires Internet connectivity to communicate with the AWS IoT service. When configuring the button, you choose a WiFi SSID to connect to and provide the appropriate passphrase. The button also receives an ARN identifier, so that it can be uniquely referenced within the AWS ecosystem.


When configuring the AWS IoT Button, a new PKI certificate and private key are generated and uploaded to the button. This allows the button to communicate securely with AWS, and it allows AWS to validate the identity of that button. This becomes important when writing specific policies to enable button data to trigger events.

How I Use the AWS IoT Button

When someone presses the AWS IoT button, it kicks off a process which will simultaneously e-mail me, send me an SMS message, and show me a notification on Slack. The message contains a timestamp and an identifier for the button that was pressed (in case there are more such buttons deployed in the future).

I have set the expectation that if I am available, and not in a meeting or otherwise occupied, I will respond to a button push within five minutes. This is often much faster than someone would otherwise be able to locate me and request assistance. While there have been a few instances where people[1] have pushed the button simply to test my response time, I’ve had over a dozen instances of legitimate button pushes. In each of these cases, I was able to respond to the notification and assist with a printer problem in a timely fashion.

AWS IoT Help Button

I’m not sure how well this setup would scale, but for the moment, it is working well.

The AWS Service Flow


AWS IoT is a platform to allow devices to interact with AWS cloud services.

For AWS IoT Buttons, there is a device or “thing,” rule, certificate, and policy associated with each physical button. A “device” associates a button (by serial number) with a particular HTTP REST endpoint and MQTT topic. The “certificate,” uploaded to the button during configuration, is linked to a “device” and allows the button to securely submit data, while also authenticating it to AWS. The “rule” specifies how messages from the MQTT topic are used, defining what actions to take when a query matches a message. A “policy” authorizes a specific device to take AWS IoT actions, such as publishing to an MQTT topic.

In my setup, I added the endpoint information for my “device” to the AWS IoT button, which is linked to the specific “certificate” I uploaded, along with its private key:

  • REST API Endpoint:
  • MQTT Topic: iotbutton/G030JF055234P6KJ

I gave the “device” permission to publish to the MQTT topic:

  "Version": "2012-10-17",
  "Statement": [
      "Action": "iot:Publish",
      "Effect": "Allow",
      "Resource": "arn:aws:iot:us-west-2:979276642162:topic/iotbutton/G030JF055234P6KJ"

I specified that any messages received on the specific MQTT topic are forwarded to an AWS Lambda function:

  • Query String: SELECT * FROM 'iotbutton/G030JF055234P6KJ'
  • Action: Lambda Action; Function Name: AWS_IoT_Button_aogr_1_resource_area

In this case, the messages are JSON-formatted data representing events containing the button’s serial number, battery voltage, and type of button push:

  "serialNumber": "G030JF055234P6KJ",
  "batteryVoltage": "1568mV",
  "clickType": "SINGLE"

AWS Lambda

As I have written about before[2], AWS Lambda allows functions written in a few different languages (Python, Java, and Node.js) to be executed in response to events.

For my AWS IoT Button, I created a function to publish to an AWS SNS topic.

My Lambda function is quite simple:

// Node.JS 4.3 Runtime
const AWS = require('aws-sdk');
const SNS = new AWS.SNS({ apiVersion: '2010-03-31' });

function findTopicArn(topicName, cb) {
    // createTopic is idempotent: if the topic already exists, it returns its ARN.
    SNS.createTopic({ Name: topicName }, (err, data) => {
        if (err) console.log(err, err.stack);
        else cb(data.TopicArn);
    });
}

function publishToTopic(params, cb) {
    SNS.publish(params, (err, data) => {
        if (err) console.log(err, err.stack);
        else cb(data.MessageId);
    });
}

exports.handler = (event, context, callback) => {
    console.log('Received event:', event.clickType);
    var datetime = new Date();
    findTopicArn('aws-iot-button-sns-topic', (topicArn) => {
        console.log(`Publishing to topic ${topicArn}`);
        var params = {
            Message: `aogr_1_resource_area help request at ${datetime}`,
            Subject: `Help Request: aogr_1_resource_area`,
            TopicArn: topicArn
        };
        publishToTopic(params, (messageId) => {
            console.log(`Published to topic with messageId: ${messageId}`);
        });
    });
};

The function finds an SNS topic ARN by name, and then publishes a message to the topic. The event data is not actually passed along to SNS, as this Lambda function is unique to the AWS IoT Button that I am using (specified in the “rule” for the “device” in AWS IoT), so I just send a custom message crafted for the purposes of the button. It would not be difficult to modify the function to send data about the button push (such as the push type), or send additional data from AWS IoT (such as the button serial number).
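For instance, here is a hedged sketch (in Python; the helper name and message wording are illustrative, not my deployed code) of folding the push type and serial number into the notification text:

```python
from datetime import datetime, timezone

def build_help_message(event, button_name="aogr_1_resource_area", when=None):
    """Compose an SNS-style message that includes the push details."""
    when = when or datetime.now(timezone.utc)
    return (f"{button_name} help request at {when:%Y-%m-%d %H:%M:%S} "
            f"(button {event['serialNumber']}, {event['clickType']} press)")

msg = build_help_message(
    {"serialNumber": "G030JF055234P6KJ", "clickType": "SINGLE"},
    when=datetime(2016, 7, 1, 12, 0, tzinfo=timezone.utc))
print(msg)
```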


AWS SNS

AWS SNS allows sending push messages to subscribers of specific topics.

For my purposes, I created an AWS SNS topic called “aws-iot-button-sns-topic” to which I added two subscriptions: one e-mail and one SMS. As I wanted to receive notifications from pushes to the AWS IoT button, I used my e-mail address and mobile number. When messages are published to the topic from the Lambda function, I receive them via e-mail and on my phone.


While certainly not a very sophisticated use of IoT, creating the IoT Help Button provided a good opportunity for me to learn about AWS’s IoT offering. I also took the chance to build a nifty device which helps make me a little bit more effective at my job. I’m excited to try out connecting other devices to AWS IoT, so I can build something larger which drives a more complex set of services.

[1] Shawn Anderson
[2] Managing AWS Route 53 Hosted Zones with AWS Lambda and Managing AWS CloudFront Security Group with AWS Lambda

The post AWS IoT Help Button appeared first on Atomic Spin.

Easy Secure Web Serving with OpenBSD’s acme-client and Let’s Encrypt

As recently as just a few years ago, I hosted my personal website, VPN, and personal email on a computer running OpenBSD in my basement. I respected OpenBSD for providing a well-engineered, no-nonsense, and secure operating system. But when I finally packed up that basement computer, I moved my website to an inexpensive cloud server running Linux instead.

Linux was serviceable, but I really missed having an OpenBSD server. Then I received an email last week announcing that the StartSSL certificate I had been using was about to expire and realized I was facing a tedious manual certificate replacement process. I decided that I would finally move back to OpenBSD, running in the cloud on Vultr, and try the recently-imported acme-client (formerly “letskencrypt”) to get my HTTPS certificate from the free, automated certificate authority Let’s Encrypt.

Why You Should Get Your Certificates from ACME

Let’s Encrypt uses the Automated Certificate Management Environment protocol, more commonly known as ACME, to automatically issue the certificates that servers need to identify themselves to browsers. Prior to ACME, obtaining certificates was a tedious process, and it was no surprise when even high-profile sites’ certificates would expire. You can run an ACME client periodically to automatically renew certificates well in advance of their expiration, eliminating the need for the manual human intervention that can lead to downtime.

There are plenty of options for using ACME on your server, including the Let’s Encrypt-recommended Certbot. I found acme-client particularly attractive not just because it will ship with the next release of OpenBSD, but also because it’s well-designed, making good use of the privilege separation technique that OpenBSD pioneered as well as depending only on OpenBSD’s much-improved LibreSSL fork of OpenSSL.


To follow along with me, you’ll need OpenBSD. You can use the 6.0 release and install acme-client. If you’re feeling adventurous and are willing to maintain a bleeding-edge system, you can also run the -current branch, which already has acme-client.

If you do the smart thing and choose to use the release version, you’ll need to do a little extra setup after installing acme-client to align with the places things are in -current:

# mkdir -p /etc/acme /etc/ssl/acme/private /var/www/acme
# chmod 700 /etc/acme /etc/ssl/acme/private

And whenever you use acme-client, you’ll need to specify these paths, e.g.:

# acme-client \
        -C /var/www/acme \
        -c /etc/ssl/acme \
        -k /etc/ssl/acme/private/privkey.pem \
        -f /etc/acme/privkey.pem

Everything will work as advertised otherwise.

A note before we get started: If you’re new to OpenBSD, you owe it to yourself to get familiar with man(1). OpenBSD has amazingly good documentation for just about everything, and you can access it all by typing e.g. man httpd or man acme-client. Everything in this article came from my reads of these manpages. If you get stuck, try man first!

ACME will use a web server as part of its challenge-response process with the Let’s Encrypt service. To get this started, we’ll build out a basic /etc/httpd.conf based on our readings of httpd.conf(5) and acme-client(1):

server "default" {
        listen on * port 80

        location "/.well-known/acme-challenge/*" {
                root "/acme"
                root strip 2
        }
}

This is enough to start up a basic web server that will serve the challenge responses that acme-client will produce. Now, start httpd using rcctl(8):

# rcctl enable httpd
# rcctl start httpd
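Incidentally, the way root "/acme" and root strip 2 rewrite challenge request paths (per my reading of httpd.conf(5); httpd chroots to /var/www by default) can be sketched as:

```python
def resolve_challenge_path(request_path, root="/acme", strip=2,
                           chroot="/var/www"):
    """Mimic httpd's 'root' + 'root strip' handling for a request path."""
    parts = request_path.lstrip("/").split("/")
    remainder = parts[strip:]  # drop the first `strip` path components
    return "/".join([chroot + root] + remainder)

print(resolve_challenge_path("/.well-known/acme-challenge/token123"))
# -> /var/www/acme/token123
```

This is why the challenge files acme-client writes into /var/www/acme get served at /.well-known/acme-challenge/.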

Getting Your First Certificate

Once httpd is up and running, you’re ready to ask acme-client to perform all that heavy lifting that you used to have to do by hand, including:

  1. Generating your web server private and public key
  2. Giving your public key to the certificate authority
  3. Proving to the certificate authority that you’re authorized to have a certificate for the domains you’re requesting
  4. Retrieving the signed certificate

You can do all of this with a single command:

# acme-client -vNn example.com www.example.com

man acme-client will explain all that’s going on here:

  1. -v says we want verbose output, because we’re curious.
  2. -N asks acme-client to create the private key for our web server, if one does not already exist.
  3. -n asks acme-client to create the private key for our Let’s Encrypt account, if one does not already exist.
  4. The remaining arguments are the domains where we want our certificate to be valid—note that our web server must be reachable via those names for this process to work!

If this worked correctly, there will be some new keys and certificates on your system ready to be used to serve HTTPS.

Using the New Certificates with httpd

To get httpd working with our new certificates, we just need to expand /etc/httpd.conf a little:

server "default" {
        listen on * port 80
        listen on * tls port 443
        tls certificate "/etc/ssl/acme/fullchain.pem"
        tls key "/etc/ssl/acme/private/privkey.pem"

        location "/.well-known/acme-challenge/*" {
                root "/acme"
                root strip 2
        }
}

The three new lines above add a new HTTPS listener to our configuration, telling httpd where to find the certificate it should present and the private key it should use.

Once this configuration is in place, ask httpd to reload its configuration file:

# rcctl reload httpd

At this point, your server should be online with a valid Let’s Encrypt certificate, serving HTTPS—though giving you an error page, because httpd is not yet configured to serve any content. That bit is left as an exercise for the reader. (Consult httpd.conf(5) for further help there.)

Automating Yourself Out of a Certificate Renewal Job

By far the best part about ACME is that it can be easily configured to automatically renew your certificates before you notice they’re about to expire. Note that acme-client is written so that you simply need to run it periodically. Once the certificates are 30 days from expiration, it will get a fresh signature from Let’s Encrypt.
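The renewal decision reduces to a simple date comparison, sketched here (the 30-day window matches the behavior described above; the helper itself is illustrative):

```python
from datetime import datetime, timedelta

RENEWAL_WINDOW = timedelta(days=30)

def should_renew(not_after, now):
    """Renew once the certificate is within 30 days of expiring."""
    return not_after - now <= RENEWAL_WINDOW

now = datetime(2016, 9, 1)
print(should_renew(datetime(2016, 12, 1), now))  # -> False (months left)
print(should_renew(datetime(2016, 9, 20), now))  # -> True (under 30 days)
```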

Making this happen is as simple as dropping the following into /etc/daily.local (cf. daily(8)):

# renew Let's Encrypt certificate if necessary
acme-client example.com www.example.com
if [ $? -eq 0 ]
then
        rcctl reload httpd
fi

And now acme-client will run every night (by default at 1:30 a.m.) and renew your certificate when necessary.

Further Reading

This is a simple configuration, but it’s enough to run my web site and give me painless HTTPS that scores an A out-of-the-box on SSL Labs’ server test. I added a few lines to /etc/httpd.conf to serve the static content on my site, and I was done.

If you have a more complex configuration, though, chances are that httpd and acme-client are up to the task. To find out all they can do, read the man pages: httpd.conf(5) and acme-client(1).

If you want to know more about OpenBSD in general, check out the comprehensive OpenBSD FAQ.

Happy secure serving!

The post Easy Secure Web Serving with OpenBSD’s acme-client and Let’s Encrypt appeared first on Atomic Spin.

Conference Room A/V Build-Out

We recently moved to our new building at 1034 Wealthy. We took the opportunity to update the A/V equipment for our conference rooms. Previously, we largely relied on projectors for presentation capabilities, an external USB microphone/speaker for audio, built-in webcams on laptops for video, and a table where we staged everything. This worked, but it was certainly not ideal. With the new building, I had the opportunity to standardize a new conference room A/V build-out that would be better suited to our needs.

All of our new conference rooms now have a mobile TV stand which holds all of our A/V equipment. This includes a large flatscreen TV, dedicated webcam, dedicated microphone/speaker, and all necessary cables and connectors. Our new setup provides important capabilities required for many of our meetings, especially teleconferences: mobility, audio input, audio output, video input, and video output.



I chose the Kanto Living MTM82PL mobile TV mount, which includes the mounting hardware for a flatscreen TV, a small shelf, and a shelf for a webcam above the TV. It is a sleek, yet sturdy platform which allows our A/V build-out to be mobile. While largely dedicated to conference rooms, it can also be moved out to other areas–such as our cafe–for events or meet-ups.

Video Output

The Samsung 65″ Class KU6300 6-Series 4K UHD TV was selected as our primary display. This provides a much better picture and much higher resolution than the old projectors we were using. It has a native resolution of 3840 x 2160, a 64.5″ screen (diagonal), and 3 HDMI ports. While not all of our devices can support that resolution at this point (for example, AppleTVs only support up to 1080p), it still seemed like a worthwhile investment to help future-proof the solution.

Video Input

I chose the Logitech HD Pro Webcam C920 for video capabilities. It supports 1080p video when used with Skype for Windows, and 720p video when used with most other clients. The primary benefit of this webcam is that it can be mounted above the TV on the mobile stand, providing a wide view of the entire room–rather than just the person directly in front of the built-in laptop webcam.

Audio Input/Output

We had previously made use of the Phoenix Audio Duet PCS as a conference room “telephone” for web meetings–it provides better audio capabilities for a group of people than a stand-alone laptop. We placed one of these in each of the conference rooms as part of the A/V build-out. It acts as the microphone and speaker, while using the Logitech webcam for video input and the Samsung TV for video output.


Of course, I needed a few other items to tie all of these different capabilities together.


I purchased 20 ft. Luxe Series High-Speed HDMI cables so people can connect directly to the Samsung TVs for presentations. This type of connection allows computers to utilize the full resolution of the new TVs.


The Moshi Mini DisplayPort to HDMI adapter provides connectivity for those Atoms whose MacBooks do not natively support HDMI.

Presentation Helpers

I decided to purchase Apple TVs to allow for wireless presentation capabilities. With AirPlay, Macs (and other compatible devices) can transmit wirelessly to the TV–without the need for an HDMI cable. This is convenient for getting up and running quickly without any cable clutter, but it isn’t always appropriate (which is why a direct HDMI connection is available as well).

Cable Management

In addition to the standard cable ties and other cable management tricks, I’ve found that Cozy Industries, makers of the popular MagCozy, also makes a DisplayCozy. This helps keep the Moshi HDMI adapter with the HDMI cable.

Power Distribution

While the mobile TV cart provides a great deal of flexibility, the new building also has wide spaces between electrical outlets. To ensure that the A/V build-out would be usable in most spaces, I decided to add a surge protector with an extra-long cord. The Kensington Guardian 15′ works well for this.

Finished Product

Atomic A/V Cart
Atomic Mobile A/V Solution

The post Conference Room A/V Build-Out appeared first on Atomic Spin.

How (and Why) to Log Your Entire Bash History

For the last three and a half years, every single command I’ve run from the command line on my MacBook Pro has been logged to a set of log files.

Uncompressed, these files take up 16 MB of disk space on my laptop. But the return I’ve gotten on that small investment is immense. Being able to go back and find any command you’ve run in the past is so valuable, and it’s so easy to configure, you should definitely set it up today. I’m going to share how to do this so you can take advantage of it as well.

Bash Configuration File

You’ll need to configure an environment variable so that it’s loaded in every command line session. On my MacBook Pro, I use the .bash_profile file. On other operating systems, the .bashrc file is an option. See this blog post on .bash_profile vs .bashrc for more on the differences.


The Bash Prompt HOWTO describes the PROMPT_COMMAND environment variable as follows:

Bash provides an environment variable called PROMPT_COMMAND. The contents of this variable are executed as a regular Bash command just before Bash displays a prompt.

We’re going to set the PROMPT_COMMAND variable to be something that logs the most recent line of history to a file. To do this, add the following to your chosen Bash configuration file (.bash_profile for me):

export PROMPT_COMMAND='if [ "$(id -u)" -ne 0 ]; then echo "$(date "+%Y-%m-%d.%H:%M:%S") $(pwd) $(history 1)" >> ~/.logs/bash-history-$(date "+%Y-%m-%d").log; fi'

First, this checks to make sure we’re not root.

If that checks out, it appends a line that includes the current timestamp, the current working directory, and the last command executed to a log file that includes the current date in the filename.

Having the commands stored in separate files like this really helps when you’re trying to find a command you ran sometime last month, for example.

> grep -h logcat ~/.logs/bash-history-2016-04*
2016-04-01.10:18:03 /Users/me 66555  adb logcat
2016-04-01.10:19:56 /Users/me 66555  adb logcat
2016-04-01.11:01:36 /Users/me 66555  adb logcat
2016-04-05.09:50:25 /Users/me/git/android-project 66368  adb logcat
2016-04-05.13:42:54 /Users/me/git/android-project 66349  adb -s emulator-5554 logcat
2016-04-06.10:40:08 /Users/me/git/android-project 66390  adb logcat
2016-04-06.10:48:54 /Users/me/git/android-project 66342  adb logcat
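Because every line has the same shape (timestamp, working directory, then the `history 1` output of event number and command), the logs are easy to post-process. A small sketch:

```python
import re

# timestamp, working directory, history event number, then the command itself
LINE_RE = re.compile(
    r"^(?P<when>\S+)\s+(?P<cwd>\S+)\s+(?P<event>\d+)\s+(?P<cmd>.*)$")

def parse_log_line(line):
    """Split a bash-history log line into its fields (None if malformed)."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None

entry = parse_log_line(
    "2016-04-05.13:42:54 /Users/me/git/android-project 66349  adb -s emulator-5554 logcat")
print(entry["cmd"])  # -> adb -s emulator-5554 logcat
```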


It will only take a few seconds to update your PROMPT_COMMAND so that it logs every command to a file.

And the next time you’re trying to remember the command line options you used with find that one time (but can’t find in your current session’s history), you’ll be able to look it up in the log files.

Oh, and if you want to know how many times you’ve done a git push in the last three and a half years, you can look that up, too (5,585 git pushes for me)!

The post How (and Why) to Log Your Entire Bash History appeared first on Atomic Spin.

Ansible Communication with AWS EC2 Instances on a VPC

I’ve recently started using Ansible to manage Elastic Compute Cloud (EC2) hosts on Amazon Web Services (AWS). While it is possible to have public IP addresses for EC2 instances on an AWS Virtual Private Cloud (VPC), I opted to place the EC2 instances on a private VPC subnet which does not allow direct access from the Internet. This makes communicating with the EC2 instances a little more complicated.

While I could create a VPN connection to the VPC, this is rather cumbersome without a compatible hardware router. Instead, I opted to create a bastion host which allows me to connect to the VPC, and communicate securely with EC2 instances over SSH.

VPC Architecture

I run a fairly simple VPC architecture with four subnets, two public and two private, with one of each type paired in separate availability zones. The public subnets have direct Internet access, whereas the private subnets cannot be addressed directly, and must communicate with the Internet via a NAT gateway.
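The post doesn't give the actual CIDR blocks, so as an illustration (assuming the common 172.31.0.0/16 VPC network and /20 subnets, which are assumptions), the four-subnet split can be modeled with Python's ipaddress module:

```python
import ipaddress

vpc = ipaddress.ip_network("172.31.0.0/16")  # assumed VPC CIDR
# Carve four /20 subnets: two public and two private,
# one of each type paired per availability zone.
subnets = list(vpc.subnets(new_prefix=20))[:4]
labels = ["public-a", "public-b", "private-a", "private-b"]
for label, net in zip(labels, subnets):
    print(label, net)
# every subnet is inside the VPC, but the private ones are not
# routable from the Internet
assert all(net.subnet_of(vpc) for net in subnets)
```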


In the diagram, my computer wants to run Ansible against an EC2 instance in “Private Subnet 2.” The VPC uses a Class B private network; its addresses cannot be routed over the Internet. Furthermore, as “Private Subnet 2” does not have direct access to the Internet (outbound traffic flows through the NAT gateway), there is no way to assign the instance a public IP address.

On this network, in order to communicate with the EC2 instance, my computer must either be connected to the VPC with a VPN connection, or forward traffic via the bastion host. In my case, a VPN connection is not feasible, so I made use of the bastion host, which has both a publicly routable IP address and a private address on the VPC network.

SSH Jump Hosts

A common practice to reach hosts on an internal network which are not directly accessible is to use an SSH jump host. Once an SSH connection is made to the jump host, additional connections can be made from it to hosts on the internal network.

Generally, this looks something like the following (with <bastion-ip> standing in for the bastion’s public address and <private-ip> for the internal host):

jk@localhost:~$ ssh ubuntu@<bastion-ip>
ubuntu@bastion:~$ ssh ubuntu@<private-ip>

ssh jump host

This could also be simplified as one command invocation:

jk@localhost:~$ ssh -t ubuntu@<bastion-ip> 'ssh ubuntu@<private-ip>'

(Note the -t to force pseudo-TTY allocation.)

The connections from the jump host to other hosts do not necessarily need to be SSH connections. For example, a socket connection can be opened:

jk@localhost:~$ ssh ubuntu@<bastion-ip> 'nc <private-ip> 22'
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.4

SSH ProxyCommand

Ansible makes use of SSH to connect to remote hosts. However, it does not support configuration of an explicit SSH jump host. This would make it impossible for Ansible to connect to a private IP address without other networking (e.g. VPN) magic. Fortunately, Ansible takes common SSH configuration options, and will respect the contents of a system SSH configuration file.

The ProxyCommand option for SSH allows specifying a command to execute to connect to a remote host when connecting via SSH. This allows us to abstract the specifics of connecting to the remote host to SSH; we can get SSH to provide a jump host connection transparently to Ansible.

Essentially, ProxyCommand works by substituting the standard SSH socket connection with what is specified in the ProxyCommand option.

ssh -o ProxyCommand="ssh ubuntu@<bastion-ip> 'nc <private-ip> 22'" ubuntu@nothing

The above command will first connect to the bastion host via SSH, and then open a socket to the private instance on port 22. The socket connection (which is connected to the remote SSH server) is then handed to the original SSH client invocation to use. (The destination given on the command line, here “nothing,” is never resolved, since the ProxyCommand hardcodes the real target.)

The ProxyCommand option allows the interpolation of the original destination host and port via the %h and %p tokens. For example, running:

ssh -o ProxyCommand="ssh ubuntu@<bastion-ip> 'nc %h %p'" ubuntu@<private-ip>

is equivalent to running:

ssh -o ProxyCommand="ssh ubuntu@<bastion-ip> 'nc <private-ip> 22'" ubuntu@<private-ip>
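The token substitution SSH performs can be mimicked in a few lines (a sketch of the observable behavior, not OpenSSH's implementation; the addresses are placeholders):

```python
def expand_proxy_command(template, host, port):
    """Expand ssh_config-style %h/%p tokens; '%%' yields a literal '%'."""
    out = []
    i = 0
    while i < len(template):
        if template[i] == "%" and i + 1 < len(template):
            token = template[i + 1]
            if token in ("h", "p", "%"):
                out.append({"h": host, "p": str(port), "%": "%"}[token])
                i += 2
                continue
        out.append(template[i])
        i += 1
    return "".join(out)

cmd = expand_proxy_command("ssh ubuntu@<bastion-ip> 'nc %h %p'",
                           "172.31.32.10", 22)
print(cmd)  # -> ssh ubuntu@<bastion-ip> 'nc 172.31.32.10 22'
```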

SSH Configuration File

Using the ProxyCommand in conjunction with an SSH configuration file, we can make SSH connections to a private IP address appear seamless to whichever application is executing SSH.

For my VPC architecture described above, I could add the following to an SSH configuration file (again with <private-ip> and <bastion-ip> as placeholders):

Host <private-ip>
  ProxyCommand ssh ubuntu@<bastion-ip> nc %h %p

This makes all SSH connections to the private IP address seamless:

ssh -F ./mysshconfig_file ubuntu@<private-ip>

And, if using the default .ssh/config for storing your SSH configuration options, you don’t even need to specify the -F option:

ssh ubuntu@<private-ip>

All Together Now

Using the ProxyCommand option, it is simple to abstract away the details of the underlying connection to the EC2 instances on the private VPC subnet and allow Ansible to connect to those hosts normally. Any hosts on the private VPC subnet can be added explicitly to an SSH configuration file, or the pattern can be expanded. For example, we can apply the ProxyCommand option to all hosts on the VPC subnet:

Host 172.31.*.*
  ProxyCommand ssh ubuntu@<bastion-ip> nc %h %p

When running Ansible, the host’s inventory can simply specify the private IP address as the connection hostname/address, and SSH will handle the necessary underlying connections via the bastion host.

Generally, the system or user SSH configuration file (~/.ssh/config) can be used, but Ansible-specific SSH configuration options can also be included in the ansible.cfg file.

This is particularly convenient when using dynamic host inventories with EC2, which can automatically return the private IP addresses of new EC2 instances from the AWS APIs.
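A dynamic inventory's addresses can be checked against the same wildcard the Host block uses; a sketch with fnmatch (the same glob style ssh_config matches with, assuming the 172.31.0.0/16 network):

```python
from fnmatch import fnmatch

HOST_PATTERN = "172.31.*.*"  # mirrors the ssh_config Host block

def uses_bastion(hostname):
    """True if SSH would route this host through the ProxyCommand."""
    return fnmatch(hostname, HOST_PATTERN)

print(uses_bastion("172.31.40.7"))  # -> True (private VPC address)
print(uses_bastion("52.10.1.2"))    # -> False (public address, direct SSH)
```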

Additional SSH and nc flags can be added to the ProxyCommand option to enhance flexibility.

For example, adding in -A to enable SSH agent forwarding, -q to suppress extra SSH messages, -w to adjust the timeout for nc, and any other standard SSH configuration options:

Host 172.31.*.*
  User ec2-user
  ProxyCommand ssh -q -A ubuntu@<bastion-ip> nc -w 300 %h %p

The post Ansible Communication with AWS EC2 Instances on a VPC appeared first on Atomic Spin.

Managing AWS CloudFront Security Group with AWS Lambda

One of our security groups on Amazon Web Services (AWS) allows access to an Elastic Load Balancer (ELB) from one of our Amazon CloudFront distributions. Traffic from CloudFront can originate from a number of different source IP addresses that Amazon publishes. However, there is no pre-built security group to allow inbound traffic from CloudFront.

I constructed an AWS Lambda function to periodically update our security group so that we can ensure all CloudFront IP addresses are permitted to access our ELB.

AWS Lambda

AWS Lambda allows you to execute functions in a few different languages (Python, Java, and Node.js) in response to events. One of these events can be the triggering of a regular schedule. In this case, I created a scheduled event with an Amazon CloudWatch rule to execute a lambda function on an hourly basis.

CloudWatch Schedule to Lambda Function

The Idea

The core of my code involves calls to authorize_ingress and revoke_ingress using the boto3 library for AWS. AWS Lambda makes the boto3 library available for Python functions.

print("the following new ip addresses will be added:")
print(authorize_dict)
security_group.authorize_ingress(IpPermissions=[authorize_dict])

print("the following ip addresses will be removed:")
print(revoke_dict)
security_group.revoke_ingress(IpPermissions=[revoke_dict])

Amazon publishes the IP address ranges of its various services online.

response = urllib2.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json')
json_data = json.loads(response.read())
new_ip_ranges = [ x['ip_prefix'] for x in json_data['prefixes'] if x['service'] == 'CLOUDFRONT' ]
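Against a trimmed, made-up sample of that document (the real file is the ip-ranges.json Amazon publishes; the prefixes below are illustrative), the filter behaves like this:

```python
import json

# A trimmed, illustrative stand-in for Amazon's published ip-ranges.json.
sample = json.dumps({
    "prefixes": [
        {"ip_prefix": "54.192.0.0/16", "service": "CLOUDFRONT"},
        {"ip_prefix": "52.95.0.0/16", "service": "AMAZON"},
        {"ip_prefix": "204.246.164.0/22", "service": "CLOUDFRONT"},
    ]
})

json_data = json.loads(sample)
new_ip_ranges = [p["ip_prefix"] for p in json_data["prefixes"]
                 if p["service"] == "CLOUDFRONT"]
print(new_ip_ranges)  # -> ['54.192.0.0/16', '204.246.164.0/22']
```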

I can easily compare the allowed ingress address ranges in an existing security group with those retrieved from the published ranges. The authorize_ingress and revoke_ingress functions then allow me to modify the security group to keep it up-to-date and permit traffic from CloudFront to reach my ELB.

for ip in new_ip_ranges:
    if ip not in current_ip_ranges:
        authorize_dict['IpRanges'].append({u'CidrIp': ip})

for ip in current_ip_ranges:
    if ip not in new_ip_ranges:
        revoke_dict['IpRanges'].append({u'CidrIp': ip})

The AWS Lambda Function

The full lambda function is written as a standard lambda_handler for AWS. In this case, the event and context are ignored, and the code is just executed on a regular schedule.

Lambda Function

Notice that the existing security group is directly referenced as sg-3xxexx5x.

from __future__ import print_function
import copy, json, urllib2
import boto3

def lambda_handler(event, context):
    # Fetch Amazon's published IP ranges and keep only the CloudFront prefixes.
    response = urllib2.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json')
    json_data = json.loads(response.read())
    new_ip_ranges = [ x['ip_prefix'] for x in json_data['prefixes'] if x['service'] == 'CLOUDFRONT' ]

    ec2 = boto3.resource('ec2')
    security_group = ec2.SecurityGroup('sg-3xxexx5x')
    current_ip_ranges = [ x['CidrIp'] for x in security_group.ip_permissions[0]['IpRanges'] ]

    params_dict = {
        u'PrefixListIds': [],
        u'FromPort': 0,
        u'IpRanges': [],
        u'ToPort': 65535,
        u'IpProtocol': 'tcp',
        u'UserIdGroupPairs': []
    }

    # Deep copies so the two permission sets get independent IpRanges lists.
    authorize_dict = copy.deepcopy(params_dict)
    for ip in new_ip_ranges:
        if ip not in current_ip_ranges:
            authorize_dict['IpRanges'].append({u'CidrIp': ip})

    revoke_dict = copy.deepcopy(params_dict)
    for ip in current_ip_ranges:
        if ip not in new_ip_ranges:
            revoke_dict['IpRanges'].append({u'CidrIp': ip})

    print("the following new ip addresses will be added:")
    print(authorize_dict)
    print("the following ip addresses will be removed:")
    print(revoke_dict)

    if authorize_dict['IpRanges']:
        security_group.authorize_ingress(IpPermissions=[authorize_dict])
    if revoke_dict['IpRanges']:
        security_group.revoke_ingress(IpPermissions=[revoke_dict])

    return {'authorized': authorize_dict, 'revoked': revoke_dict}

The Security Policy

The above Lambda function presumes permissions to edit the referenced security group. These permissions can be configured with an AWS Identity and Access Management (IAM) policy, applied to the role which the Lambda function executes as.

Lambda function role

Notice that the security group resource, sg-3xxexx5x, is specifically scoped to the us-west-2 AWS region.

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": "arn:aws:logs:*:*:*"
            "Effect": "Allow",
            "Action": [
            "Resource": "*"
            "Effect": "Allow",
            "Action": [
            "Resource": "arn:aws:ec2:us-west-2:*:security-group/sg-3xxexx5x"

Making It All Work

In order to get everything hooked up correctly, an appropriate security group needs to exist. The identifier for the group needs to be referenced in both the Lambda script and the policy used by the role that the Lambda script executes as. The IAM policy uses the Amazon Resource Name (ARN) instead of the security group identifier.

The AWS Lambda function presumes that Amazon will publish changes to the CloudFront IP address range in a timely manner, and that running the function once per hour will be sufficient to grant ingress permissions on the security group. If the CloudFront ranges change frequently, or traffic is particularly crucial, the frequency of the Lambda function run should be increased.

The post Managing AWS CloudFront Security Group with AWS Lambda appeared first on Atomic Spin.

Bash Completion, Part 2: Programmable Completion

Don’t miss the previous post in this series: Bash Tab Completion

With Bash’s programmable completion functionality, we can create scripts that allow us to tab-complete arguments for specific commands. We can even include logic to handle deeply nested arguments for subcommands.

Programmable completion is a feature I’ve been aware of for some time, but I only recently took the time to figure out how it works. I’ll provide some links to more in-depth treatments at the end of this post, but for now, I want to share what I learned about using these other resources.

Completion Specifications

First, let’s take a look at what “completion specifications” (or “compspecs”) we have in our shell already. This list of compspecs essentially acts as a registry of handlers that offer completion options for different starting words. We can print a list of compspecs for our current shell using complete -p. The complete built-in is also used to register new compspecs, but let’s not get ahead of ourselves.

Here’s a sampling of compspecs from my shell:

$ complete -p
complete -o nospace -F _python_argcomplete gsutil
complete -o filenames -o nospace -F _pass pass
complete -o default -o nospace -F _python_argcomplete gcloud
complete -F _opam opam
complete -o default -F _bq_completer bq
complete -F _rbenv rbenv
complete -C aws_completer aws

Here, we have some rules for completing the arguments to the following commands:

  • gsutil
  • pass
  • gcloud
  • opam
  • bq
  • rbenv
  • aws

If I type any one of those commands into my shell followed by <TAB><TAB>, these rules will be used to determine the options Bash offers for completion.

OK, so, what are we looking at? Each of the compspecs in our list starts with complete and ends with the name of the command where it will provide programmable completion. Some of the compspecs here include some -o options, and we’ll get to those later. Each of these compspecs includes either -C or -F.

Completion Commands

The compspec for aws uses -C to specify a “completion command,” which is a command somewhere in our $PATH that will output completion options.

As input, the command receives two environment variables from Bash: COMP_LINE (the current line being completed) and COMP_POINT (the position in that line at which completion is taking place).

As output, the completion command is expected to produce a list of completion options (one per line). I won’t go into the details of this approach, but if you’re curious, you can read the source for the aws_completer command provided by Amazon’s aws-cli project.
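To make the protocol concrete, here is a toy sketch (not Amazon's implementation) of completion logic for a hypothetical hello command: it filters a fixed option list by the last word of COMP_LINE and prints one match per line:

```shell
# hello_complete: print one completion option per line, filtered by the
# last word of COMP_LINE (which Bash sets when invoking a -C command).
hello_complete() {
  local word="${COMP_LINE##* }"
  for opt in start stop status; do
    case "$opt" in
      "$word"*) printf '%s\n' "$opt" ;;
    esac
  done
}

COMP_LINE='hello sta' hello_complete   # → start, status (one per line)
```

Saved as an executable script on $PATH, the same logic would be registered with `complete -C hello-completer hello`.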

Completion Functions

A more common approach to completion is the use of custom completion functions. Each of the compspecs containing -F registers a completion function. These are simply Bash functions that make use of environment variables to provide completion options. By convention, completion functions begin with an underscore character (_), but there’s nothing magical about the function names.

Like the completion commands, completion functions receive the COMP_LINE and COMP_POINT environment variables. However, rather than providing line-based text output, completion functions are expected to set the COMPREPLY environment variable to an array of completion options. In addition to COMP_LINE and COMP_POINT, completion functions also receive the COMP_WORDS and COMP_CWORD environment variables.
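Here is a minimal, hypothetical completion function to make that concrete, completing arguments for an imaginary greet command:

```shell
# _greet: set COMPREPLY from a fixed word list, filtered by the word
# currently being completed (COMP_WORDS[COMP_CWORD]).
_greet() {
  local word="${COMP_WORDS[COMP_CWORD]}"
  COMPREPLY=( $(compgen -W "hello goodbye help" -- "$word") )
}
complete -F _greet greet

# Simulate what Bash does when completing `greet h<TAB>`:
COMP_WORDS=(greet h)
COMP_CWORD=1
_greet
echo "${COMPREPLY[@]}"   # → hello help
```

The simulation at the end is exactly what Bash's readline machinery does on our behalf: it populates the COMP_* variables, calls the function, and reads COMPREPLY back.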

Let’s look at some of these completion functions to see how they work. We can use the Bash built-in type command to print out these function definitions (even before we know where they came from).

$ type _rbenv
_rbenv is a function
_rbenv ()
{
    local word="${COMP_WORDS[COMP_CWORD]}";
    if [ "$COMP_CWORD" -eq 1 ]; then
        COMPREPLY=($(compgen -W "$(rbenv commands)" -- "$word"));
    else
        local words=("${COMP_WORDS[@]}");
        unset words[0];
        unset words[$COMP_CWORD];
        local completions=$(rbenv completions "${words[@]}");
        COMPREPLY=($(compgen -W "$completions" -- "$word"));
    fi
}

This example demonstrates a few common patterns. We see that COMP_CWORD can be used to index into COMP_WORDS to get the current word being completed. We also see that COMPREPLY can be set in one of two ways, both using some external helpers and a built-in command we haven’t seen yet: compgen. Let’s run through some possible input to see how this might work.

If we type:

$ rbenv h<TAB><TAB>

We’ll see:

$ rbenv h
help hooks

In this case, COMPREPLY comes from the first branch of the if statement (where COMP_CWORD is 1). The local variable word is set to h, and this is passed to compgen along with a list of possible commands generated by rbenv commands. The compgen built-in returns only those options from a given wordlist (-W) that start with the current word of the user’s input, $word. We can perform similar filtering with grep:

$ rbenv commands | grep '^h'

The second branch provides completion options for subcommands. Let’s walk through another example:

$ rbenv hooks <TAB><TAB>

Will give us:

$ rbenv hooks
exec    rehash  which

Each of these options simply comes from rbenv completions:

$ rbenv completions hooks

And since we haven’t provided another word yet, compgen is filtering with an empty string, analogous to:

$ rbenv completions hooks | grep '^'

If we instead provide the start of a word, we’ll have it completed for us:

$ rbenv hooks e<TAB>

Will give us:

$ rbenv hooks exec

In this case, our compgen invocation might be something like:

$ compgen -W "$(rbenv completions hooks)" -- "e"

Or we can imagine with grep:

$ rbenv completions hooks | grep '^e'

With just a single result in COMPREPLY, readline is happy to complete the rest of the word exec for us.

Registering Custom Completion Functions

Now that we know what it’s doing, let’s use Bash’s extended debugging option to find out where this _rbenv function came from:

$ shopt -s extdebug && declare -F _rbenv && shopt -u extdebug
_rbenv 1 /usr/local/Cellar/rbenv/0.4.0/libexec/../completions/rbenv.bash

If we look in this rbenv.bash file, we’ll see:

$ cat /usr/local/Cellar/rbenv/0.4.0/libexec/../completions/rbenv.bash
_rbenv() {
  local word="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    COMPREPLY=( $(compgen -W "$(rbenv commands)" -- "$word") )
    local words=("${COMP_WORDS[@]}")
    unset words[0]
    unset words[$COMP_CWORD]
    local completions=$(rbenv completions "${words[@]}")
    COMPREPLY=( $(compgen -W "$completions" -- "$word") )
complete -F _rbenv rbenv

We’ve already seen all of this! This file simply declares a new function and then registers a corresponding completion specification using complete. For this completion to be available, this file only needs to be sourced at some point. I haven’t dug into how rbenv does it, but I suspect that something in the eval "$(rbenv init -)" line included in our Bash profile ends up sourcing that completion script.

Parting Thoughts


The unsung hero of Bash’s programmable completion is really the readline library. This library is responsible for turning your <TAB> key-presses into calls to compspecs, as well as displaying or completing the resulting options those compspecs provide.

Some functionality of the readline library is configurable. One interesting option that can be set tells readline to immediately display ambiguous options after just one <TAB> key-press instead of two. With this option set, our above examples would look a little different. For example:

$ rbenv h<TAB><TAB>
help hooks

would only need to be:

$ rbenv h<TAB>
help hooks

If this sounds appealing, put the following in your ~/.inputrc:

set show-all-if-ambiguous on

To find out about other readline variables we could set in our ~/.inputrc (and to see their current values), we can use the Bash built-in command bind, with a -v flag.

$ bind -v
set bind-tty-special-chars on
set blink-matching-paren on
set byte-oriented off
set completion-ignore-case off
set convert-meta off
set disable-completion off
set enable-keypad off
set expand-tilde off
set history-preserve-point off
set horizontal-scroll-mode off
set input-meta on
set mark-directories on
set mark-modified-lines off
set mark-symlinked-directories off
set match-hidden-files on
set meta-flag on
set output-meta on
set page-completions on
set prefer-visible-bell on
set print-completions-horizontally off
set show-all-if-ambiguous off
set show-all-if-unmodified off
set visible-stats off
set bell-style audible
set comment-begin #
set completion-query-items 100
set editing-mode emacs
set keymap emacs

For more information, consult the relevant Bash info page node:

$ info -n '(bash)Readline Init File Syntax'

More on Completion

Larger completion scripts often contain multiple compspecs and several helpers. One convention I’ve seen several times is to name the helper functions with two leading underscores. If you find you need to write a large amount of completion logic in Bash, these conventions may be helpful to follow. As we’ve already seen, it’s also possible to handle some, most, or even all of the completion logic in other languages using external commands.
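The subcommand-dispatch pattern such larger scripts use can be sketched for an imaginary mytool command with a deploy subcommand (all names here are made up):

```shell
_mytool() {
  local word="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    # First argument: complete top-level subcommands.
    COMPREPLY=( $(compgen -W "build test deploy" -- "$word") )
  else
    # Later arguments: dispatch on the subcommand already entered.
    case "${COMP_WORDS[1]}" in
      deploy) COMPREPLY=( $(compgen -W "staging production" -- "$word") ) ;;
      *)      COMPREPLY=() ;;
    esac
  fi
}
complete -F _mytool mytool

# Simulate completing `mytool deploy sta<TAB>`:
COMP_WORDS=(mytool deploy sta)
COMP_CWORD=2
_mytool
echo "${COMPREPLY[@]}"   # → staging
```

In a real script, each case branch would typically call out to a double-underscore-prefixed helper function rather than inlining the compgen call.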

There is a package available from Homebrew called bash-completion that contains a great number of completion scripts for common commands. After installation, it also prompts the user to configure their Bash profile to source all of these scripts. They all live in a bash-completions.d directory under $(brew --prefix)/etc and can be good reading. A similar package should also be available for Linux (and probably originated there).

Speaking of similar features for different platforms, I should also mention that while this post focuses specifically on the programmable completion feature of the Bash shell, other shells have similar functionality. If you’re interested in learning about completion for zsh or fish, please see the links at the end of this post.

Further Reading

This is only the tip of the iceberg of what’s possible with Bash programmable completion. I hope that walking through a couple of examples has helped demystify what happens when tab completion magically provides custom options to commands. For further reading, see the links below.

The post Bash Completion, Part 2: Programmable Completion appeared first on Atomic Spin.

Bash Completion, Part 1: Using Tab Completion

One of the most useful features I learned when I first started working with Linux was the “tab completion” feature of Bash. This feature automatically completes unambiguous commands and paths when a user presses the <TAB> key. I’ll provide some examples to illustrate the utility of this feature.

Using Tab Completion

Completing Paths

I can open a terminal, and at the prompt ($), type:

$ open ~/Des<TAB>

This will automatically be completed to:

$ open ~/Desktop/

At this point, I can also use tab completion to get a list of ambiguous completion options, given what I’ve already entered. Here I have to press <TAB> twice.

$ open ~/Desktop/<TAB><TAB>

Will show me:

$ open ~/Desktop/
.DS_Store   .localized  hacker.jpg  rug/        wallpapers/
$ open ~/Desktop/

(I keep my desktop clean by periodically sweeping everything under the rug/ directory.)

Completing Commands

This completion feature can also be used to complete commands.
For example, if I type:

$ op<TAB><TAB>

I’ll see:

$ op
opam              opam-switch-eval  opensnoop
opam-admin        open              openssl
opam-installer    opendiff          opl2ofm
$ op

Or if I type:

$ ope<TAB>

I’ll see:

$ open

Learning Shell Commands with Tab Completion

This is useful for learning one’s way around a shell because it includes all the commands in the $PATH. When I first learned to use Bash and Linux, I used to tab-complete all the available commands starting with different letters of the alphabet. Then I’d pick those that sounded interesting, use which to find out where they were located, and use man to read about them.
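The same list that tab completion draws on can be generated directly with Bash's compgen built-in, which makes this kind of exploration scriptable:

```shell
# compgen -c lists every command-like completion candidate (binaries in
# $PATH, builtins, functions, aliases) matching a prefix -- the same
# candidates Bash offers for `op<TAB><TAB>`.
compgen -c op | sort -u | head -5
```

The exact output depends on what's installed, but piping through sort -u is useful because the same name can appear more than once (e.g., as both a builtin and a binary).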

For example, I might ask myself, what is opensnoop?

$ which opensnoop

Well, it’s located in /usr/bin, so it probably shipped with OS X–it isn’t something I installed with Homebrew since those commands end up in /usr/local/bin. I wonder what it does?

$ man opensnoop

This brings up the manual page, which tells me, among other things, that opensnoop is a command to “snoop file opens as they occur.” I also learn that it “Uses DTrace.” (If reading these manual pages or “manpages” is new to you, you can use the arrow keys to scroll up and down and press ‘q’ to quit when you’re done.)

Sometimes when I tried to open the manual page for a command, I was brought to a manual page for Bash’s own shell built-ins. This manpage was somewhat informative, but it didn’t really tell me much about how to use the command. I later learned that Bash has a help command that gives a brief overview of each built-in command. There’s also much more information available in Bash’s info documentation.

You may find command line interfaces opaque at first, but there is often helpful documentation available (without resorting to Google) if you know how to access it. Tab completion was an important first step for me when learning how to access traditional UNIX documentation.

Come back tomorrow, when I’ll explain programmable completion in Bash.

The post Bash Completion, Part 1: Using Tab Completion appeared first on Atomic Spin.

SSL Certificate Expiration Checker

IT Operations teams frequently have the responsibility to ensure that SSL certificates for various websites are valid and renewed on a regular basis. While SSL certificate vendors often provide reminders and warnings when the certificates are about to expire, this is not always effective–especially when a variety of different SSL vendors have been used, or different parties are responsible for purchasing and maintaining the certificate.

To prevent SSL certificate expirations from going unnoticed, I wrote an application that checks the certificates from a variety of sites and ensures that they will remain valid for a certain number of days in the future.

In order to track the different sites, we set up integration with Dead Man’s Snitch (DMS). This ensures that SSL certificate checks are being performed and that the results are valid. If a check does not execute or does not return a valid response (because the certificate will be expiring within a configured look-ahead period), DMS creates an alert and lets me know that action is required.

The Concept

The application takes a set of hostnames and associated DMS snitch IDs, retrieves the SSL certificate for each host, evaluates if the SSL certificate’s expiration date is valid within the configured look-ahead period, and then notifies the DMS snitch if the expiration date is valid. It’s intended to be run on a regular basis–at least daily. The configured look-ahead period should be long enough to ensure that any expiring certificates can be renewed in time to avoid a service interruption.

The application uses SSLSocket from Ruby’s OpenSSL library to retrieve the SSL certificate from a given host:

require 'socket'
require 'openssl'

tcp_socket ='', 443)   # hostname omitted in the original
ssl_socket =
certificate = ssl_socket.peer_cert

It returns an OpenSSL::X509::Certificate object:

=> #<OpenSSL::X509::Certificate: subject=#<OpenSSL::X509::Name:0x007f8769015cc8>, issuer=#<OpenSSL::X509::Name:0x007f8769015ca0>, serial=#<OpenSSL::BN:0x007f8769015c78>, not_before=2015-04-24 00:00:00 UTC, not_after=2016-04-24 23:59:59 UTC>

The SSL certificate expiration date is stored in the not_after attribute:

expiration = certificate.not_after.to_datetime

which produces:

=> #<DateTime: 2016-04-24T23:59:59+00:00 ((2457503j,86399s,0n),+0s,2299161j)>

If the certificate expiration date is further into the future than our configured look-ahead date, then the SSL certificate is considered valid, and the associated DMS snitch is notified.

For example, if we configured our look-ahead period to be four weeks (28 days), the expiration date would be considered valid:

lookahead = + 28
expiration > lookahead
=> true

If we configured our look-ahead period to be four months (120 days), the expiration date would not be considered valid:

lookahead = + 120
expiration > lookahead
=> false
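The same look-ahead test can be approximated from the shell with the openssl CLI. The sketch below generates a throwaway self-signed certificate as a fixture rather than contacting a real host:

```shell
# Create a self-signed certificate valid for 30 days (test fixture only;
# file paths and subject are made up for this example).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout /tmp/key.pem -out /tmp/cert.pem -days 30 2>/dev/null

# -checkend N exits 0 if the certificate is still valid N seconds from now.
if openssl x509 -in /tmp/cert.pem -noout -checkend $((28 * 86400)); then
  echo "valid through the look-ahead period"
else
  echo "expiring within the look-ahead period"
fi
```

With a 120-day look-ahead ($((120 * 86400))), the same 30-day certificate fails the check, mirroring the Ruby examples above.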

The Application

I’ve published the application on GitHub. You can test it out by cloning it. All you need to do is add a config.yml following the example provided in config.yml.example.

There are two rake tasks: one that simply performs checks, and another that performs checks and notifies the configured DMS snitch. The current application only sends notifications to DMS, but it could be easily extended to notify other services. Our default look-ahead period is two weeks (14 days), but that can be altered within the Rakefile.

A config.yml like:

- host:
  comment: "dummy certificate"
  port: 443
  snitch: aabbccddff
- host:
  comment: "primary certificate"
  port: 443
  snitch: aabbccddee

will check the SSL certificates for ‘’ and ‘’. The following is sample output from running bundle exec rake ssl:check_and_notify:

E, [2016-01-10T13:06:13.617778 #9772] ERROR -- : Certificate for is expired.
I, [2016-01-10T13:06:13.807550 #9772]  INFO -- : Certificate for is within expiry.
I, [2016-01-10T13:06:13.807608 #9772]  INFO -- : Notifying snitch aabbccddee
I, [2016-01-10T13:06:14.011433 #9772]  INFO -- : Notifying snitch aabbccddee succeeded.

Running bundle exec rake ssl:check will perform the checks without notifying DMS.

This application helps ensure that the SSL certificates for all of Atomic’s sites and many of our client sites are kept current and not allowed to expire. Whenever one of the checks for a SSL certificate fails, we are immediately notified. This gives us time to begin the process of renewing the certificate so that we can install it on the necessary servers well ahead of the expiration date.

This application has been extremely valuable to us, and I hope it will be useful to others as well.

The post SSL Certificate Expiration Checker appeared first on Atomic Spin.

Distributing Command Line Tools with Docker

Last time, I covered some of the basics of using Docker for isolated local development environments. This time, I’d like to talk about how Docker can be used to distribute command line tools with complex dependencies in a portable way.

Before I go any further, I want to point out that I am not the first person to use Docker in this way. For another example, see the command line interface for Code Climate’s new platform.


Why would you want to distribute a command line application with a container instead of running it directly on your host? One reason could be that your application has a complicated setup and installation process. For example, your application might require a lot of additional libraries to be installed. Or, your language of choice might not provide a good means of distributing applications without first installing all of the developer tools (e.g. Ruby1,2). There are often language-specific alternatives to this approach, but using Docker as a distribution mechanism can work for most anything you can install within a Linux container.

Simple Example: GNU Date

For a contrived example, let’s say you want to make use of the version of date(1) distributed with Ubuntu instead of the version available on OS X. (Yes, you can get GNU coreutils from Homebrew–this is a contrived example!) Let’s say we want to use date to get an ISO8601-formatted date from a relative date, say “next Friday.” We can do that using docker run like so:

$ docker run --rm -ti ubuntu:12.04 date -d "next Friday" -I

As you can see, we can directly invoke a command contained in a specific image, and pass it arguments. Let’s take this a step further and make a wrapper script:

#!/bin/bash
# gnu-date - a wrapper script for invoking `date(1)` from within a Docker image
docker run --rm -ti ubuntu:12.04 date "$@"

If we save this as gnu-date, mark it as executable, and put it somewhere in our $PATH, we can invoke it like so:

$ gnu-date -d "next Friday" -I

Using a wrapper script like this to invoke docker run allows us to distribute our own applications.
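The important detail in these wrappers is the quoted "$@", which forwards every argument to the containerized command unchanged, even arguments containing spaces. A Docker-free illustration:

```shell
# A stand-in for `docker run ... date` that just prints one argument per line,
# so we can see exactly what the wrapped command would receive.
wrap() { printf '%s\n' "$@"; }

wrap -d "next Friday" -I
# → three lines: -d / next Friday / -I
```

With an unquoted $@ (or $*), "next Friday" would split into two separate arguments, breaking the wrapped command.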

Custom Images

As a more realistic example, let’s assume we have a GLI-based Ruby command line app we’d like to distribute to users who are not Ruby developers, but do have Docker Toolbox installed. We can write a Dockerfile to build an image based on the ruby:2.2 image like so:

FROM ruby:2.2
COPY ./ruby-cli-app /app
RUN cd /app \
 && bundle install
ENTRYPOINT ["ruby-cli-app"]

And we can build our image:

$ docker build -t ruby-cli-app .

And run it:

$ docker run --rm -ti ruby-cli-app help
ruby-cli-app - Describe your application here
ruby-cli-app [global options] command [command options] [arguments...]
	-f, --flagname=The name of the argument - Describe some flag here (default: the default)
	--help - Show this message
	-s, --[no-]switch - Describe some switch here
	--version - Display the program version
	help - Shows a list of commands or help for one command

By using an ENTRYPOINT, all of the arguments to docker run following our image name are passed as arguments to our application.

Distributing via Docker Hub

To actually distribute our application in this way, we can publish our custom image on Docker Hub. Here’s a Makefile and a more advanced wrapper script:


PREFIX ?= /usr/local
VERSION = "v0.0.1"

all: install

install:
	mkdir -p $(DESTDIR)$(PREFIX)/bin
	install -m 0755 ruby-cli-app-wrapper $(DESTDIR)$(PREFIX)/bin/ruby-cli-app

uninstall:
	@$(RM) $(DESTDIR)$(PREFIX)/bin/ruby-cli-app
	@docker rmi atomicobject/ruby-cli-app:$(VERSION)
	@docker rmi atomicobject/ruby-cli-app:latest

build:
	@docker build -t atomicobject/ruby-cli-app:$(VERSION) . \
	&& docker tag -f atomicobject/ruby-cli-app:$(VERSION) atomicobject/ruby-cli-app:latest

publish: build
	@docker push atomicobject/ruby-cli-app:$(VERSION) \
	&& docker push atomicobject/ruby-cli-app:latest

.PHONY: all install uninstall build publish


#!/bin/bash
# ruby-cli-app
# A wrapper script for invoking ruby-cli-app with docker
# Put this script in $PATH as `ruby-cli-app`

PROGNAME="$(basename "$0")"
VERSION="v0.0.1"  # keep in sync with the Makefile

# Helper functions for guards
error(){
  error_code=$1
  echo "ERROR: $2" >&2
  echo "($PROGNAME wrapper version: $VERSION, error code: $error_code )" >&2
  exit "$error_code"
}

check_cmd_in_path(){
  cmd="$1"
  which "$cmd" > /dev/null 2>&1 || error 1 "$cmd not found!"
}

# Guards (checks for dependencies)
check_cmd_in_path docker
check_cmd_in_path docker-machine
docker-machine active > /dev/null 2>&1 || error 2 "No active docker-machine VM found."

# Set up mounted volumes, environment, and run our containerized command
exec docker run \
  --interactive --tty --rm \
  --volume "$PWD":/wd \
  --workdir /wd \
  "atomicobject/ruby-cli-app:$VERSION" "$@"

Now that we have a container-based distribution mechanism for our application, we’re free to make use of whatever dependencies we need within the Linux container. We can use mounted volumes to allow our application to access files and even sockets from the host. We could even go as far as the Code Climate CLI does, and take control of Docker within our container to download and run additional images.


The biggest downside of this approach is that it requires users to first have Docker installed. Depending on your application, however, having a single dependency on Docker may be much simpler to support. Imagine, for example, having dependencies on multiple libraries across multiple platforms and dealing with other unexpected interactions with your users’ system configurations–this would be a great situation to choose Docker.

There’s another gotcha to watch out for when running more complex setups: It can be confusing to keep track of which files are and are not accessible via mounted volumes.


All of the examples above can also be found on our GitHub.


I am actively using this approach on an internal tool (to build and deploy Craft CMS-based websites) right now. If you also try out this approach, I’d love to hear about it! Please leave questions or comments below. Thanks!

The post Distributing Command Line Tools with Docker appeared first on Atomic Spin.