Bash Completion, Part 2: Programmable Completion

Don’t miss the previous post in this series: Bash Tab Completion

With Bash’s programmable completion functionality, we can create scripts that allow us to tab-complete arguments for specific commands. We can even include logic to handle deeply nested arguments for subcommands.

Programmable completion is a feature I’ve been aware of for some time, but I only recently took the time to figure out how it works. I’ll provide some links to more in-depth treatments at the end of this post, but for now, I want to share what I learned about using these other resources.

Completion Specifications

First, let’s take a look at what “completion specifications” (or “compspecs”) we have in our shell already. This list of compspecs essentially acts as a registry of handlers that offer completion options for different starting words. We can print a list of compspecs for our current shell using complete -p. The complete built-in is also used to register new compspecs, but let’s not get ahead of ourselves.

Here’s a sampling of compspecs from my shell:

$ complete -p
complete -o nospace -F _python_argcomplete gsutil
complete -o filenames -o nospace -F _pass pass
complete -o default -o nospace -F _python_argcomplete gcloud
complete -F _opam opam
complete -o default -F _bq_completer bq
complete -F _rbenv rbenv
complete -C aws_completer aws

Here, we have some rules for completing the arguments to the following commands:

  • gsutil
  • pass
  • gcloud
  • opam
  • bq
  • rbenv
  • aws

If I type any one of those commands into my shell followed by <TAB><TAB>, these rules will be used to determine the options Bash offers for completion.

OK, so, what are we looking at? Each of the compspecs in our list starts with complete and ends with the name of the command where it will provide programmable completion. Some of the compspecs here include some -o options, and we’ll get to those later. Each of these compspecs includes either -C or -F.

Completion Commands

The compspec for aws uses -C to specify a “completion command,” which is a command somewhere in our $PATH that will output completion options.

As input, the command receives two environment variables from Bash: COMP_LINE and COMP_POINT. These represent the current line being completed and the point at which completion is taking place.

As output, the completion command is expected to produce a list of completion options (one per line). I won’t go into the details of this approach, but if you’re curious, you can read the source for the aws_completer command provided by Amazon’s aws-cli project.
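To make this concrete, here's a sketch of a minimal completion command. The hello-completer name and its option list are made up for illustration, but the input/output contract is the one described above: Bash sets COMP_LINE and COMP_POINT in the environment and reads completion options from standard output, one per line.

```shell
#!/bin/bash
# hello-completer - a minimal completion command (hypothetical example).
# Bash sets COMP_LINE and COMP_POINT in the environment before invoking it
# and expects completion options on stdout, one per line.

hello_completer() {
  # The word being completed is the last (possibly empty) word on the line.
  local current_word="${COMP_LINE##* }"
  local option
  for option in start status stop; do
    case "$option" in
      "$current_word"*) printf '%s\n' "$option" ;;
    esac
  done
}

hello_completer
```

Placed somewhere on $PATH and registered with complete -C hello-completer hello, this would let Bash offer start and status when completing hello sta&lt;TAB&gt;&lt;TAB&gt;.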

Completion Functions

A more common approach to completion is the use of custom completion functions. Each of the compspecs containing -F registers a completion function. These are simply Bash functions that make use of environment variables to provide completion options. By convention, completion functions begin with an underscore character (_), but there’s nothing magical about the function names.

Like the completion commands, completion functions receive the COMP_LINE and COMP_POINT environment variables. However, rather than providing line-based text output, completion functions are expected to set the COMPREPLY environment variable to an array of completion options. In addition to COMP_LINE and COMP_POINT, completion functions also receive the COMP_WORDS and COMP_CWORD environment variables.

Let’s look at some of these completion functions to see how they work. We can use the Bash built-in type command to print out these function definitions (even before we know where they came from).

$ type _rbenv
_rbenv is a function
_rbenv ()
{
    local word="${COMP_WORDS[COMP_CWORD]}";
    if [ "$COMP_CWORD" -eq 1 ]; then
        COMPREPLY=($(compgen -W "$(rbenv commands)" -- "$word"));
    else
        local words=("${COMP_WORDS[@]}");
        unset words[0];
        unset words[$COMP_CWORD];
        local completions=$(rbenv completions "${words[@]}");
        COMPREPLY=($(compgen -W "$completions" -- "$word"));
    fi
}

This example demonstrates a few common patterns. We see that COMP_CWORD can be used to index into COMP_WORDS to get the current word being completed. We also see that COMPREPLY can be set in one of two ways, both using some external helpers and a built-in command we haven’t seen yet: compgen. Let’s run through some possible input to see how this might work.

If we type:

$ rbenv h<TAB><TAB>

We’ll see:

$ rbenv h
help hooks

In this case, COMPREPLY comes from the first branch of the if statement (COMP_CWORD is 1). The local variable word is set to h, and this is passed to compgen along with a list of possible commands generated by rbenv commands. The compgen built-in returns only those options from a given wordlist (-W) that start with the current word of the user’s input, $word. We can perform similar filtering with grep:

$ rbenv commands | grep '^h'
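We can also experiment with compgen directly at the prompt, supplying a wordlist of our own (no rbenv required):

```shell
# compgen prints the words from the wordlist (-W) that begin with
# the word to be completed ("h" here):
compgen -W "help hooks init install local" -- h
# help
# hooks
```

Whatever compgen prints, one option per line, is what a completion function would capture into COMPREPLY.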

The second branch provides completion options for subcommands. Let’s walk through another example:

$ rbenv hooks <TAB><TAB>

Will give us:

$ rbenv hooks
exec    rehash  which

Each of these options simply comes from rbenv completions:

$ rbenv completions hooks

And since we haven’t provided another word yet, compgen is filtering with an empty string, analogous to:

$ rbenv completions hooks | grep '^'

If we instead provide the start of a word, we’ll have it completed for us:

$ rbenv hooks e<TAB>

Will give us:

$ rbenv hooks exec

In this case, our compgen invocation might be something like:

$ compgen -W "$(rbenv completions hooks)" -- "e"

Or we can imagine with grep:

$ rbenv completions hooks | grep '^e'

With just a single result in COMPREPLY, readline is happy to complete the rest of the word exec for us.

Registering Custom Completion Functions

Now that we know what it’s doing, let’s use Bash’s extended debugging option to find out where this _rbenv function came from:

$ shopt -s extdebug && declare -F _rbenv && shopt -u extdebug
_rbenv 1 /usr/local/Cellar/rbenv/0.4.0/libexec/../completions/rbenv.bash

If we look in this rbenv.bash file, we’ll see:

$ cat /usr/local/Cellar/rbenv/0.4.0/libexec/../completions/rbenv.bash
_rbenv() {
  local word="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    COMPREPLY=( $(compgen -W "$(rbenv commands)" -- "$word") )
  else
    local words=("${COMP_WORDS[@]}")
    unset words[0]
    unset words[$COMP_CWORD]
    local completions=$(rbenv completions "${words[@]}")
    COMPREPLY=( $(compgen -W "$completions" -- "$word") )
  fi
}

complete -F _rbenv rbenv

We’ve already seen all of this! This file simply declares a new function and then registers a corresponding completion specification using complete. For this completion to be available, this file only needs to be sourced at some point. I haven’t dug into how rbenv does it, but I suspect that something in the eval "$(rbenv init -)" line included in our Bash profile ends up sourcing that completion script.
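As a sketch of what a completion script of our own might look like, here's one for a hypothetical greet command that expects a subcommand followed by a name. The command and both wordlists are invented for illustration; the pattern is the same one rbenv.bash uses.

```shell
# Completion for a hypothetical `greet` command:
#   greet <hello|goodbye> <name>
_greet() {
  local word="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    # First argument: complete subcommand names.
    COMPREPLY=( $(compgen -W "hello goodbye" -- "$word") )
  else
    # Later arguments: complete names.
    COMPREPLY=( $(compgen -W "alice bob carol" -- "$word") )
  fi
}
complete -F _greet greet
```

After sourcing this file (from a Bash profile, for instance), greet h&lt;TAB&gt; completes to hello, and greet hello &lt;TAB&gt;&lt;TAB&gt; offers the three names.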

Parting Thoughts


The unsung hero of Bash’s programmable completion is really the readline library. This library is responsible for turning your <TAB> key-presses into calls to compspecs, as well as displaying or completing the resulting options those compspecs provide.

Some functionality of the readline library is configurable. One interesting option that can be set tells readline to immediately display ambiguous options after just one <TAB> key-press instead of two. With this option set, our above examples would look a little different. For example:

$ rbenv h<TAB><TAB>
help hooks

would only need to be:

$ rbenv h<TAB>
help hooks

If this sounds appealing, put the following in your ~/.inputrc:

set show-all-if-ambiguous on

To find out about other readline variables we could set in our ~/.inputrc (and to see their current values), we can use the Bash built-in command bind, with a -v flag.

$ bind -v
set bind-tty-special-chars on
set blink-matching-paren on
set byte-oriented off
set completion-ignore-case off
set convert-meta off
set disable-completion off
set enable-keypad off
set expand-tilde off
set history-preserve-point off
set horizontal-scroll-mode off
set input-meta on
set mark-directories on
set mark-modified-lines off
set mark-symlinked-directories off
set match-hidden-files on
set meta-flag on
set output-meta on
set page-completions on
set prefer-visible-bell on
set print-completions-horizontally off
set show-all-if-ambiguous off
set show-all-if-unmodified off
set visible-stats off
set bell-style audible
set comment-begin #
set completion-query-items 100
set editing-mode emacs
set keymap emacs

For more information, consult the relevant Bash info page node:

$ info -n '(bash)Readline Init File Syntax'

More on Completion

Larger completion scripts often contain multiple compspecs and several helpers. One convention I’ve seen several times is to name the helper functions with two leading underscores. If you find you need to write a large amount of completion logic in Bash, these conventions may be helpful to follow. As we’ve already seen, it’s also possible to handle some, most, or even all of the completion logic in other languages using external commands.

There is a package available from Homebrew called bash-completion that contains a great number of completion scripts for common commands. After installation, it also prompts the user to configure their Bash profile to source all of these scripts. They all live in a bash_completion.d directory under $(brew --prefix)/etc and can make for good reading. A similar package should also be available for Linux (and probably originated there).

Speaking of similar features for different platforms, I should also mention that while this post focuses specifically on the programmable completion feature of the Bash shell, other shells have similar functionality. If you’re interested in learning about completion for zsh or fish, please see the links at the end of this post.

Further Reading

This is only the tip of the iceberg of what’s possible with Bash programmable completion. I hope that walking through a couple of examples has helped demystify what happens when tab completion magically provides custom options to commands. For further reading, see the links below.

The post Bash Completion, Part 2: Programmable Completion appeared first on Atomic Spin.

Bash Completion, Part 1: Using Tab Completion

One of the most useful features I learned when I first started working with Linux was the “tab completion” feature of Bash. This feature automatically completes unambiguous commands and paths when a user presses the <TAB> key. I’ll provide some examples to illustrate the utility of this feature.

Using Tab Completion

Completing Paths

I can open a terminal, and at the prompt ($), I can type:

$ open ~/Des<TAB>

This will automatically be completed to:

$ open ~/Desktop/

At this point, I can also use tab completion to get a list of ambiguous completion options, given what I’ve already entered. Here I have to press <TAB> twice.

$ open ~/Desktop/<TAB><TAB>

Will show me:

$ open ~/Desktop/
.DS_Store   .localized  hacker.jpg  rug/        wallpapers/
$ open ~/Desktop/

(I keep my desktop clean by periodically sweeping everything under the rug/ directory.)

Completing Commands

This completion feature can also be used to complete commands.
For example, if I type:

$ op<TAB><TAB>

I’ll see:

$ op
opam              opam-switch-eval  opensnoop
opam-admin        open              openssl
opam-installer    opendiff          opl2ofm
$ op

Or if I type:

$ ope<TAB>

I’ll see:

$ open

Learning Shell Commands with Tab Completion

This is useful for learning one’s way around a shell because it includes all the commands in the $PATH. When I first learned to use Bash and Linux, I used to tab-complete all the available commands starting with different letters of the alphabet. Then I’d pick those that sounded interesting, use which to find out where they were located, and use man to read about them.

For example, I might ask myself, what is opensnoop?

$ which opensnoop

Well, it’s located in /usr/bin, so it probably shipped with OS X; it isn’t something I installed with Homebrew, since those commands end up in /usr/local/bin. I wonder what it does?

$ man opensnoop

This brings up the manual page, which tells me, among other things, that opensnoop is a command to “snoop file opens as they occur.” I also learn that it “Uses DTrace.” (If reading these manual pages or “manpages” is new to you, you can use the arrow keys to scroll up and down and press ‘q’ to quit when you’re done.)

Sometimes when I tried to open the manual page for a command, I was brought to the manual page for Bash’s own shell built-ins. This manpage was somewhat informative, but it didn’t really tell me much about how to use the command. I later learned that Bash has a help command that gives a brief overview of each built-in command. There’s also much more information available in Bash’s info documentation.
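For example, type reports what kind of command a name refers to, and help documents a built-in directly (the output shown is what Bash prints for cd):

```shell
# Is cd a file on $PATH, a function, or a shell built-in?
type cd
# cd is a shell builtin

# Get a brief overview of the built-in:
help cd | head -n 2
```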

You may find command line interfaces opaque at first, but there is often helpful documentation available (without resorting to Google) if you know how to access it. Tab completion was an important first step for me when learning how to access traditional UNIX documentation.

Come back tomorrow, when I’ll explain programmable completion in Bash.

The post Bash Completion, Part 1: Using Tab Completion appeared first on Atomic Spin.

Distributing Command Line Tools with Docker

Last time, I covered some of the basics of using Docker for isolated local development environments. This time, I’d like to talk about how Docker can be used to distribute command line tools with complex dependencies in a portable way.

Before I go any further, I want to point out that I am not the first person to use Docker in this way. For another example, see the command line interface for Code Climate’s new platform.


Why would you want to distribute a command line application with a container instead of running it directly on your host? One reason could be that your application has a complicated setup and installation process. For example, your application might require a lot of additional libraries to be installed. Or, your language of choice might not provide a good means of distributing applications without first installing all of the developer tools (e.g. Ruby1,2). There are often language-specific alternatives to this approach, but using Docker as a distribution mechanism can work for most anything you can install within a Linux container.

Simple Example: GNU Date

For a contrived example, let’s say you want to make use of the version of date(1) distributed with Ubuntu instead of the version available on OS X. (Yes, you can get GNU coreutils from Homebrew; this is a contrived example!) Let’s say we want to use date to get an ISO8601-formatted date from a relative date, say “next Friday.” We can do that using docker run like so:

$ docker run --rm -ti ubuntu:12.04 date -d "next Friday" -I

As you can see, we can directly invoke a command contained in a specific image, and pass it arguments. Let’s take this a step further and make a wrapper script:

#!/bin/bash
# gnu-date - a wrapper script for invoking `date(1)` from within a Docker image
docker run --rm -ti ubuntu:12.04 date "$@"

If we save this as gnu-date, mark it as executable, and put it somewhere in our $PATH, we can invoke it like so:

$ gnu-date -d "next Friday" -I

Using a wrapper script like this to invoke docker run allows us to distribute our own applications.

Custom Images

As a more realistic example, let’s assume we have a GLI-based Ruby command line app we’d like to distribute to users who are not Ruby developers, but do have Docker Toolbox installed. We can write a Dockerfile to build an image based on the ruby:2.2 image like so:

FROM ruby:2.2
COPY ./ruby-cli-app /app
RUN cd /app \
 && bundle install
ENTRYPOINT ["ruby-cli-app"]

And we can build our image:

$ docker build -t ruby-cli-app .

And run it:

$ docker run --rm -ti ruby-cli-app help
ruby-cli-app - Describe your application here
ruby-cli-app [global options] command [command options] [arguments...]
	-f, --flagname=The name of the argument - Describe some flag here (default: the default)
	--help - Show this message
	-s, --[no-]switch - Describe some switch here
	--version - Display the program version
	help - Shows a list of commands or help for one command

By using an ENTRYPOINT, all of the arguments to docker run following our image name are passed as arguments to our application.

Distributing via Docker Hub

To actually distribute our application in this way, we can publish our custom image on Docker Hub. Here’s a Makefile and a more advanced wrapper script:


PREFIX ?= /usr/local
VERSION = "v0.0.1"

all: install

install: build
	mkdir -p $(DESTDIR)$(PREFIX)/bin
	install -m 0755 ruby-cli-app-wrapper $(DESTDIR)$(PREFIX)/bin/ruby-cli-app

uninstall:
	@$(RM) $(DESTDIR)$(PREFIX)/bin/ruby-cli-app
	@docker rmi atomicobject/ruby-cli-app:$(VERSION)
	@docker rmi atomicobject/ruby-cli-app:latest

build:
	@docker build -t atomicobject/ruby-cli-app:$(VERSION) . \
	&& docker tag -f atomicobject/ruby-cli-app:$(VERSION) atomicobject/ruby-cli-app:latest

publish: build
	@docker push atomicobject/ruby-cli-app:$(VERSION) \
	&& docker push atomicobject/ruby-cli-app:latest

.PHONY: all install uninstall build publish


#!/bin/bash
# ruby-cli-app
# A wrapper script for invoking ruby-cli-app with docker
# Put this script in $PATH as `ruby-cli-app`

PROGNAME="$(basename $0)"
VERSION="v0.0.1"

# Helper functions for guards
error(){
  error_code=$1
  echo "ERROR: $2" >&2
  echo "($PROGNAME wrapper version: $VERSION, error code: $error_code )" >&2
  exit $1
}

check_cmd_in_path(){
  cmd=$1
  which $cmd > /dev/null 2>&1 || error 1 "$cmd not found!"
}

# Guards (checks for dependencies)
check_cmd_in_path docker
check_cmd_in_path docker-machine
docker-machine active > /dev/null 2>&1 || error 2 "No active docker-machine VM found."

# Set up mounted volumes, environment, and run our containerized command
exec docker run \
  --interactive --tty --rm \
  --volume "$PWD":/wd \
  --workdir /wd \
  "atomicobject/ruby-cli-app:$VERSION" "$@"

Now that we have a container-based distribution mechanism for our application, we’re free to make use of whatever dependencies we need within the Linux container. We can use mounted volumes to allow our application to access files and even sockets from the host. We could even go as far as the Code Climate CLI does, and take control of Docker within our container to download and run additional images.


The biggest downside of this approach is that it requires users to first have Docker installed. Depending on your application, however, having a single dependency on Docker may be much simpler to support. Imagine, for example, having dependencies on multiple libraries across multiple platforms and dealing with other unexpected interactions with your users’ system configurations; this would be a great situation in which to choose Docker.

There’s another gotcha to watch out for when running more complex setups: It can be confusing to keep track of which files are and are not accessible via mounted volumes.


All of the examples above can also be found on our GitHub.


I am actively using this approach on an internal tool (to build and deploy Craft CMS-based websites) right now. If you also try out this approach, I’d love to hear about it! Please leave questions or comments below. Thanks!

The post Distributing Command Line Tools with Docker appeared first on Atomic Spin.

Docker Basics for Local Development

There’s been a lot of talk about how Docker can be used in conjunction with tools like Kubernetes to manage clusters of highly scalable microservices.

But Docker can also be a very useful tool for local development, especially when it comes to making repeatable builds and environments faster and easier.

Getting Started

Docker is under very active development, and the best way to get started seems to change every couple of months. As of this writing (2015-09-28), the best way to get set up on OS X seems to be via Docker Toolbox (which can also be downloaded using Homebrew Cask: brew cask install dockertoolbox.)

Docker Toolbox automates the setup of the Docker runtime and several supporting tools. On OS X, this means ensuring that VirtualBox is installed, installing docker-machine and using it to create a VirtualBox boot2docker1 VM to host the Docker daemon.

Once Docker Toolbox is installed, we should have access to the Docker Quickstart Terminal. This application simply opens OS X’s Terminal.app with a wrapper script, running a few checks and setting up environment variables so that the docker commands know how to find the Docker daemon running on the VM.

Rather than jumping over to the Docker Quickstart Terminal every time I want to do something with Docker locally, I’ve simply added the following to my Bash configuration so that it sets up the correct environment with each new shell session:

# Connect docker client to Docker Toolbox's boot2docker VM
# (A docker-machine created VirtualBox VM called 'default')
eval $(docker-machine env default)

After sourcing this file or starting a new shell, we should have some environment variables set for Docker:

$ env | grep DOCKER

Now, I can run docker ps and other Docker commands from any Bash shell and connect to the Docker daemon running on the default Virtual Machine.

Testing It Out

Let’s say we’ve been developing a Rails app on OS X, and we’d like to test it on Linux so we can document its dependencies in preparation for a production deployment. We could use a tool like Vagrant to spin up a Linux VM from a known basebox, but if we’re splitting our time between several projects, we may not want to deal with the overhead of downloading and running several full VMs on our development workstation.

Since Docker uses containers, we only need one VM running Linux (or none if our workstation is already running Linux, but here, we’re assuming development on OS X). Container images can be much smaller than full VM images, and they can also be spun up much faster since the VM that hosts them only has to boot once.

Let’s start an Ubuntu container and see if we can get our application running inside of it.

docker run --rm -ti ubuntu:14.04 bash

This will pull down the official Ubuntu 14.04 Docker image from the _/ubuntu Docker Hub repo (with the 14.04 tag) if we don’t already have it locally. It will create a new container using that image (in the default VM provided by Docker Toolbox), then run the command bash within that container, attaching it to our terminal in interactive mode (-ti). After running, it will remove the container (--rm), not saving any modifications that we may have made to it.

Once all of the layers2 of the image are pulled down, we should see something like this:

root@8d6cb95178f4:/#

By default, we’re logged in to the running container as root. The 8d6cb95178f4 is the container’s ID. We can use this later to operate on the container. If we poke around a bit, we’ll see that we’re in a minimal Ubuntu Linux environment. Let’s exit and try something more advanced.

This time, let’s attach a volume containing the source code for our application. This will let us access that directory from within the running container. WARNING: This is not a copy! Modifications will also be made to the directory on our workstation.

From our application’s source directory, try this:

docker run --rm -ti -v $PWD:/src ubuntu:14.04 bash

We’ve added -v $PWD:/src. This will mount the current working directory from our host as a volume at /src in the container3.

We should now be able to change to that directory and poke around. We should be able to see files from our app’s source repository and, working within the container, create files that show up on our workstation.

For this example, I’m using the sample Rails 4 app from Michael Hartl’s Rails Tutorial.

root@9c0c1fc48459:/# cd /src/
root@9c0c1fc48459:/src# ls
Gemfile  Gemfile.lock  Guardfile  LICENSE  Rakefile  app  bin  config  db  features  lib  log  public  script  spec  vendor
root@9c0c1fc48459:/src# cat config.ru
# This file is used by Rack-based servers to start the application.
require ::File.expand_path('../config/environment',  __FILE__)
run SampleApp::Application
root@9c0c1fc48459:/src# touch fromdocker.txt
root@9c0c1fc48459:/src# ls
Gemfile       Gemfile.lock  Guardfile  LICENSE  Rakefile  app  bin  config  db  features  fromdocker.txt  lib  log  public  script  spec  vendor
root@9c0c1fc48459:/src# exit
vonnegut:sample_app_rails_4 english$ ls
Gemfile       Gemfile.lock  Guardfile  LICENSE  Rakefile  app  bin  config  db  features  fromdocker.txt  lib  log  public  script  spec  vendor

We now have the basic tools for doing local development with Docker. We’ve pulled images from Docker Hub, we’ve run local containers based on those images, and we’ve mounted volumes into these containers allowing us to interact with files on our host workstation.


1. The name “boot2docker” has been used to refer to both the minimal VM for hosting a Docker daemon and the package that was a predecessor to the Docker Toolbox. Here, we refer to the minimal VM with the Docker daemon installed. The Boot2Docker package has been superseded by the Docker Toolbox as the preferred means of installing Docker on OS X.

2. Docker images are built up in layers using union filesystems with copy on write (CoW) semantics. This allows some lower layers to be shared between different images. Pulling down an image pulls down all of the image’s layers. If we omitted the --rm from our docker run commands and made modifications to the container’s filesystem, we could save a new image with a new layer added on top of all the previous layers. For more on Docker’s union filesystems, see Jérôme Petazzoni’s “Deep Dive into Docker Storage Drivers”.

3. One of the things that Docker Toolbox simplifies for us is mounting filesystems into the boot2docker VM. By default, it sets up a file sharing mount of /Users to /Users on the boot2docker VM. This allows -v in our docker run commands to mount individual directories into individual containers. If you need to mount something outside of /Users, you will need to manually set up the file sharing in VirtualBox to support it.

The post Docker Basics for Local Development appeared first on Atomic Spin.

Commandline Craft: Creating a Craft Console Plugin

I recently worked on automating a deployment step for a website built with Craft. Specifically, I wanted to clear some caches during a deploy. Previously this had been a manual step done through the admin interface, but it was easy to forget. Furthermore, invalidating the CloudFront cache without first invalidating the Craft cache meant that sometimes CloudFront would re-cache old pages and images.

During deploys, we already run several other commands on the server (for example: to update file permissions and create symlinks) so I set out to find a way to expose this cache-clearing functionality through the command line.

I found good documentation for creating Craft plugins, but the details of creating plugins specifically intended to be run from the command line were not clear. I’d like to share what I learned about how to do this. I won’t go into detail about the cache-clearing code, but I’ll demonstrate how to set up a console plugin and invoke it from the command line.

Yii Command Runner

Craft is built on the Yii framework and includes a command runner, yiic, under craft/app/etc/console/:

$ /Applications/MAMP/bin/php/php5.6.7/bin/php ./craft/app/etc/console/yiic help
Yii command runner (based on Yii v1.1.16)

Usage: ./craft/app/etc/console/yiic <command-name> [parameters...]

The following commands are available:
 - base
 - migrate
 - querygen
 - shell

To see individual command help, use the following:
   ./craft/app/etc/console/yiic help <command-name>

These commands run in a context where the Craft code has already been bootstrapped, so (in theory, at least) anything that can be done through the Craft admin panel can also be done from a custom yiic command here.

The benefit of exposing functionality through this command line interface is ease of automation. Other scripts can be written to call these commands for you, for example during a deploy, or from a cron job that runs on a regular basis.

After creating and installing our plugin, we’ll have a new custom command available:

$ /Applications/MAMP/bin/php/php5.6.7/bin/php ./craft/app/etc/console/yiic help
Yii command runner (based on Yii v1.1.16)

Usage: ./craft/app/etc/console/yiic <command-name> [parameters...]

The following commands are available:
 - base
 - helloworld
 - migrate
 - querygen
 - shell

To see individual command help, use the following:
   ./craft/app/etc/console/yiic help <command-name>

Creating a Basic Plugin

To create our simple Craft console plugin, we’ll first start with a basic plugin as described in the Craft docs. For the sake of programming tradition, we’ll call ours “Hello World” so let’s start with a directory helloworld/ containing a HelloWorldPlugin.php file:

<?php
namespace Craft;

class HelloWorldPlugin extends BasePlugin
{
    function getName()
    {
        return Craft::t('Hello World');
    }

    function getVersion()
    {
        return '0.0.1';
    }

    function getDeveloper()
    {
        return 'Atomic Object';
    }

    function getDeveloperUrl()
    {
        return '';
    }
}
?>

We’ll put this helloworld/ directory under craft/plugins/ when we install it.

craft/
│ ...
├── plugins/
│   │ ...
│   ├── helloworld/
│   │   └── HelloWorldPlugin.php
...

Creating a Craft Console Plugin

Now, for making something accessible from the command line, we’ll create a new BaseCommand. This will go in a file called HelloWorldCommand.php under a subdirectory called consolecommands.

craft/
│ ...
├── plugins/
│   │ ...
│   ├── helloworld/
│   │   ├── HelloWorldPlugin.php
│   │   ├── consolecommands/
│   │   │   └── HelloWorldCommand.php
...

In HelloWorldCommand.php we’ll put the following code:

<?php
namespace Craft;

class HelloWorldCommand extends BaseCommand
{
    public function actionHello()
    {
        echo "Hello World!\n";
    }
}
?>

If we check yiic again, we might expect to see our command, but…

$ /Applications/MAMP/bin/php/php5.6.7/bin/php ./craft/app/etc/console/yiic help
Yii command runner (based on Yii v1.1.16)

Usage: ./craft/app/etc/console/yiic <command-name> [parameters...]

The following commands are available:
 - base
 - migrate
 - querygen
 - shell

To see individual command help, use the following:
   ./craft/app/etc/console/yiic help <command-name>

It’s still not there. Before it will appear, we need to enable our plugin in the Craft admin panel (under Settings → Plugins).

Now when we check `yiic`, we see our command:

$ /Applications/MAMP/bin/php/php5.6.7/bin/php ./craft/app/etc/console/yiic help
Yii command runner (based on Yii v1.1.16)

Usage: ./craft/app/etc/console/yiic <command-name> [parameters...]

The following commands are available:
 - base
 - helloworld
 - migrate
 - querygen
 - shell

To see individual command help, use the following:
   ./craft/app/etc/console/yiic help <command-name>

Let’s check that help…

$ /Applications/MAMP/bin/php/php5.6.7/bin/php ./craft/app/etc/console/yiic help helloworld
Usage: ./craft/app/etc/console/yiic helloworld hello

So, it looks like the actionHello() function we created earlier is mapped to the subcommand hello.

Let’s run it:

$ /Applications/MAMP/bin/php/php5.6.7/bin/php ./craft/app/etc/console/yiic helloworld hello
Hello World!


Now we’re equipped to create more complex Craft console plugins that invoke things like craft()->templateCache->deleteAllCaches() in our PHP code and make these features available from the command line. We can use the Yii command runner to call these new commands from bash scripts, deployment scripts, and cron jobs. Automating Craft tasks just got a lot easier.


Credit to the following Stack Exchange posts for pointing me in the right direction:

The post Commandline Craft: Creating a Craft Console Plugin appeared first on Atomic Spin.

Sticky Documentation, Part 2: Source Control History as Documentation

Last week, I introduced a concept I’m calling “sticky documentation” and reviewed a few ways that we can make the most of the “stickiest” documentation we have: the code. Today, I’d like to talk about another form of “sticky” documentation: source control history.

If you have access to the code for an application, and that code has been kept under some form of source control, it’s quite likely you’ll have access to the source control history as well. In other words, the source control history is likely to stick around.

How can we make the most of source control history as a form of documentation for our projects? What will be most valuable to future code archeologists digging in our repositories?

Properties of Source Control History

First, let’s distinguish some properties of source control (a.k.a “version control” or “revision control”) history from other forms of documentation like the code itself. The code can tell us what’s happening and can help us understand the overall structure of a software system; source control history can tell us how things came to be that way. Oftentimes, when troubleshooting a problem, the story of how things came to be is immensely valuable.

Like the code, source control is near at hand during development—working with source control is often a necessary part of testing and deploying new code. When it’s an integrated part of the development process, it can’t be neglected completely, but it can certainly be underutilized.

Making the Most of Source Control History

Here are some ways to make source control history a more valuable asset for your project. (I’ll be using Git as an example, but most of the same practices apply to other tools as well.)

1. Make clean commits

Try to scope commits to well-defined units of work: a specific feature, a specific bug fix, a specific code cleanup task. It’s often tempting to commit a bug fix and a feature, or a whole afternoon’s work on three separate features and fixes all at once, but being disciplined about maintaining “clean” commits can go a long way toward making your source control history tractable in the future.

When using Git, I sometimes use git commit -p to help split up changes when I slip up and have a few lines that belong in a separate commit. I can leave those lines unstaged while committing other changes to the same file. It’s tedious if you have a large number of changes that need to be split, so it’s still important to maintain discipline, but git commit -p is really handy for small fix-ups.
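
For the simplest case, splitting unrelated work into clean commits just means staging and committing separately. A minimal sketch in a throwaway repo (the file names, messages, and demo identity are invented for illustration):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git -c init.defaultBranch=main init -q .

# Two unrelated changes land in the working tree at once...
echo "feature work" > feature.txt
echo "bug fix" > bugfix.txt

# ...but each gets its own well-scoped commit.
git add feature.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Add feature X"
git add bugfix.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Fix bug Y"

git log --oneline
```

When the unrelated lines live in the *same* file, this is where the interactive hunk selection of git add -p or git commit -p comes in.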

2. Write good commit messages

Take the time to write good commit messages. If you’re making an effort to make clean commits, you should at least be able to explain what the purpose of your commit is. Also consider the formatting of your commit messages. If you’ve made a complex change that might be hard to remember the reason for later, add a few paragraphs of explanation to the commit message. Just remember to keep the subject line concise.

This is another area that requires discipline. Sometimes, in the heat of troubleshooting a thorny problem, it’s tempting to start committing with messages like “Trying something else” or “Update” or even to use less-than-polite language. Don’t.

With Git, I often use git commit --amend to improve my commit messages before pushing. If you have a lot of (unpushed) troubleshooting commits, you might also consider squashing them down to the one meaningful change, and giving that a good commit message before pushing.
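
A quick sketch of that amend workflow in a throwaway repo (the messages and demo identity are invented):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git -c init.defaultBranch=main init -q .

echo one > notes.txt
git add notes.txt
# A hasty troubleshooting-style message...
git -c user.name=demo -c user.email=demo@example.com commit -q -m "wip"

# ...replaced with a meaningful one before pushing.
git -c user.name=demo -c user.email=demo@example.com commit --amend -q -m "Add release notes for v1.2"

git log -1 --format=%s
```

Remember that --amend rewrites the commit, so only do this before the commit has been pushed and shared.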

3. Take advantage of branches

This varies somewhat by tool, but for Git at least, it’s very easy to create new branches. Take advantage of this fact and use branches as another way to structure your work. Being able to follow development on individual features through the stages of QA to release can help to clarify where things stand, and can paint a much clearer picture of how the code currently in production came to be.

With Git, you can create explicit merge commits with git merge --no-ff <branchname>. This clearly sets apart all of the commits that were made on that branch as a group. It also gives you another opportunity to leave a meaningful commit message. Consider using a formal branching model like “Git flow”, or adapting it to a form that fits your workflow best. Whatever your branching model, be consistent and make it part of your team’s workflow. This too takes discipline, but it’s discipline that will pay off when you look back at your source control history and can see at a glance when major features and fixes landed on master.
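
A sketch of what --no-ff buys you, using a hypothetical feature/login branch in a throwaway repo (names and identity are invented):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git -c init.defaultBranch=main init -q .
commit() { git -c user.name=demo -c user.email=demo@example.com commit -q "$@"; }

echo base > app.txt
git add app.txt && commit -m "Initial commit"

# Do feature work on its own branch.
git checkout -q -b feature/login
echo login > login.txt
git add login.txt && commit -m "Add login form"

# Back on the main branch: --no-ff forces a merge commit even though a
# fast-forward is possible, grouping the branch's work and giving us a
# place for a meaningful message.
git checkout -q -
git -c user.name=demo -c user.email=demo@example.com merge -q --no-ff \
  -m "Merge feature/login: basic login form" feature/login

git log --merges --oneline
```

With a plain fast-forward merge, the branch structure would be invisible in the resulting history.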

4. Include Identifiers

When writing commit messages, you can include other unique identifiers in them like issue numbers from an issue tracker or keywords that can tag a particular type of change. Github automatically creates hyperlinks for issue numbers and usernames, but you don’t need to have that for unique identifiers to be useful in commit messages.

Sometimes when troubleshooting an issue, I come to the code via a commit that included the ID of an issue I was looking at. Sometimes, it’s the other way around. Either way, cross-references to and from another source of information can be really helpful, so long as it’s not an excuse to omit necessary information from your commit messages. It’s quite possible that the code repository may someday be accessible to someone who does not have access to the original issue tracker.

5. Tag Releases

Tags can be easy to forget when you’re deploying code behind the scenes and not making a publicly downloadable release artifact. Consider making automatic tag creation part of your deploy process.

A tag with the date that your code was deployed to a particular environment can be awfully useful when trying to figure out what might have caused an issue for an end user 2 weeks ago. Tags can also provide helpful specificity when talking about “the release with feature X that we deployed last month.”
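
One possible shape for that automatic deploy tag — the production- prefix is just an example naming scheme, and the repo setup here is only scaffolding for the demo:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git -c init.defaultBranch=main init -q .
echo app > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Initial commit"

# In a deploy script: record what was shipped and when with an annotated tag.
tag="production-$(date +%Y-%m-%d)"
git -c user.name=demo -c user.email=demo@example.com tag -a "$tag" -m "Deployed to production"

git tag -l
```

Annotated tags (-a) carry their own author, date, and message, which makes them better deployment records than lightweight tags; just remember to push them with git push --tags.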


Programs must be written for people to read, and only incidentally for machines to execute.
– Hal Abelson and Gerald Sussman, “Structure and Interpretation of Computer Programs”

Code is for humans. Generating documentation should be part of writing the code. Forms of documentation that “stick” with the code are often overlooked, but extremely valuable. Source control history can be used to tell the story of how a software system came to be. The most valuable source control history requires a degree of discipline to generate, but is not unattainable. The effort required to produce valuable source control history is less than what would be required to generate the same level of detailed code documentation in another medium.

Code and source control history are just two forms of “sticky” documentation. I think that a good test suite could qualify as a third form of “sticky” documentation. Are there others? How does your team generate and maintain “sticky” documentation?

The post Sticky Documentation, Part 2: Source Control History as Documentation appeared first on Atomic Spin.

Sticky Documentation, Part 1: Code as Documentation

I support and maintain a variety of applications in production. Some of these applications consist of what might be considered “legacy” codebases. When troubleshooting issues with these applications, detailed and accurate external documentation is not always available. I often find myself acting as a code archaeologist, reliant on only the contents of the source code repo to get to the bottom of a thorny problem.

In these situations, I’ve found that source code repositories contain at least two important forms of documentation:

  1. The code: can be self-documenting, insofar as it clearly expresses intent and data flow
  2. The revision control history: can tell detailed stories of how a piece of code came to be

In my opinion, these have the potential to be the most important documentation your app can have.
I’d like to share some observations on how cultivating good habits can make these two forms of documentation more valuable.

Why Code as Documentation

When I say that code can act as documentation, I explicitly do not mean comments. Comments have their place, but it’s easy for a comment to get separated or out of sync with the code it pertains to. It’s much better to write expressive code, when possible.

So long as you have the code, you have… the code. It’s the stickiest form of documentation available. External documentation can get stale, and it’s not always made available to all the right people when teams transition—things can get lost in the shuffle. It’s likely though, that you’ll have a copy of the source when providing technical support or working on the application, and even more likely when the application is a web application in an interpreted language where the source is what gets deployed.

How Code as Documentation

There are a few different ways that I’ve recently seen code be expressive and self-documenting. There are certainly more ways than I’ll cover here, but these are a few where I’ve recently seen the benefit firsthand.

1. Micro improvements: Naming behavior

The first is at a micro-level: within a file, it can be helpful to give descriptive names to a series of steps. Creating a separate, named method for five to ten lines of code that seems obvious to you can go a long way toward making it immediately clear to a later reader.

I was reminded of this recently when pairing with Matt Fletcher. We were test-driving the development of a Chef cookbook to set up servers for his project and he pointed out a few places where such descriptive helper methods added clarity. As a bonus, we were able to “DRY” up some code, too, but the primary goal was to make the code easier to understand.

2. Macro improvements: Abstractions

The second is at a macro-level: when designing a large project, it’s important to be mindful about the abstractions you use, and how the architecture of your codebase supports clear thinking about the core business logic. Drew Colthorp recently gave a presentation at SoftwareGR about just this thing.

Once a project is underway, further enhancements to your abstractions are sometimes worth refactoring for, but it’s important to prioritize that work according to the value it provides.

3. Clarifying Techniques for Control and Data Flow

Finding ways to clearly express control flow and data flow within your code goes a long way toward making it tractable for a later reader. This is especially true for asynchronous code—there are a lot of benefits to asynchronous code, but clear expression of control flow isn’t always one of them.

Leaning on well-known idioms and opinionated frameworks can help in some contexts, so long as the reader has a chance to learn those idioms as well. Straying from those norms can incur significant penalties when it comes to the readability and maintainability of your code.

Explicitly defined finite state machines are another aid for clarifying control flow. When it comes to data flow, it likewise helps to lean on local idioms. Another approach, similar in spirit to state machines, is to build explicit pipelines for data flow. A series of functional transformations is often easier to understand as a pipeline than when the same behavior is spread across interactions between several components or buried within complex objects.
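
The Unix shell itself is a classic illustration of the pipeline idea: each stage is one transformation, and the data's journey reads left to right. A small sketch counting word frequencies (the input words are made up for the example):

```shell
# sort groups identical lines, uniq -c counts each group,
# sort -rn ranks by count, head -1 keeps the most frequent.
printf 'deploy\nfix\ndeploy\nfeature\ndeploy\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -1
# prints the most frequent word with its count, e.g. "3 deploy"
```

Each stage can be understood, tested, and replaced in isolation — the same property that makes explicit data pipelines readable inside application code.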

Next week we’ll talk about making the most of another form of “sticky” documentation: revision control history.

The post Sticky Documentation, Part 1: Code as Documentation appeared first on Atomic Spin.

Remote-First Communication for Project Teams


“If anyone is remote, you’re all remote.”

At Atomic Object, we value co-located teams. But not every team member can always be co-located. Larger project teams may have members from multiple offices. Some projects might involve working closely with other vendors. I experience this “remoteness” when I support the infrastructure needs of teams in our Ann Arbor and Detroit offices.

When these situations arise, it helps if your communication style is already what I would call “remote-first”.

What is remote-first communication?

Remote-first communication prioritizes communicating with those who are not here now. Whether that’s a team member who’s working from home with a bad head cold or your client on the coast, successful projects go the extra mile to communicate effectively with those who are remote.

Generally speaking, remote-first communication means preferring written, searchable methods of communication that work even when the sender and receiver aren’t engaged at the same time. This means that phone calls, while potentially much better at conveying tone and establishing emotional connections, cannot be the default method of connecting with teammates.

Remember, the group of people on your team who are “not here now” also includes anyone who might work on the project at any point in the future, including yourself. Remote-first communication has the knock-on effect of acting as a form of documentation—recording conversations had, decisions made, resources shared, etc.

Privacy Limits & No-blame Culture

Remote-first communication that clearly documents problems encountered, ideas proposed, and decisions reached works best in organizations with strong no-blame cultures. The extent to which team members fear their words being used against them in the future limits their candor in ways that can greatly impede coming to solutions. This is generally true, but the lasting verbatim record of remote-first communication greatly amplifies the need to tend to this culture.

Remote-first Tools used by Atomic

Remote-first communication is not just about the media used, but also the way in which they are used, the patterns of communication. Teams may use a variety of tools to communicate, but remote-first communication patterns have, as a default, a medium that favors communication with remote team members.

Here are some tools I’ve seen used in this way at Atomic:

  • Trello
  • Jabber/XMPP/GTalk
  • HipChat
  • Slack
  • IRC
  • Pivotal Tracker
  • Basecamp
  • E-mail

What is important to note is that these platforms can be and are used even by team members who work right next to each other.

Remote in the Open

One downside to our open office environment can be distracting noise levels from neighboring project teams. While we’re exploring ways we can change our space to mitigate such issues, remote-first communication also helps keep noise levels down.

Another benefit of remote-first communication is that it can create a space for people who prefer written communication to speaking aloud. This might be a team member who is shy, or whose first language isn’t English, or who is hard of hearing—there are a lot of reasons people might be more comfortable communicating through text. I know that I sometimes value having a moment to switch contexts and collect my thoughts before responding to an instant message, a luxury not always afforded by direct in-person questions.

Remote-first Is not Remote-always

Remote-first communication does not mean that your only communication should be written. There are certainly times when in-person conversations or phone calls will be much more effective. Particularly when it comes to delivering disappointing news or when first getting to know your client or team, it can be much better to converse in a way that allows an emotional connection.

In short, teams that use text-based communication tools as a default communication medium are better equipped for remote communication across both space and time.

How do your teams practice remote first communication?

Further Reading on Communicating with Remote Teams:

The post Remote-First Communication for Project Teams appeared first on Atomic Spin.

Things I Learned while Pairing on odo

I recently had the opportunity to pair with Scott Vokes on a side project.
He had an idea for a simple C program and let me drive while we talked through the design. In a few short hours, I learned a lot more than I expected. I’ll add the list below.

Learning through Pairing

At Atomic Object, we’ve been pairing pragmatically on projects as a way of solving problems more effectively. While working with Scott on odo, I was reminded of how well pairing also works for knowledge transfer. By sitting down for an hour or two and doing the work together, the gaps in my knowledge that were pertinent to the problem at hand were quickly exposed. With Scott answering questions based on his experience, I was spared many hours of Googling and weighing different solutions and approaches against each other.

One area I was surprised to find this particularly valuable was when it came to questions of style and project setup. I don’t write a lot of C, so starting on something like this alone would have left me with a lot of anxiety about the right way to do things: what kind of Makefile should I have? Should I be using autotools? Where should I put my brackets? How should I align this block of code? Having an expert there from whom to borrow opinions on these things saved a lot of worry.

As an aside: another place you might look to answer some of these questions for a handful of project types is How I Start. Also, I really like that in the Go programming language, go fmt eliminates much of this cognitive overhead about formatting.

With these environmental and stylistic issues easily dealt with thanks to Scott’s experience, we were able to talk through the problem and design, and discuss the trade-offs of a couple different approaches to atomicity. For more on that, see Scott’s post on odo.

If you have the time, patience, and a good friend or mentor with the same, I would highly recommend trying out this approach to learning something new.

(By the way, Hacker School students will have the opportunity to pair with Scott in late January, when he will spend a week there as a Hacker School Resident).

What I Learned

Here are a few of the things I learned while pairing with Scott.

More about Emacs

(I usually use Vim, but recently forked Scott’s Emacs config):

  • How to maintain a 2-window split and flip quickly between recently active buffers
  • How to use built-in hooks for make
  • Efficient patterns of use:
    • Open what docs/files you need, let midnight mode clean things up later
    • Using bindings that can toggle between buffers (e.g. C-z for shell or last buffer)
  • Which parts of Scott’s config don’t work for me (e.g. bindings that make more sense on Dvorak)

More about Makefiles

  • How to set compiler flags
  • When it’s not yet necessary to have a Makefile

More about C

  • Style preferences (more on this later)
  • Easy typos to make (e.g. ‘=’ for ‘==’) and how to catch errors
    • E.g. “Yoda conditions” (writing the constant first, as in 1 == x) make an accidental assignment a syntax error
    • But you can also just turn on more compiler warnings & test to catch it
  • Pointer / dereferencing syntax is confusing, and I shouldn’t feel bad for thinking that
    • This is one of those things you end up just internalizing after spending a lot of time in the language
  • More examples of why working with pointers and pointer math can be dangerous
  • How to organize small programs (e.g. start by describing internal API in a .h file)

More about syscalls

  • Wow! Man pages for syscalls are full of really useful info!
  • Also: the Stevens book is great
  • Also: compare different OSes for interesting differences (the FreeBSD docs site has a good collection of docs for various OSes in the Unix family)
  • mmap is really powerful (and dangerous). Wow.
  • Error checking/handling patterns (not entirely unlike what I’ve seen in Go…)

More about (using) C compilers

  • Systems programming has its own yaks to shave
    • I’m used to dealing with Ruby’s “dependency hell” but had idealized systems programming as yak-free
    • It turns out that there are plenty of incidental complexities around C programming, too.
    • e.g. trying to find out about support for compiler-provided features (e.g. Atomic CAS) can be hard
  • Compiler-provided features are totally a thing

Further Reading

The post Things I Learned while Pairing on odo appeared first on Atomic Spin.

Shellshock – CVEs, Patches, Updates, & Other Resources

First announced almost a month ago, Shellshock continues to endanger un-patched web servers and Linux devices. So what is it? How can you tell if you’re vulnerable? And how can it be addressed?

What Is Shellshock?

Shellshock is a vulnerability in the bash software program. Bash is a shell, installed on Linux and other operating systems in the Unix family. A shell is a software component that is deeply integrated into the operating system, which is what makes this vulnerability so insidious.

The Shellshock vulnerability is a bug in Bash’s parser. It was first introduced more than 20 years ago when a feature to allow exporting functions was added. The danger is that an attacker who could control the content of an environment variable could potentially execute arbitrary code on a vulnerable system. Remote code execution (RCE) vulnerabilities (also called “arbitrary code execution” vulnerabilities) are among the most dangerous. Paired with privilege escalation vulnerabilities or poor security practices (e.g. allowing web servers to run as privileged users), unaddressed arbitrary code execution vulnerabilities can lead to the complete takeover of vulnerable systems.

An unfortunately large number of arbitrary code execution vulnerabilities exist in modern software, most of them caused by bugs in code dealing with memory management. (As an aside, this is one reason many systems programmers are excited by new languages like Rust that provide more safety for memory management.) Bugs with memory management (like stack overflows, underflows, or poor bounds-checking) can be exploited by skilled attackers, but usually not by code as trivial as this:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

That’s the proof of concept for CVE-2014-6271, the initial Shellshock vulnerability. It fits in a blog post, and with a bit of experience writing shell scripts, it’s easy to see how it operates and how echo vulnerable could be replaced by a malicious payload. In other words, it’s easily exploited by a large number of potential attackers, even those with few resources at their disposal. The ease with which it can be exploited, combined with the fact that it allows for arbitrary code execution and the pervasiveness of bash on modern Unix systems, means that Shellshock is a Very Big Deal.
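
Running that same proof of concept against a patched bash shows the fix in action — the crafted environment variable is no longer parsed as a function definition, so the trailing command never executes:

```shell
# On a patched bash, only the inner echo runs; "vulnerable" never prints.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
# prints: this is a test
```

On an unpatched bash, the same command prints "vulnerable" first — which makes this one-liner a handy smoke test for CVE-2014-6271 specifically (the later CVEs need their own checks).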

Find more information here:

Checking for Shellshock

Use the bashcheck test script on GitHub.

Known Shellshock Vectors

Shellshocker – a Repository of “Shellshock” Proof of Concept Code

Shell Shock Exploitation Vectors by Daniel Fox Franke

Bugs (CVEs)

There are currently four published CVEs for recently discovered bash vulnerabilities. There are also currently two still embargoed CVEs.

  • CVE-2014-6271 – Original bug reported by Stéphane Chazelas.
  • CVE-2014-7169 – “Incomplete fix for CVE-2014-6271” PoC by Tavis Ormandy (@taviso); posted to Twitter.
  • CVE-2014-7186 – From RedHat: “It was discovered that the fixed-sized redir_stack could be forced to overflow in the Bash parser, resulting in memory corruption, and possibly leading to arbitrary code execution when evaluating untrusted input that would not otherwise be run as code.”
  • CVE-2014-7187 – “An off-by-one error was discovered in the way Bash was handling deeply nested flow control constructs. Depending on the layout of the .bss segment, this could allow arbitrary execution of code that would not otherwise be executed by Bash.”
  • CVE-2014-6277 – Reported by Michal Zalewski (lcamtuf) of Google. The prefix-suffix patch does not fix this underlying issue, but reportedly makes it inaccessible to a remote attacker.
  • CVE-2014-6278 – “Sixth bug,” reportedly very easy to exploit if only the first CVE-2014-6271 patch is applied. Reported by Michal Zalewski (lcamtuf) of Google. The prefix-suffix patch does not fix this underlying issue, but reportedly makes it inaccessible to a remote attacker.

Shellshock Source Patches

Official – “Upstream” / Chet Ramey

Available from the GNU Project Archive for 2.05b through 4.3 (3.2 and 4.3 patches called out below).

Other – Vendors, 3rd-party (an incomplete list)

  • “Florian’s prefix-suffix patch”
    • Accepted upstream as bash32-054, bash43-027, etc.
    • The upstream version may have compatibility issues that still need to be resolved.
  • “Christos’ patch”
    • Disables the feature except with a flag; breaks backwards compatibility.
    • Adopted by both FreeBSD and NetBSD.
  • RedHat patch for CVE-2014-7186 & CVE-2014-7187

Vendor Updates


CentOS (RedHat)


Apple product security was notified of the issue by Chet Ramey days in advance of the CVE-2014-6271 public disclosure. Apple issued a “safe by default” statement:

With OS X, systems are safe by default and not exposed to remote exploits of bash unless users configure advanced UNIX services. We are working to quickly provide a software update for our advanced UNIX users.

Manually applying patches to Bash for OS X – Apple posts source code for open source software they distribute. You can apply official upstream patches to this source. Here is one guide for this approach.

Other Vendor Identifiers

Other Shellshock Mitigations

Firewall Signature Block

One approach to mitigating the issue is to block/drop all traffic that contains the exploit signature '() {', e.g.

iptables -A INPUT -m string --algo bm --hex-string '|28 29 20 7B|' -j DROP

But RedHat notes that this “is a weak workaround, as an attacker could easily send one or two characters per packet, which would avoid matching this signature check. It may, in conjunction with logging, provide an overview of automated attempts at exploiting this vulnerability.”

There are several other OS-specific mitigation techniques listed on that page, too.

Binary Patching(?!)

As a really hacky way to patch for CVE-2014-6271, it may be possible to edit the /bin/bash binary directly to break function importing and prevent the feature from being exploited. An interesting approach, at least.


The best way to prevent a vulnerability like this from being exploited on your systems is to enable automatic security updates.

Updated: The first version of this post indicated that…


The post Shellshock – CVEs, Patches, Updates, & Other Resources appeared first on Atomic Spin.