Take Your Emacs to the Next Level by Writing Custom Packages

I wrote recently about using Emacs as a JavaScript development environment. One of my chief complaints was the inability to easily run JavaScript tests from within Emacs. I practice TDD frequently, and having to context-switch out of the editor I’m using to run tests is a big annoyance for me.

I knew it was possible to do what I wanted from within Emacs, as evidenced by other test runner modes like RSpec-mode. Armed with that knowledge, I decided to go through the process of learning enough Emacs Lisp to make a Mocha test runner. In the process, I learned a lot about developing Emacs packages and ended up with a really useful tool, so I thought I would share some of the things I learned.

There is a lot of content here, and we are going to cover three main topics: using Emacs as a Lisp IDE, writing a simple package, and publishing that package for others to use.

Emacs as an Emacs Lisp IDE

Unsurprisingly, Emacs itself is an excellent development environment for Emacs Lisp code. It can easily be configured to include IDE features for Lisp development, such as autocompletion, popup documentation, integrated debugging, and a REPL.

A few recommendations

Most of these features are built in, although I highly recommend installing the third-party packages company-mode (autocompletion) and Flycheck (real-time syntax checking) if you’re going to do Emacs Lisp development.

I also recommend turning on the built-in eldoc-mode, which will pop up documentation and signatures for various functions and symbols as you write code.

Lastly, I recommend familiarizing yourself with the built-in debugging and evaluation functions for Emacs Lisp. For evaluating code to test it, you can use the built-in lisp-interaction-mode, which the *scratch* buffer has enabled by default. With that mode active, you can paste Emacs Lisp code into the buffer and press C-x C-e after an expression to evaluate it and see the result.

Emacs also comes with Edebug, a built-in stepping debugger for Emacs Lisp code. There are several ways to use it, but I most commonly use the interactive function edebug-defun. When run inside the body of a function, it sets a breakpoint at the start of that function that will be hit the next time you run it.
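For example, here is a tiny function you might evaluate and instrument this way (the function itself is just an illustration):

(defun my-scratch-add (a b)
  "Add A and B."
  (+ a b))

;; Put point after the closing paren above and press C-x C-e to define it.
;; To step through it, place point inside the body, run M-x edebug-defun,
;; and then evaluate a call like this one:
(my-scratch-add 1 2)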

Making a Custom Compilation Mode

Mocha is a CLI tool, and Emacs has a number of built-in utilities for running external CLI programs.

Compilation buffer

The most relevant one for something like a test runner is a compilation buffer. In Emacs, this runs an external CLI process and displays the output in a buffer. This is useful for programs where you care about the output, like a compiler or test runner. It also includes some built-in niceties like the ability to highlight errors and jump to them.

In fact, you don’t even need to write any code to run an external command in a compilation buffer. You can just use the M-x compile command like so:

Running a compilation command

This is a solid approach for a static compilation command like the default make -k. However, it doesn’t scale well to something like a test runner, which needs to do the following:

  1. Run a local script, requiring a consistent working directory or an absolute path (M-x compile will use the directory of the current file as the working directory).
  2. Pass dynamic configuration options, such as the file to test, to the runner.

Custom compilation mode

The solution in Emacs is to programmatically create a custom compilation mode that can take these options and run using an interactive function. This is easy to do. In fact, the compilation mode for Mocha.el is only a couple of lines:


(require 'compile)
...
(defvar node-error-regexp-alist
  `((,node-error-regexp 1 2 3)))
(defun mocha-compilation-filter ()
  "Filter function for compilation output."
  (ansi-color-apply-on-region compilation-filter-start (point-max)))
(define-compilation-mode mocha-compilation-mode "Mocha"
  "Mocha compilation mode."
  (progn
    (set (make-local-variable 'compilation-error-regexp-alist) node-error-regexp-alist)
    (add-hook 'compilation-filter-hook 'mocha-compilation-filter nil t)))

While some of the syntax is a little cryptic (thanks, Lisp!), what it does is very simple. We use the built-in define-compilation-mode macro to define a compilation mode named mocha-compilation-mode, and we do two things with it:

  1. Pass it a regular expression that maps Node.js error output to filenames, line numbers, and column numbers.
  2. Add a processing hook which interprets ANSI escape codes and formats them properly.

The first enables us to quickly jump to the point of failure in a test. The second makes everything look nicer.
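To give a flavor of what that first piece looks like, here is a purely illustrative regexp for Node.js stack-trace lines (the real node-error-regexp in Mocha.el is elided above and may differ):

(defvar my-node-error-regexp
  "at [^(\n]+(\\([^:\n]+\\):\\([0-9]+\\):\\([0-9]+\\))"
  "Illustrative regexp for lines like
`at Object.<anonymous> (/home/me/project/test/foo.js:12:3)'.
Groups 1, 2, and 3 capture the file, line, and column, which is what
the `1 2 3' entry in the error regexp alist refers to.")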

Running Test Commands

Now that we have a custom compilation mode that will nicely display our command output, we need to generate a test command and run it with the custom mode. Doing this will involve several simple steps.

Find project root

Many types of command line utilities need to be run from the project root. Fortunately, project roots are generally easily identified by the presence of a particular file or directory (like a source control directory). Since this is such a common need, Emacs has a built-in function, locate-dominating-file, to recursively search up a directory tree for a particular file name. The Emacs documentation on this function explains how to use it better than I could:

(locate-dominating-file FILE NAME)
Look up the directory hierarchy from FILE for a directory containing NAME. Stop at the first parent directory containing a file NAME, and return the directory. Return nil if not found. Instead of a string, NAME can also be a predicate taking one argument (a directory) and returning a non-nil value if that directory is the one for which we’re looking.
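For a Node.js project, a minimal root finder built on that function might look like this (a sketch that assumes package.json marks the project root; Mocha.el's actual implementation may differ):

(defun my-find-project-root ()
  "Return the nearest ancestor directory containing package.json, or nil."
  (locate-dominating-file default-directory "package.json"))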

Customize configuration

Unlike an actual compilation, which would involve rerunning a single static command, something like a test runner needs to be dynamically configurable. Fortunately, Emacs has Customize, an awesome built-in and extensible configuration interface for packages (and the core editor). Customize exposes several macros which can be used to define custom configuration parameters for a package and display them in an editable GUI.

For example, here are the configurations we expose for our Mocha runner:


(defgroup mocha nil
  "Tools for running mocha tests."
  :group 'tools)
(defcustom mocha-which-node "node"
  "The path to the node executable to run."
  :type 'string
  :group 'mocha)
(defcustom mocha-command "mocha"
  "The path to the mocha command to run."
  :type 'string
  :group 'mocha)
(defcustom mocha-environment-variables nil
  "Environment variables to run mocha with."
  :type 'string
  :group 'mocha)
(defcustom mocha-options "--recursive --reporter dot"
  "Command line options to pass to mocha."
  :type 'string
  :group 'mocha)
(defcustom mocha-debug-port "5858"
  "The port number to debug mocha tests at."
  :type 'string
  :group 'mocha)

And those show up in the customize GUI like so:

GUI interface for configuring our package

Since many of these options make sense to configure on a per-project rather than global basis, Emacs also supports a special file called .dir-locals.el, which can override these settings on a per-directory basis. A typical .dir-locals.el file might look like this:


((nil . (
            (mocha-which-node . "/Users/ajs/.nvm/versions/node/v4.2.2/bin/node")
            (mocha-command . "node_modules/.bin/mocha")
            (mocha-environment-variables . "NODE_ENV=test")
            (mocha-options . "--recursive --reporter dot -t 5000")
            (mocha-project-test-directory . "test")
            )))

The syntax is a little cryptic, but if the file you are visiting lives in the same directory as this file or anywhere below it, Emacs will use these values in place of any global configuration.

Once we have these configuration options defined, it is easy to write a function that will concatenate all the strings together to create our test runner command!


(defun mocha-generate-command (debug &optional mocha-file test)
  "The test command to run.
If DEBUG is true, then make this a debug command.
If MOCHA-FILE is specified run just that file otherwise run
MOCHA-PROJECT-TEST-DIRECTORY.
IF TEST is specified run mocha with a grep for just that test."
  (let ((path (or mocha-file mocha-project-test-directory))
        (target (if test (concat "--grep \"" test "\" ") ""))
        (node-command (concat mocha-which-node (if debug (concat " --debug=" mocha-debug-port) "")))
        (options (concat mocha-options (if debug " -t 21600000"))))
    (concat mocha-environment-variables " "
            node-command " "
            mocha-command " "
            options " "
            target
            path)))

Generating and Running Compile Command

Now that we can configure our test command and find the root of our project, we are ready to run it with the custom compilation mode we made earlier. I’m going to show you the most important code for doing that below, and then break it down and explain the different parts.


(defun mocha-run (&optional mocha-file test)
  "Run mocha in a compilation buffer.
If MOCHA-FILE is specified run just that file otherwise run
MOCHA-PROJECT-TEST-DIRECTORY.
IF TEST is specified run mocha with a grep for just that test."
  (save-some-buffers (not compilation-ask-about-save)
                     (when (boundp 'compilation-save-buffers-predicate)
                       compilation-save-buffers-predicate))
  (when (get-buffer "*mocha tests*")
    (kill-buffer "*mocha tests*"))
  (let ((test-command-to-run (mocha-generate-command nil mocha-file test)) (root-dir (mocha-find-project-root)))
    (with-current-buffer (get-buffer-create "*mocha tests*")
      (setq default-directory root-dir)
      (compilation-start test-command-to-run 'mocha-compilation-mode (lambda (m) (buffer-name))))))

Whew! That is some pretty dense code, so let’s break it down bit by bit.

Check for unsaved buffers

The first thing this function does is check if there are any unsaved buffers open, and then prompt the user to save them. Sounds pretty complex, but since this is such a common operation, Emacs makes it possible with just a couple of lines.


  (save-some-buffers (not compilation-ask-about-save)
                     (when (boundp 'compilation-save-buffers-predicate)
                       compilation-save-buffers-predicate))

Clean up test buffer

Next, we search for the named buffer we use to run tests to see if it is still around from a previous test run. If it is, we kill it so we can get a fresh start.


  (when (get-buffer "*mocha tests*")
    (kill-buffer "*mocha tests*"))

Bind values

After that, the real work begins. We start by binding two values: the actual test command we are going to run and the path to the project root directory. Both values are calculated using the techniques and code we defined above.


  (let ((test-command-to-run (mocha-generate-command nil mocha-file test)) (root-dir (mocha-find-project-root)))

Run test command

Finally, now that we have those two values, we actually run our test command. This is a three-step process of:

  1. Creating and switching to the buffer our tests will run in.
  2. Changing the working directory to our project root.
  3. Running our test command in the buffer with our custom compilation mode.

All of this is done with the last three lines of code:


    (with-current-buffer (get-buffer-create "*mocha tests*")
      (setq default-directory root-dir)
      (compilation-start test-command-to-run 'mocha-compilation-mode (lambda (m) (buffer-name))))))

Expose interface to users

Now that we have the code to run our test commands, we need to expose it to users. For explicit actions like running commands, Emacs uses interactive functions, which can be called interactively by a user via either the M-x interface or a hotkey.

To make a function interactive, you just include the (interactive) special form at the top of the function body like so:


;;;###autoload
(defun mocha-test-file ()
  "Test the current file."
  (interactive)
  (mocha-run (buffer-file-name)))

If you are not exporting the function as part of a mode, it is also customary to add the ;;;###autoload magic comment before it. This generates an autoload entry so that the function is available to other Emacs Lisp code and to users (for example, to bind it to a hotkey) without the whole package having to be loaded first.

Once a function is defined as interactive, it will appear in the M-x interface and can be activated by a user.
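For example, a user could bind the autoloaded commands to hotkeys in their own configuration like so (the key choices are arbitrary, and mocha-test-project is assumed to be another command the package exposes):

(global-set-key (kbd "C-c m f") 'mocha-test-file)
(global-set-key (kbd "C-c m p") 'mocha-test-project)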

Interactive functions for our mode

And there you have it. With only a couple of functions and a big dose of Emacs magic, we have created a highly configurable test runner that is integrated into our development environment.

Distributing on MELPA

Having done all the work to create a custom package, don’t you just want to share it with the world? Fortunately for you, Emacs has a built-in package manager that makes this pretty easy. The package manager is backed by several different repositories, so making your package publicly available is just a matter of getting it into one of these repositories.

The three main package repositories are ELPA, Marmalade, and MELPA. ELPA is the official GNU repository that comes with Emacs, while Marmalade and MELPA are third-party repositories. There are a number of differences between the repositories, the most significant being how they deal with licensing.

ELPA and Marmalade both require that all packages be released under the GPL or a GPL-compatible license. Additionally, ELPA requires you to complete an FSF copyright assignment form. MELPA, on the other hand, has no licensing requirements, although it does have a code review process that all newly added packages must go through to ensure the code is of suitable quality.

Which package repositories you choose to put your code on is up to you, but I personally use MELPA and will talk about the process of getting a package into that repository.

There are two basic steps to getting a project on to MELPA.

Format the package file

First, you need to follow standard Emacs Lisp conventions for formatting a package file, which includes adding a description header and several other sections to the file. The Flycheck package for Emacs is invaluable here, because it will mark all of the required sections that are missing as errors and guide you through adding them. Doing this correctly is important because the Emacs package manager actually parses these sections as metadata to use.
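As a rough sketch, the conventional skeleton of a single-file package looks something like this (the header values below are illustrative rather than copied from Mocha.el):

;;; mocha.el --- Run Mocha tests from Emacs

;; Author: Your Name <you@example.com>
;; Version: 0.1.0
;; Package-Requires: ((emacs "24"))
;; Keywords: tools, javascript
;; URL: https://github.com/scottaj/mocha.el

;;; Commentary:

;; A short description of what the package does and how to use it.

;;; Code:

;; ... the package code goes here ...

(provide 'mocha)
;;; mocha.el ends here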

Add your recipe

Once your code is properly formatted, all you need to do is fork the MELPA project on GitHub and add a recipe for your project. MELPA has docs for configuring more complex projects, but for a simple one-file package, the recipe is really easy.

The recipe for the Mocha runner looks like this:


(mocha
 :repo "scottaj/mocha.el"
 :fetcher github)

That’s it, just a path to the GitHub repository. Once the recipe is added, you can open a pull request against MELPA. Someone will review your package and may suggest code changes. Once those are done, your pull request will be merged and MELPA will start publishing your package in its regular builds. The best part is, since MELPA pulls your code straight from your source repository, you don’t have to do anything to push updates to MELPA. It will just automatically pull down the latest version of your code.

Well, that is my short guide to creating and publishing an Emacs package. You can find the Mocha.el package I used as an example here and my Emacs config here. Drop me a comment if you have any questions!

The post Take Your Emacs to the Next Level by Writing Custom Packages appeared first on Atomic Spin.

Testing with Swift – Approaches & Useful Libraries

I’ve been working on developing an iOS app in Swift. It’s my first experience developing in pure Swift, without any Objective-C. This project has taught me a lot about the current state of testing in Swift, including different testing approaches and best practices. In this post, I’ll share some of my experiences and discuss how we have approached testing different types of Swift code. I’ll also talk about some useful testing libraries.

XCTest

XCTest has been the standard out-of-the-box iOS testing framework for as long as I can remember. This is what you get by default in Swift. Though it has been around for a while, I had not used XCTest much in the past. Instead, I usually opted for Kiwi when working in Objective-C. Unfortunately, Kiwi is not supported in Swift. I wanted to give vanilla XCTest a try, so for the first few months, that’s all I used.

On one hand, I learned that XCTest is a very bare-bones and limited testing framework. This wasn’t a particularly surprising revelation—I think most people find it to be average at best.

On the other hand, I also found that you can test most things with a high success rate using just XCTest. The tests may not be the most beautiful, and they may require a lot of boilerplate, but you are usually able to find some way to write the test you want.

General Test Structure

When writing tests in XCTest, you usually create a new class that extends XCTestCase, and add your tests as methods to your new class. It usually looks like this:


class MyClassTests: XCTestCase {
  func testCaseA() {
    ...
  }
  func testCaseB() {
    ...
  }
}

Synchronous Tests

Synchronous tests are usually straightforward. You instantiate the object you wish to test, call a method, and then use one of the XCTest assertions to confirm the outcome that you expect.


func testAddTwoNumbers() {
  let adder = MyAdderClass()
  let result = adder.add(4, otherNumber: 8)
  XCTAssertEqual(result, 12)
}

Asynchronous Tests

Asynchronous tests are a little more tricky, though you can usually use XCTest’s XCTestExpectation class. As an example, suppose that we have a class that takes a number as input, makes a network call to get a second number, adds them together, and calls a callback with the result. To test something like this, we probably want to be able to stub the network call to return a known value, and assert that the result callback contains an expected value. For the sake of clarity, suppose this class looks like this:


class NetworkAdder {
  func add(userProvidedNumber: Int, callbackWithSum: (Int) -> ()) {
    self.getNumberFromNetwork({ numberFromNetwork in 
      let sum = userProvidedNumber + numberFromNetwork
      callbackWithSum(sum)
    })
  }
  func getNumberFromNetwork(callback: (Int) -> ()) {
    let numberFromNetwork = // some operation that gets a number
    callback(numberFromNetwork)
  }
}

The standard way to test this using XCTest is to subclass NetworkAdder with a class nested inside the test, overriding getNumberFromNetwork to return a fixed value. Then you can make XCTest assertions in the callback you pass as callbackWithSum. However, you need to ensure that the test waits until the assertions are checked before exiting. To do this, you can use the XCTestExpectation class:


class NetworkAdderTests: XCTestCase {
  class MockNetworkAdder: NetworkAdder {
    override func getNumberFromNetwork(callback: (Int) -> ()) {
      callback(5) // force this to return 5 always for the test
    }
  }
  func testAddAsync() {
    let expectation = expectationWithDescription("the add method callback was called with the correct value")
    let networkAdder = MockNetworkAdder()
    networkAdder.add(8, callbackWithSum: { sum in
        XCTAssertEqual(sum, 13)
        expectation.fulfill()
    })
    waitForExpectationsWithTimeout(1, handler: nil)
  }
}

While the above mocking strategy requires a lot of boilerplate, it does allow you to test a wide variety of scenarios. In fact, I have found that most scenarios can be tested with some combination of the above synchronous and asynchronous examples.

Testing View Controllers

View controllers are another central testing concern. They can usually be tested effectively using UITests, which I’ll discuss later. However, sometimes unit tests are more appropriate. I have found that if you want to unit test a view controller, it’s important to instantiate it programmatically from your storyboard. This ensures that all of its outlets are properly instantiated as well. I have had several scenarios where I wanted to test the state of one or more view controller outlets at the end of a test (e.g., the text of a UILabel, the number of rows in a table view, etc.).

You can instantiate your view controllers using the storyboard by doing the following:


let mainStoryboard: UIStoryboard = UIStoryboard(name: "Main", bundle: nil)
let myViewController = mainStoryboard.instantiateViewControllerWithIdentifier("MyViewControllerIdentifier") as! MyViewController
myViewController.loadView()

The above code assumes that you have set the identifier on MyViewController to MyViewControllerIdentifier. I usually run something similar to the above snippet in the before each block for my MyViewController tests.

Upgrading with Quick and Nimble

Although I was able to test most things effectively using XCTest, it didn’t feel great. My tests were often verbose, didn’t read well, and would sometimes require a lot of boilerplate code.

I wanted to try a third-party testing framework to see if it alleviated any of these issues. Quick and Nimble seem to be the de facto third-party Swift testing frameworks.

Quick is a testing framework that provides a testing structure similar to RSpec or Kiwi. Nimble is a library that provides a large set of matchers to use in place of XCTest’s assertions. Quick and Nimble are usually used together, though they don’t absolutely need to be.

The first thing that you get with Quick and Nimble is much better test readability. The above synchronous test written using Quick and Nimble becomes:


describe("The Adder class") {
  it(".add method is able to add two numbers correctly") {
    let adder = MyAdderClass()
    let result = adder.add(4, otherNumber: 8)
    expect(result).to(equal(12))
  }
}

Similarly, the asynchronous test becomes:


describe("NetworkAdder") {
  it(".add works") {
    var result = 0
    let networkAdder = MockNetworkAdder()
    networkAdder.add(8, callbackWithSum: { sum in
      result = sum
    })
    expect(result).toEventually(equal(13))
  }
}

The other really helpful item you get out-of-the-box with Nimble is the ability to expect that things don’t happen in your tests. You can do this via expect(someVariable).toNotEventually(equal(5)). This makes certain asynchronous tests much easier to write compared to using XCTest, such as confirming that functions are never called, exceptions are never thrown, etc.
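As a sketch of what that looks like (Notifier and its onChange API are made up purely for illustration):

describe("Notifier") {
  it("does not notify when nothing changes") {
    var notified = false
    let notifier = Notifier()
    notifier.onChange { notified = true }
    expect(notified).toNotEventually(beTrue())
  }
}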

Overall, I would strongly recommend using Quick and Nimble over plain XCTest. The only potential negative that I’ve observed is that Xcode seems to get confused more easily when running and reporting results for Quick tests. Sometimes the run button doesn’t immediately appear next to your test code, and sometimes it can forget to report results or even skip some tests when running your full test suite. These issues seem to be intermittent and are usually fixed by re-running your test suite. To be fair, I have also observed Xcode exhibit the same behavior for XCTest tests; it just seems to happen less frequently.

UI Testing

The last item I would like to discuss is UITests. In the past, I have used KIF or something similar to write integration-style UI tests.

I initially tried to get KIF working, but experienced some difficulty getting it to build and work in our Swift-only project. As an alternative, I decided to try the UI testing functionality built into Xcode, and I’m glad that I did. I have found the UI tests to be extremely easy to write, and we have been able to cover large portions of our app with them.

UITests work similarly to KIF or other such test frameworks—they instantiate your application and use accessibility labels on your UI controls to press things in your app and navigate around. You can watch these tests run in the simulator, which is pretty neat. While navigating around, you can assert certain things about your app, such as the text on a UILabel, the number of rows in a UITableView, the text they are displaying, etc.

Let’s walk through an example UITest for an app that contains a button that adds rows to a UITableView, and updates a label with the number of rows in the table. The test will press the button three times and check that a row is added for each press, and the label text is updated appropriately. The app looks like this:

The example app, with an Add Row button, a table view, and a row-count label

The UITest code looks like this:


func testAddRowsToTable() {
    let app = XCUIApplication()
    let addRowButton = app.buttons["addRowToTableButton"]
    XCTAssertEqual(app.tables["tableView"].cells.count, 0)
    XCTAssertEqual(app.staticTexts["numTableViewRowsLabel"].label, "The table contains 0 rows")
    
    addRowButton.tap()
    XCTAssertEqual(app.tables["tableView"].cells.count, 1)
    XCTAssertEqual(app.staticTexts["numTableViewRowsLabel"].label, "The table contains 1 row")
    
    addRowButton.tap()
    XCTAssertEqual(app.tables["tableView"].cells.count, 2)
    XCTAssertEqual(app.staticTexts["numTableViewRowsLabel"].label, "The table contains 2 rows")
    
    addRowButton.tap()
    XCTAssertEqual(app.tables["tableView"].cells.count, 3)
    XCTAssertEqual(app.staticTexts["numTableViewRowsLabel"].label, "The table contains 3 rows")
}

To set this test up correctly, the Add Row To Table button accessibility label was set to addRowToTableButton, the UITableView’s accessibility identifier was set to tableView, and the bottom UILabel’s accessibility identifier was set to numTableViewRowsLabel. You can watch the test run in the simulator—it looks like this:

The UI test running in the simulator

We found it helpful to create a UITest base class where we could set up an application and do some other configuration work. This class looks like this:


import XCTest
class BaseUITest: XCTestCase {
    var app: XCUIApplication?
    override func setUp() {
        super.setUp()
        app = XCUIApplication()
        app!.launchArguments.append("UITesting")
        continueAfterFailure = false // set to true to continue after failure
        app!.launch()
        sleep(1) // prevents occasional test failures
    }
}

There are a few things to note in the above code sample. The first and most obvious is the sleep(1) before returning from the setUp function. I noticed that some of our tests would fail without this—presumably because the test would start running before the app was up and running in the simulator.

Additionally, we are passing a UITesting string value into our app launch arguments. Occasionally, you will need to mock things out for your UI tests (e.g., network calls, file IO, etc.). The best way we found to do this is to set a test-specific launch argument that we can check for in our code, letting us inject UITest classes instead of production classes when wiring up our app dependencies on startup. This was mostly inspired by this Stack Overflow post.
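The app-side check is only a few lines wherever your dependencies get wired up. A minimal sketch (where exactly this lives and which classes get swapped will vary by app):

let arguments = NSProcessInfo.processInfo().arguments
if arguments.contains("UITesting") {
    // Running under UI tests: wire up stubbed network/file IO classes here.
} else {
    // Normal launch: wire up the production classes here.
}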

Recording Tests

A great, and often overlooked, UI testing feature is the ability to record interaction sequences with your app and have Xcode write your UITest for you. This won’t add any assertions into your test, but it will create a sequence of UI interactions that you can use as a starting point. You can start a recording by pressing the red record button at the bottom of the screen. This will launch your app and allow you to start using it. Each screen interaction is immediately translated to a line of code that appears in your test function as you go. When you’re finished, you simply press stop recording.

UI Testing Gotchas

Aside from that one-second delay that we added to the beginning of our UI tests to prevent periodic test failures, there are a few other tricky items to be aware of.

If your test involves typing text into a text field, you need to tap on the text field first to bring it into focus (similar to what you would do if you were using the app). Additionally, you need to ensure that Connect Hardware Keyboard is unchecked on the simulator. This allows the onscreen keyboard to be used when typing into text fields, instead of your laptop keyboard.

The simulator's Connect Hardware Keyboard setting

When attempting to programmatically access elements from your app, the Xcode accessibility UI shows how each control is categorized. You can modify this by selecting different categories. So, for example, to access a UIButton via app.buttons["myButtonIdentifier"], your control element needs to be categorized as a Button.

Accessibility element categories in Xcode

Most of the time, you won’t re-categorize elements, but this screen allows you to look up how to access each control in your app.

Another thing to be careful with is understanding where your app starts when it is launched. If you have an app that requires user login, and it either starts on the login screen if the user is not logged in, or takes the user to the app if they are, your UI tests need to be aware of this. In our tests, we first check to see if the user is logged in and either continue running or log them in/out as desired.

The post Testing with Swift – Approaches & Useful Libraries appeared first on Atomic Spin.

Bye-Bye, Sinon – Hello, testdouble

I’ve been working in JavaScript-land for the last little while, writing lots of Node.js code. Since I practice TDD, I’m always trying to keep my eye on the best new ways to test JavaScript code.

In unit testing, mock objects are a key tool for isolating your tests to single components. For a long time, the status quo for JavaScript test mocking has been Sinon.JS. It is ubiquitous, full-featured, and easy to use. It is not perfect, however. The API is confusing enough that even after using it for around three years, I still routinely look up how to do common things in the documentation. It also does not support some workflows very well, such as injecting an entire mock object into a test.

I recently attended a talk presenting a new alternative mocking library for JavaScript:  testdouble.js. I really liked the API and design decisions presented in the talk, so I have started integrating testdouble.js into my current project (which is already using Sinon.JS). After a few weeks of using both tools side-by-side, I thought I would share my thoughts and observations.

testdouble.js is OO, while sinon.js is function-based

The first thing that struck me about testdouble.js is that it was clearly designed to fit into an object-oriented JavaScript codebase. The td.object call makes it trivial to generate entire mock objects from a constructor or object literal. In contrast, Sinon.JS is entirely focused on individual functions, and it requires you to mock out each function on an object individually.
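As a quick sketch of what that looks like (the UserStore constructor here is made up for illustration):

var td = require('testdouble');

// A constructor with a couple of prototype methods...
function UserStore() {}
UserStore.prototype.find = function(id) { /* talks to the database */ };
UserStore.prototype.save = function(user) { /* talks to the database */ };

// ...becomes an object whose methods are all test doubles:
var fakeStore = td.object(UserStore);
td.when(fakeStore.find(42)).thenReturn({id: 42, name: 'Ada'});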

One consequence of this is that testdouble.js explicitly doesn’t support partial mocks. This is an interesting contrast to Sinon.JS, which only supports partially mocking objects. In general this isn’t a problem, and I agree with the justifications for avoiding them. However, sometimes you need them, particularly when adding unit tests to a codebase that didn’t have them before (and where the refactoring necessary to avoid them isn’t feasible in the short term). When I’ve run into these situations in my current codebase, I’ve just continued to use Sinon.JS, but I’d like to find a better long-term solution.

testdouble.js has a nicer API

This is one area where testdouble.js wins hands-down. Sinon.JS has an API that builds up mocks, spies, and stubs via method chaining. For example:

sinon.stub(myObject, 'myMethod').withArgs("a", 3).returns("aaa")

On the other hand, testdouble.js has a much more natural API where you simply call mocked functions just like you expect them to be in your test:

td.when(myObject.myMethod("a", 3)).thenReturn("aaa");

testdouble.js can inject dependencies via require

When writing tests for Node.js, testdouble.js also provides the td.replace API. This lets you inject test doubles as dependencies directly through the Node.js API. I personally prefer to use constructors to inject dependencies manually in JavaScript–a method which eliminates the need for this most of the time. However, it is a great tool for working with code that wasn’t designed that way, allowing you to inject mock objects without having to drastically refactor your system.
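Here is a sketch of how that works in a Node.js test (the module paths and the sendWelcome function are illustrative):

var td = require('testdouble');

// Replace the dependency *before* requiring the module under test.
var fakeMailer = td.replace('../lib/mailer');
var signup = require('../lib/signup');

signup.register('ada@example.com');
td.verify(fakeMailer.sendWelcome('ada@example.com'));

// Restore the real modules when the test is done.
td.reset();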

The other great thing is that since the td.object API accepts constructors, testdouble.js also makes it much easier to do my preferred constructor dependency injection. I simply wrap all of my dependencies in a td.object call and pass them in as complete mock objects that I can control.

I have, unfortunately, had some weird dependency resolution issues when using the td.replace API. While I have not root-caused it yet, it generally seems to come up when using instanceof calls in my tests. I have been able to work around it by moving the troublesome import to before my first call to td.replace.

testdouble.js can’t replace some Sinon.JS things.

In particular, Sinon.JS has some incredibly useful utilities for mocking out async timers and AJAX calls, allowing you to test those things totally synchronously. testdouble.js has no replacement. Fortunately, there is absolutely no issue with using just those parts of Sinon.JS alongside testdouble.js. The Sinon.JS maintainers have even been nice enough to split those two pieces into their own separate modules, which you can get without the rest of the library.

Conclusion

Sinon.JS is great and has been the status quo for a long time for a reason. However, testdouble.js feels more polished and natural, and it fits way better into an OO design and workflow. Going forward, I plan to use testdouble.js as much as possible.

The post Bye-Bye, Sinon – Hello, testdouble appeared first on Atomic Spin.

Property-Based Testing for Serialized Data Structures

When I first heard about property-based testing, my instincts told me it was too academic to be of practical use. But, as is often the case in the art of software, my gut reaction failed to appreciate the value of something new.

I originally felt the same way about functional programming, so I guess I can’t trust my gut very much when it comes to new concepts. To quote Nick Hornby, “Between you and me, I have come to the conclusion that my guts have s— for brains.” I’ve recently stumbled into some great ways to get real-world value out of property-based testing.

What is Property-Based Testing?

Before I dive into how I’m using property-based testing, let’s review what this type of testing is. Scott Vokes covered the topic pretty thoroughly here. In my own words, I would say that property-based testing is about asserting important invariants in the way your code works—properties that do not change, regardless of the input. For example, the property might be “the function should never throw an exception,” or “the output string should always be valid JSON.” This contrasts with standard unit tests, where you generally assert that a specific set of inputs should produce a specific output.

Property-based testing asserts that, given arbitrary inputs, a function always behaves a certain way. It turns the computer loose to generate random inputs for your function, in search of a set of inputs which violate your assertion. If it finds one, it will “shrink” that set of inputs down to the “simplest” version of the inputs which will break the assertion.

For example, you may have code that takes an array and fails if the array is of length greater than three. The initial random case might have an array with five elements, but the shrinker will reduce that to an array with four elements, because that is “simpler” than a five-element array.

When You Can Use Property Tests

You can test a function with property-based tests if the following conditions hold:

  1. You can generate and shrink arbitrary values for all inputs.
  2. You can make assertions about the output of your function, given arbitrary inputs. Because your inputs are arbitrary, your outputs will be somewhat arbitrary as well. So you need to have a means of asserting valid outputs.

Verifying Serialization

My current project involves sending many messages between microservices using message queues. We need to support partial or rolling deployments. Consequently, we need to update the payload of new messages while continuing to support the old versions of the payload. Say, for example, we have a data transfer object like so:


[DataContract]
public class SerializeMe
{
    [DataMember]
    public int Foo { get; set; }
    [DataMember]
    public string Bar { get; set; }
}

which serializes to JSON on the wire. Now, say we want to update SerializeMe with a new field:


    [DataMember]
    public bool Baz { get; set; }

It’s possible that when we update the code, some messages in the old format will still be in transit (or serialized on disk or in a database). Will the new version of SerializeMe successfully deserialize objects that were serialized in the old version? Property-based testing makes proving this much easier.

We can use property testing because:

  1. We can generate—and shrink—random values for the old version of SerializeMe.
  2. We can serialize the random old SerializeMe, and deserialize it as the updated SerializeMe. If this process always succeeds, we can say with confidence that the two versions are compatible.

We can also add checks to make sure values are transferred appropriately (e.g. if a field that was previously an int becomes a string). Admittedly, you can try to eyeball the two different versions of the structure, think about the properties of your serializer, and try to determine whether or not the change is safe. But why not throw some computational resources at the problem as well?
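As a sketch of what such a round-trip property can look like with FsCheck's C# API and its built-in generators (here SerializeMeV2 stands in for the updated version of the class, and DataContractJsonSerializer is just one possible serializer):

using System.IO;
using System.Runtime.Serialization.Json;
using FsCheck;

public static class CompatibilityProperties
{
    // Old-format payloads must deserialize cleanly as the new format.
    public static void OldMessagesStillDeserialize()
    {
        Prop.ForAll<int, string>((foo, bar) =>
        {
            var oldMessage = new SerializeMe { Foo = foo, Bar = bar };

            byte[] wireBytes;
            using (var stream = new MemoryStream())
            {
                new DataContractJsonSerializer(typeof(SerializeMe)).WriteObject(stream, oldMessage);
                wireBytes = stream.ToArray();
            }

            using (var stream = new MemoryStream(wireBytes))
            {
                var asNew = (SerializeMeV2)new DataContractJsonSerializer(typeof(SerializeMeV2)).ReadObject(stream);
                return asNew.Foo == foo && asNew.Bar == bar;
            }
        }).QuickCheckThrowOnFailure();
    }
}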

Generating and Shrinking DataContract

Using property-based testing in this way gives me much more confidence about rolling updates and message version compatibility. This is a huge win for safe iterative development in a microservice architecture–or really any message-based architecture.

I used the FsCheck library to do property testing in my project. Even though the library is written in F#, it has a really good C# API. I wrote a generator and a shrinker for classes tagged with [DataContract], using the following method:

For the generator:

  1. Use reflection to iterate over the class properties, looking for properties annotated with [DataMember].
  2. Get the random generator for each property based on the property type. This allowed me to leverage the built-in primitive and collection generators. If the property is an object type, generate it recursively using this same process.
  3. These random values form an IEnumerable<Generator<object>>. The Sequence function lets me transform this into a Generator<IEnumerable<object>>. Then I can use Select to turn it into a Generator of the original class by assigning those values to its properties with reflection.

For the shrinker:

  1. Iterate over the class properties one at a time, generating shrunken versions of the value for that property.
  2. Pair each shrunken value for each property with the original values for the other properties.
  3. Use those different sets of shrunken values to create shrunken objects.

This process is similar to what the FsCheck library does for F# records. It’s worth noting that I also added some logic to randomly generate null values, and to consider null a possible shrunken value for an object.

Considerations

I’m also interested in using property-based testing in conjunction with tools like Chaos Monkey. Property testing is a great fit whenever your code has meaningful invariants, but it is hard to cover all the edge cases. I can think of few situations that align better with those needs than microservices with many moving parts. By no means is property-based testing the hammer to solve all problems, but it is the perfect solution for certain types of problems. What other “one-off” techniques have you found invaluable in the right situation?

The post Property-Based Testing for Serialized Data Structures appeared first on Atomic Spin.

Continuous Validation for Mobile User Interfaces in iOS

Laying out the user interface of a mobile app (or any app for that matter) is not a simple process. As visual designs get more complex and the number of devices and screen sizes grow, the work of a mobile developer grows more challenging.

Many developers choose to leverage open source tools to help with layout or image rendering. Because those libraries evolve over time, updating to new versions can cause unexpected changes to the look and behavior of the app. That puts the burden on developers to continuously monitor their work for problems that may emerge in the UI.

As any good developer knows, automation of mundane or time-consuming tasks is critical to a productive workflow. We often write automated unit tests and system tests to prove that our app functions the way we expect, but how can we automate testing that our user interface looks the way we expect? That is a much greater challenge! Fortunately, we came up with a system that ended up being a great help.

Background

We were building an iOS app for a client and wanted to continuously validate the UI. One of the biggest challenges of doing this manually over a long period is that it’s easy to skip over views that are really simple or have been finished for a long time. I often find myself jumping to the more detailed views or most recent work and neglecting the rest. And I have a feeling I’m not the only one!

One member of our team came up with a process that we found to be extremely helpful. Rather than attempting to automate the process of analyzing the positions, sizes, colors, etc., of every artifact on every view, we decided to automate taking a screenshot of every view and composing them into a single document. We found that having one document that contains a picture of every screen made it much easier to stay focused on thoroughly inspecting every view.

The following image shows part of the generated document that we ended up with for our app.

Part of the generated document of app screenshots

Mobile UI Testing Method

Now that I have you convinced of the value of this process, I will let you in on how we accomplished it! We started out with an open source library aptly named snapshot, one component of a larger suite of deployment tools for iOS called fastlane. I believe the original purpose of snapshot was to help developers automate the process of capturing preview images of apps in preparation for App Store submission (which can be a rigorous process in and of itself when your app supports multiple languages). We used it for much more than that.

Setting up snapshot is actually really easy. The GitHub Readme has plenty of detailed information about how to install and configure it, so rather than repeat what’s there, I’ll just highlight a few important pieces.

A “snapfile” is snapshot’s configuration mechanism, implemented using a Ruby DSL. We used the snapfile to specify which simulators we wanted the task to run on and which version(s) of iOS to use. For example, the following snippet indicates that screenshots should be captured on the iPhone 6, iPhone 6 Plus, and iPhone 4s, using iOS 8.4. When snapshot is invoked, it will automatically build your application, start the simulator, load your app, and capture the desired screenshots for each configuration specified.

ios_version('8.4')
devices([
  "iPhone 6",
  "iPhone 6 Plus",
  "iPhone 4s"
])

How do you control which screens are captured? With JavaScript. Apple provides a nice feature called UI Automation that uses JavaScript functions to allow remote access to user interface elements currently presented in your app running in the simulator. This allows you to write scripts that tap buttons, enter text into forms, or do whatever is necessary to get your app into the desired state. When you want the tool to take a screenshot, call captureLocalizedScreenshot('screenshot-name-1').

The snapshot tool searches your project directory for a snapshot.js file that contains these JavaScript commands and executes them on the simulator configuration(s) that you specified in the snapfile. Below are a few examples of these commands:

var target = UIATarget.localTarget();
var mainWindow = target.frontMostApp().mainWindow();
mainWindow.navigationBar().buttons()["Help Button"].tap();
target.frontMostApp().keyboard().typeString("password123");
captureLocalizedScreenshot("sign-in-screen-1");
mainWindow.navigationBar().leftButton().tap();

When writing the script to progress through your app, you can usually reference UI elements by their accessibility labels. If that doesn’t work, you can refer to elements using index notation, as long as you know where they are in the element tree. If you get stuck, Apple’s Instruments suite has a tool called Automation that will connect to your app running in the simulator. Use it to click through your app, and it will output JavaScript that you can use in your snapshot.js file. The snapshot GitHub page demonstrates how to do this.
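For example, index notation in a snapshot.js script might look like this (a hypothetical fragment, not taken from the app above):

var target = UIATarget.localTarget();
var mainWindow = target.frontMostApp().mainWindow();

// No accessibility label needed: grab the first table and tap its third cell.
mainWindow.tableViews()[0].cells()[2].tap();
target.delay(1); // give the UI a moment to settle
captureLocalizedScreenshot("table-detail-1");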

The Automation tool in Apple's Instruments suite

Takeaway

Throughout the course of our project, we found that capturing screenshots added value in more ways than we originally expected. Running the script not only allowed us to discover UI mistakes before they were delivered to the customer, but it also helped us catch several functional bugs in the app that we had missed in our system and manual testing. The generated document was also a great artifact to deliver to the customer along with our weekly releases.

If you’ve found another way to do mobile user interface testing, I would love to hear your experiences. I am especially interested in hearing if anyone has accomplished this on the Android platform.

The post Continuous Validation for Mobile User Interfaces in iOS appeared first on Atomic Spin.

Who Pays for Bugs?


It’s inevitable that bugs will be created during custom software development projects. And it’s not unusual for clients to have the mindset that the development team should “pay” for the bugs.

But punishing the team for bugs is penny wise and pound foolish. As a client, you should have the responsibility to pay for bugs, and the team should have the responsibility to learn from bugs.

Mismatch of Mindset

Whether working with an employee or a vendor team, it can be easy to slip into the mindset that a sprint delivery or release point should come with a guarantee against bugs. As a client, you have the expectation of getting what you paid for. I believe confusion comes from a mismatch on what is actually being paid for.

Custom software development should not be confused with buying a finished product like off-the-shelf software.

When you buy a finished product, the majority of risk is driven out of the product. The seller can tune pricing to cover cost and leave margin for future development, maintenance, and bug fixes. You should expect a mostly bug-free experience, as the product should have been created using quality practices and gone through testing phases. Bugs should be fixed for free as part of the purchase price.

When you are paying for custom software, you should have the mindset of paying a team to design and develop the very first instance of a product. The effort is ripe with risk, including:

  • Building the right product.
  • Funding and schedule risks.
  • Third-party integrations.
  • Technical approaches that need to be proved out.

You are paying your team to help you mitigate these risks, not to financially own them. Each sprint or release is not a guarantee of a finished, bug-free product.

The team should do their best to build a high-quality product. Quality is created in the short term through design and testing practices. Quality is created in the long term through insights gained from exploratory testing, pilot releases, and public releases. Treat bugs as inevitable discoveries that occur at a longer cycle time in the development process.

Caveat: Have a Team You Trust

You should only be willing to pay for bugs if the development team is pro-actively focused on quality. The development team should be:

  • Disambiguating features and building the right product.
  • Creating automated test suites that have high code coverage.
  • Using continuous integration to frequently build the application and run test suites.
  • Complementing automated testing with manual, exploratory testing.
  • Automating tests for any bugs discovered to ensure bugs are fixed and remain fixed.

If your team is working in good faith and employing practices that promote quality, view bugs as necessary product development work that happens downstream of initial execution. And if your team isn’t using practices that promote quality, fire your team.

Penny Wise, Pound Foolish

When clients believe they are getting an implicit warranty with each sprint or release, I’ve heard the following requests:

  • As a vendor, you should pay for fixing the bugs and keep the project on schedule.
  • As an employee, you should work extra hours (for free) and keep the project on schedule.

Both of these requests are a form of punishment that puts you at odds with your team.

The team uses best practices and their professional judgement to balance delivery and quality. Penalizing the team for bugs perverts their professional judgement because it over-stresses quality.

If you penalize a team for bugs, they will move more slowly. It’s likely the team will continue to brainstorm and develop automated tests beyond the team’s general sense of diminishing returns. Manual testers will also work beyond their general sense of diminishing returns.

Punishing the team for bugs also creates an incentive for the team to not identify bugs during development. Bugs and quality issues may be silently deferred and surface later in the development schedule when it is harder to manage the situation.

Embrace Bugs and Embrace Risk

If you want to build a quality product, embrace the fact that there will be bugs. Encourage the team to identify bugs and get them into a backlog. Get aligned on when to estimate bugs and prioritize them against new feature development.

Feel good about funding a team of smart designers and developers to do their best at predictably building your product. Feel good about the learning cycles that occur and how quickly your team integrates new insights.

Be aware that the financial risk of developing custom software is in your hands. A warranty doesn’t come from your development team, but it’s what you will offer to your customers. Taking on the financial risk is why you have such a potentially significant financial upside.

The post Who Pays for Bugs? appeared first on Atomic Spin.

Speeding Up Your JavaScript Test Suite

Having fast tests is important. Slow running tests slow down development, especially if you’re practicing TDD. If tests are too slow to run, some developers may avoid running them altogether. Slow tests will also slow down CI builds, increasing the length of your feedback loop.

While it takes more development time, doing maintenance on your test suite to ensure it continues to run quickly is an important task that any significant project should prioritize.

I recently went through the practice of speeding up a large JavaScript test suite for the project I am working on. I thought I would share some of the culprits I found that were slowing down my tests, and how to fix them.

Tooling

If you are going hunting for performance problems in your code, it can help to know where to start looking. Getting a report of your slowest tests, or tests that run longer than some threshold, can be a great way to get starting metrics.

Some test runners have this feature built in; for example, the Karma JavaScript runner has a reportSlowerThan option which does just this. If it’s not built into your test runner, it can be pretty easy to hack in manually by starting a timer at the start of each test in a before block, and then reading it in an after block.
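Here is a rough sketch of that manual approach (the 100ms threshold is arbitrary, and the same hooks exist in both Mocha and Jasmine):

var testStart;

beforeEach(function() {
  testStart = Date.now();
});

afterEach(function() {
  var elapsed = Date.now() - testStart;
  if (elapsed > 100) {
    console.warn('Slow test: took ' + elapsed + 'ms');
  }
});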

1. Audit your beforeEach code.

This may seem like a no-brainer, but it’s important to remember that any code in a beforeEach block runs once per test, so its cost grows linearly with the number of tests. If you have some type of global before and after blocks, this can be particularly troublesome. Keep an eye out for slow-running computations, loops, and DOM manipulations. While I am specifically talking about speeding up JavaScript tests here, this is a good tip for speeding up any type of test suite.

One of the first problems I found in my test suite was some code in a global beforeEach block that was inserting some static content into the DOM before every single test. Removing that so that it only happened once reduced my test execution time by more than half. It was the single biggest improvement I was able to make.

Don’t assume that your own code couldn’t possibly have something so simple slowing it down. It went unnoticed on our project by a whole team of engineers for well over a year. It took someone actually wondering why the tests were so slow to notice it.

2. Avoid rendering things onto the page.

As evidenced by my previous example, DOM manipulations and insertions can be expensive. Most JavaScript code can be tested without rendering anything onto the actual page, and doing so will make your tests run much faster. It can be tempting to just test JavaScript code by drawing views or components or whatever and interacting with them, but this does not scale well. Make sure you only touch the DOM when you really need to, and that you limit the scope to individual tests that need it. If you need to do it for a large number of tests, consider whether you can just render something once, and then reuse it across tests.
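For example, a fixture that several tests only read from can be rendered once per suite instead of once per test. A sketch in Mocha syntax (Jasmine's equivalent hook is beforeAll):

describe('widget rendering', function() {
  var container;

  before(function() {
    container = document.createElement('div');
    container.innerHTML = '<div class="widget"></div>';
    document.body.appendChild(container);
  });

  after(function() {
    document.body.removeChild(container);
  });

  // Individual tests query `container` without re-rendering anything.
});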

3. Use sinon or another tool to test async things synchronously.

Asynchronous behavior is a reality of JavaScript programming. It can also seriously slow down your test suite if you are having tests wait for timeouts or AJAX calls to complete.

Most JavaScript test frameworks give you the ability to run tests asynchronously and wait for async behavior in your code to happen. For example, in Jasmine and Mocha you can do:

Code



myAsyncFn = function() {
  setTimeout(function() {
    counter++;
  }, 500);
};

Test



it('increments the counter after a wait', function(done) {
  counter = 0;
  myAsyncFn();
  expect(counter).toBe(0);
  setTimeout(function() {
    expect(counter).toBe(1);
    done();
  }, 500);
});

Unfortunately, this test is slow. It will sit and do nothing for 500ms while waiting for the timeout to trigger, meaning this whole test will take more than half a second to complete at the absolute minimum.

Fortunately, there is a better way. For the purposes of testing, we can turn this into synchronous code using Sinon.JS. Sinon has lots of tools to turn async behavior like timeouts, intervals, and AJAX into controllable synchronous code. We can easily use it to rewrite the above test to be synchronous, and much faster as a result:


it('increments the counter after a wait', function() {
  var clock = sinon.useFakeTimers();
  counter = 0;
  myAsyncFn();
  expect(counter).toBe(0);
  clock.tick(500);
  expect(counter).toBe(1);
  clock.restore();
});

Our test suite had several complicated async tests that took several seconds to run when written in the former style, plus a host of simpler ones that were still much slower than they needed to be. Switching them to use Sinon’s fake timers shaved around 20% off our total test execution time, and also made the tests easier to read and debug.

Tests Need Maintenance Too

Keeping your test suite healthy and fast is an important maintenance task for any project. What do you do to keep your tests running quickly?

The post Speeding Up Your JavaScript Test Suite appeared first on Atomic Spin.

autoclave: A Pressure Cooker for Programs


I’ve been working on a multi-threaded, distributed system, and there have been some bugs that only manifested when things lined up exactly right. While the project’s deterministic elements have lots of unit and system tests, every once in a while mysterious failures have appeared.

On top of the other tests, the code is full of asserts for numerous boundary conditions, and stress tests intentionally overload the system in several ways, trying to get those asserts to trigger. While the system is generally stable, every once in a while something has still happened due to unusual thread interleaving or network timing, and these issues can be extraordinarily difficult to reproduce.

Streamlining Info Gathering

Early efforts to fix rare bugs like these tend to focus on gathering info: narrowing the focus to the code paths involved and finding patterns in the interactions that cause them. This process can take quite a while, so I wrote a tool to streamline things. I named it “autoclave” because it applies prolonged heat and pressure to eliminate bugs: a pressure cooker for programs.

autoclave runs programs over and over, to the point of failure. It handles logging and log rotation and can call a failure handler script to send a push notification, analyze a core dump, or freeze a process and attach a debugger. When investigating issues that may take 10 hours to reproduce, it can be tremendously helpful to know there will be logs and an active debugger connection to hit the ground running the next morning, or to get an e-mail with logs attached when a failure is found.

For example, we had a network request with a race condition in the error handling. When the request succeeded, everything was fine, but if it failed with a specific kind of recoverable error, there was a narrow window where one thread could potentially free() memory associated with the request before the other had finished using it to handle the error. This led to a crash, or was caught by an assert shortly after. By stressing it with autoclave, we were able to get logs from a couple failure instances, spot a common pattern, and narrow the root cause to that error handling path.

Basic Usage

$ autoclave  stress_test_program_name

To log standard out & standard error and just print run counts, use:

$ autoclave -s  stress_test_program_name

To rotate log files, use -c (here, keeping 10):

$ autoclave -s -c 10  stress_test_program_name

To specify a handler script/program for failure events, use -x:

$ autoclave -s -c 10 -x gdb_it  stress_test_program_name

While the program doesn’t need any special changes to work with autoclave, it can be useful to test for a failure condition and then spin in an infinite loop, so that running autoclave with a timeout becomes an easy way to attach a debugger right when things go wrong. For example, to freeze the process and attach a debugger if the stress test doesn’t complete within 60 seconds:

// in the program, add:
if (some_condition) { for (;;) {} }  // infinite loop => timeout
$ autoclave -s -c 10 -x gdb_it -t 60 stress_test_program_name

(Attaching a debugger on timeout is also useful for investigating deadlocks.)

While automated testing is usually a better use of time than debugging issues after the fact, some bugs are inevitable, particularly in multi-threaded systems. It’s good to be prepared when surprises occur. Before a bug can be captured in a failing test, it’s often necessary to gather more raw data, and tools can help automate this as well.


An example of why open source software is fantastic

I was writing some basic RSpec tests for a Puppet module this morning, methodically adding fixtures and Hiera data items to get the module to compile under the spec tests.

Then I hit this error:

Failures:

1) profile_puppet::master supported operating systems profile_puppet::master class without any parameters on redhat 6.4 should compile into a catalogue without dependency cycles
Failure/Error: it { should compile.with_all_deps }
NoMethodError:
undefined method `groups' for nil:NilClass
# ./spec/classes/init_spec.rb:36:in `block (5 levels) in '

Uh oh, that doesn’t look good. I did what I always do in such circumstances and googled the error message: puppet NoMethodError: undefined method `groups' for nil:NilClass. The first hit was https://tickets.puppetlabs.com/browse/PUP-1547 which describes my situation completely (I am testing for RHEL 6.4 on OSX).

What’s even better is that the ticket was updated 3 days ago with a pull request that fixes the issue. I applied the change locally, it worked perfectly, and I was able to complete my task.

Try doing that with proprietary software.

Test(ing) Kitchen: Assembling the Ingredients for Your Next Usability Test


“…no one is born a great cook, one learns by doing.”
– Julia Child, My Life in France

Or, to paraphrase the late, great Julia — no one is born a great designer; one learns by doing (and testing). Cooks test their recipes with an audience, and the same principle applies to new products and services. Usability testing is necessary to prove a product’s viability and to make sure the proposed design will meet (and exceed) users’ expectations.

Think of usability testing as the test kitchen for successful software design. You put together some great ingredients, follow the necessary steps carefully, and – voila! – a greater design emerges.

Usability Testing Ingredients

Your list of ingredients could range from artifacts (paper or digital) to principles regarding design strategy that you want to keep in mind along the way. As with, say, a bread recipe, the steps will vary project by project, but the general format is something that can be carried through all of your testing adventures and continually refined.

Validation: Putting Your Ingredients to Work

Define your goals up front: what do you want to come out of the interviews knowing more about? This knowledge should be substantial enough to equip you to make decisions about how and where to iterate on the design. When revisiting your initial objectives, you should feel confident that you know how to address each issue.

When constructing…

  • Think about the environment of your end user. What is the product experience going to feel like? Simulate your testing environment and tailor your artifacts to the end product as closely as possible.
  • Vary the user profile (analytical vs. creative), and sprinkle a few power users into each engagement. They are your forward-thinkers and can provide unpredictable usage patterns.
  • Find your go-to person on the inside — someone who is familiar with various users, departments, and settings. You need this person to schedule your interviews and provide some important up-front product information to prep users ahead of time.

When conducting…

  • Whenever possible, test in the environment of your end user. Eliminate any need for human intervention, which threatens the purity of your test results.
  • Use your test plan as your guide. It should act as your script, providing a consistent template for each interview. It’s important to stay on track with your original plan, but also necessary to maintain some flexibility for elements of surprise.
  • Note every aspect of the user’s behavior. Are users interacting with things in the room, drawing on a whiteboard, adjusting lighting, closing doors? Are patterns beginning to emerge as the day goes on?
  • When taking notes, quote your user verbatim, if that is the best way to describe their reaction. How successfully you relay information back to internal teams will be critical in spreading the valuable insights from usability testing.
  • If only for a couple of sessions, get your client to sit in. Their participation will be critical to the success of the project. As an observer, they can absorb information without having to engage with the user — although being able to chime in with questions is another key benefit. If necessary, capture video of the engagement to share with remote team members.

When consolidating…

  • Assemble your artifacts and present your data in a format that is tailored to your client’s needs. It should be easily consumable and reveal in one glance the most important discoveries found in testing.
  • Be sure to classify issues related to utility vs. usability (see below). Avoid getting into design details that can be sorted out later. Just agree on the top issues, recommendations, and tactical areas in the design that need to be addressed.
  • Ask your client to review these issues with their peers. Do they experience this issue too? Have they heard users reporting it? Are there any developed work-arounds created to address this problem? Peers will provide valuable input, especially if they’re in touch with end users.

Utility vs. Usability

Closing Thoughts

New insights gained through testing will inform how you enhance your recipe (or design). Maintaining consensus not only builds confidence and strengthens relationships; it also creates a platform for more well-informed (smarter) decisions. More minds are better than one, and having more minds witness users’ reactions to the design gives everyone context for why decisions were made along the way.

Don’t assume what users will need help with when interacting with your product. The interface should guide them to their destination with few assumptions made in advance. For more on why usability testing matters, read my latest post, Why Usability Testing Matters: A Newbie’s Perspective.

In the meantime, happy cooking!

