Take Your Emacs to the Next Level by Writing Custom Packages

I wrote recently about using Emacs as a JavaScript development environment. One of my chief complaints was the inability to easily run JavaScript tests from within Emacs. I practice TDD frequently, and having to context-switch out of the editor I’m using to run tests is a big annoyance for me.

I knew it was possible to do what I wanted from within Emacs, as evidenced by other test runner modes like RSpec-mode. Armed with that knowledge, I decided to go through the process of learning enough Emacs Lisp to make a Mocha test runner. In the process, I learned a lot about developing Emacs packages and ended up with a really useful tool, so I thought I would share some of the things I learned.

There is a lot of content here, and we are going to cover three main topics: using Emacs as a Lisp IDE, writing a simple package, and publishing that package for others to use.

Emacs as an Emacs Lisp IDE

Unsurprisingly, Emacs itself is an excellent development environment for Emacs Lisp code. It can easily be configured to include IDE features for Lisp development, such as autocompletion, popup documentation, integrated debugging, and a REPL.

A few recommendations

Most of these features are built in, although I highly recommend installing the third-party packages company-mode (autocompletion) and Flycheck (real-time syntax checking) if you’re going to do Emacs Lisp development.

I also recommend turning on the built-in eldoc-mode, which will pop up documentation and signatures for various functions and symbols as you write code.

Lastly, I recommend familiarizing yourself with the built-in debugging and evaluation functions for Emacs Lisp. For evaluating code to test it, you can use the built-in lisp-interaction-mode, which the *scratch* buffer usually has enabled by default. In that mode, you can type or paste Emacs Lisp code and press C-x C-e after an expression to evaluate it and see the result.

Emacs also comes with Edebug, a built-in stepping debugger for Emacs Lisp code. There are several ways to use it, but I most commonly use the interactive function edebug-defun. When run inside the body of a function, it sets a breakpoint at the start of that function that will be hit the next time you run it.

Making a Custom Compilation Mode

Mocha is a CLI tool, and Emacs has a number of built-in utilities for running external CLI programs.

Compilation buffer

The most relevant one for something like a test runner is a compilation buffer. In Emacs, this runs an external CLI process and displays the output in a buffer. This is useful for programs where you care about the output, like a compiler or test runner. It also includes some built-in niceties like the ability to highlight errors and jump to them.

In fact, you don’t even need to write any code to run an external command in a compilation buffer. You can just use the M-x compile command like so:

Running a compilation command

This is a solid approach for a static compilation command like the default make -k. However, it doesn’t scale well to something like a test runner, which needs to do the following:

  1. Run a local script, requiring a consistent working directory or an absolute path (M-x compile will use the directory of the current file as the working directory).
  2. Pass dynamic configuration options, like the file to test, to the runner.

Custom compilation mode

The solution in Emacs is to programmatically create a custom compilation mode that can take these options and run using an interactive function. This is easy to do. In fact, the compilation mode for Mocha.el is only a couple of lines:


(require 'compile)
...
;; `node-error-regexp' (defined in the code elided above) matches Node.js
;; error output; the 1 2 3 are the match groups for the file name, line
;; number, and column number.
(defvar node-error-regexp-alist
  `((,node-error-regexp 1 2 3)))

(defun mocha-compilation-filter ()
  "Filter function for compilation output."
  ;; Interpret ANSI color codes in the newly inserted output.
  (ansi-color-apply-on-region compilation-filter-start (point-max)))

(define-compilation-mode mocha-compilation-mode "Mocha"
  "Mocha compilation mode."
  (progn
    (set (make-local-variable 'compilation-error-regexp-alist) node-error-regexp-alist)
    (add-hook 'compilation-filter-hook 'mocha-compilation-filter nil t)))

While some of the syntax is a little cryptic (thanks, Lisp!), what it does is very simple. We use the built-in define-compilation-mode macro to define a compilation mode named mocha-compilation-mode, and we do two things with it:

  1. Pass it a regular expression that maps Node.js error output to filenames, line numbers, and column numbers.
  2. Add a processing hook which interprets ANSI escape codes and formats them properly.

The first enables us to quickly jump to the point of failure in a test. The second makes everything look nicer.

Running Test Commands

Now that we have a custom compilation mode that will nicely display our command output, we need to generate a test command and run it with the custom mode. Doing this will involve several simple steps.

Find project root

Many types of command line utilities need to be run from the project root. Fortunately, project roots are generally easily identified by the presence of a particular file or directory (like a source control directory). Since this is such a common need, Emacs has a built-in function, locate-dominating-file, to recursively search up a directory tree for a particular file name. The Emacs documentation on this function explains how to use it better than I could:

(locate-dominating-file FILE NAME)
Look up the directory hierarchy from FILE for a directory containing NAME. Stop at the first parent directory containing a file NAME, and return the directory. Return nil if not found. Instead of a string, NAME can also be a predicate taking one argument (a directory) and returning a non-nil value if that directory is the one for which we’re looking.
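
For a Mocha runner, finding the project root might look something like this (a sketch assuming the root is marked by package.json; the real mocha.el may locate it differently):

;; Hypothetical helper: walk up from the current buffer's directory and
;; return the first ancestor directory that contains a package.json.
(defun mocha-find-project-root ()
  "Return the directory containing package.json, or nil if none is found."
  (locate-dominating-file default-directory "package.json"))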

Customize configuration

Unlike an actual compilation, which would involve rerunning a single static command, something like a test runner needs to be dynamically configurable. Fortunately, Emacs has Customize, an awesome built-in and extensible configuration interface for packages (and the core editor). Customize exposes several macros which can be used to define custom configuration parameters for a package and display them in an editable GUI.

For example, here are the configurations we expose for our Mocha runner:


(defgroup mocha nil
  "Tools for running mocha tests."
  :group 'tools)
(defcustom mocha-which-node "node"
  "The path to the node executable to run."
  :type 'string
  :group 'mocha)
(defcustom mocha-command "mocha"
  "The path to the mocha command to run."
  :type 'string
  :group 'mocha)
(defcustom mocha-environment-variables nil
  "Environment variables to run mocha with."
  :type 'string
  :group 'mocha)
(defcustom mocha-options "--recursive --reporter dot"
  "Command line options to pass to mocha."
  :type 'string
  :group 'mocha)
(defcustom mocha-debug-port "5858"
  "The port number to debug mocha tests at."
  :type 'string
  :group 'mocha)

And those show up in the customize GUI like so:

GUI interface for configuring our package

Since many of these options make sense to configure on a per-project rather than global basis, Emacs also supports a special file called .dir-locals.el, which can override these settings on a per-directory basis. A typical .dir-locals.el file might look like this:


((nil . ((mocha-which-node . "/Users/ajs/.nvm/versions/node/v4.2.2/bin/node")
         (mocha-command . "node_modules/.bin/mocha")
         (mocha-environment-variables . "NODE_ENV=test")
         (mocha-options . "--recursive --reporter dot -t 5000")
         (mocha-project-test-directory . "test"))))

The syntax is a little cryptic, but if your Emacs working directory is at or below the directory containing this file, these options will override any global configuration.

Once we have these configuration options defined, it is easy to write a function that will concatenate all the strings together to create our test runner command!


(defun mocha-generate-command (debug &optional mocha-file test)
  "The test command to run.
If DEBUG is true, then make this a debug command.
If MOCHA-FILE is specified run just that file otherwise run
MOCHA-PROJECT-TEST-DIRECTORY.
If TEST is specified run mocha with a grep for just that test."
  (let ((path (or mocha-file mocha-project-test-directory))
        (target (if test (concat "--grep \"" test "\" ") ""))
        (node-command (concat mocha-which-node (if debug (concat " --debug=" mocha-debug-port) "")))
        (options (concat mocha-options (if debug " -t 21600000"))))
    (concat mocha-environment-variables " "
            node-command " "
            mocha-command " "
            options " "
            target
            path)))

Generating and Running the Compile Command

Now that we can configure our test command and find the root of our project, we are ready to run it with the custom compilation mode we made earlier. I’m going to show you the most important code for doing that below, and then break it down and explain the different parts.


(defun mocha-run (&optional mocha-file test)
  "Run mocha in a compilation buffer.
If MOCHA-FILE is specified run just that file otherwise run
MOCHA-PROJECT-TEST-DIRECTORY.
If TEST is specified run mocha with a grep for just that test."
  (save-some-buffers (not compilation-ask-about-save)
                     (when (boundp 'compilation-save-buffers-predicate)
                       compilation-save-buffers-predicate))
  (when (get-buffer "*mocha tests*")
    (kill-buffer "*mocha tests*"))
  (let ((test-command-to-run (mocha-generate-command nil mocha-file test))
        (root-dir (mocha-find-project-root)))
    (with-current-buffer (get-buffer-create "*mocha tests*")
      (setq default-directory root-dir)
      (compilation-start test-command-to-run 'mocha-compilation-mode
                         (lambda (m) (buffer-name))))))

Whew! That is some pretty dense code, so let’s break it down bit by bit.

Check for unsaved buffers

The first thing this function does is check if there are any unsaved buffers open, and then prompt the user to save them. Sounds pretty complex, but since this is such a common operation, Emacs makes it possible with just a couple of lines.


  (save-some-buffers (not compilation-ask-about-save)
                     (when (boundp 'compilation-save-buffers-predicate)
                       compilation-save-buffers-predicate))

Clean up test buffer

Next, we search for the named buffer we use to run tests to see if it is still around from a previous test run. If it is, we kill it so we can get a fresh start.


  (when (get-buffer "*mocha tests*")
    (kill-buffer "*mocha tests*"))

Bind values

After that, the real work begins. We start by binding two values: the actual test command we are going to run and the path to the project root directory. Both values are calculated using the techniques and code we defined above.


  (let ((test-command-to-run (mocha-generate-command nil mocha-file test))
        (root-dir (mocha-find-project-root)))

Run test command

Finally, now that we have those two values, we actually run our test command. This is a three-step process of:

  1. Creating and switching to the buffer our tests will run in.
  2. Changing the working directory to our project root.
  3. Running our test command in the buffer with our custom compilation mode.

All of this is done with the last three lines of code:


    (with-current-buffer (get-buffer-create "*mocha tests*")
      (setq default-directory root-dir)
      (compilation-start test-command-to-run 'mocha-compilation-mode (lambda (m) (buffer-name))))))

Expose interface to users

Now that we have the code to run our test commands, we need to expose it to users. For explicit actions like running commands, Emacs uses interactive functions, which can be called interactively by a user via either the M-x interface or a hotkey.

To make a function interactive, you just include the (interactive) special form at the top of the function body like so:


;;;###autoload
(defun mocha-test-file ()
  "Test the current file."
  (interactive)
  (mocha-run (buffer-file-name)))

If you are not exporting the function as part of a mode, it is also customary to add the ;;;###autoload magic comment before the function. It lets the package manager generate autoloads so that other Emacs Lisp code referencing your package can find and use the function (for example, to bind it to a hotkey) before your package is loaded.

Once a function is defined as interactive, it will appear in the M-x interface and can be activated by a user.
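
A user can then bind the command to a convenient key in their init file, something like this (a sketch; the key chosen is just an example):

;; Run the current file's tests with C-c m f.
(global-set-key (kbd "C-c m f") #'mocha-test-file)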

Interactive functions for our mode

And there you have it. With only a couple of functions and a big dose of Emacs magic, we have created a highly configurable test runner that is integrated into our development environment.

Distributing on MELPA

Having done all the work to create a custom package, don’t you just want to share it with the world? Fortunately for you, Emacs has a built-in package manager that makes this pretty easy. The package manager is backed by several different repositories, so making your package publicly available is just a matter of getting it into one of these repositories.

The three main package repositories are ELPA, Marmalade, and MELPA. ELPA is the official GNU repository that comes with Emacs, while Marmalade and MELPA are third-party repositories. There are a number of differences between the repositories, the most significant being how they deal with licensing.

ELPA and Marmalade both require that all packages be GPL-licensed or GPL-compatible. Additionally, ELPA requires you to complete an FSF copyright assignment form. MELPA, on the other hand, has no licensing requirements, although it does have a code review process that all newly added packages must go through to ensure the code is of suitable quality.

Which package repositories you choose to put your code on is up to you, but I personally use MELPA and will talk about the process of getting a package into that repository.

There are two basic steps to getting a project onto MELPA.

Format the package file

First, you need to follow standard Emacs Lisp conventions for formatting a package file, which includes adding a description header and several other sections to the file. The Flycheck package for Emacs is invaluable here, because it will mark all of the required sections that are missing as errors and guide you through adding them. Doing this correctly is important because the Emacs package manager actually parses these sections as metadata to use.
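
The conventional layout is roughly the following (a sketch; the header fields here are illustrative, not mocha.el's actual metadata):

;;; mocha.el --- Run Mocha tests from Emacs

;; Author: Your Name <you@example.com>
;; Version: 0.1.0
;; Keywords: tools, javascript
;; URL: https://github.com/scottaj/mocha.el

;;; Commentary:

;; A short description of what the package does and how to use it.

;;; Code:

;; ...package code goes here...

(provide 'mocha)
;;; mocha.el ends here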

Add your recipe

Once your code is properly formatted, all you need to do is fork the MELPA project on GitHub and add a recipe for your project. MELPA has docs for configuring more complex projects, but for a simple one-file package, the recipe is really easy.

The recipe for the Mocha runner looks like this:


(mocha
 :repo "scottaj/mocha.el"
 :fetcher github)

That’s it, just a path to the GitHub repository. Once the recipe is added, you can open a pull request against MELPA. Someone will review your package and may suggest code changes. Once those are done, your pull request will be merged and MELPA will start publishing your package in its regular builds. The best part is, since MELPA pulls your code straight from your source repository, you don’t have to do anything to push updates to MELPA. It will just automatically pull down the latest version of your code.

Well, that is my short guide to creating and publishing an Emacs package. You can find the Mocha.el package I used as an example here and my Emacs config here. Drop me a comment if you have any questions!

The post Take Your Emacs to the Next Level by Writing Custom Packages appeared first on Atomic Spin.

Rust Sysroots for Easier Embedded Projects

Rust now has usable support for custom sysroots. This makes it much easier to use Cargo to build your embedded projects. You should even be able to use stable Rust once 1.9 is out, but you will have to use a nightly for now.

Build libcore for your target

Use the instructions in my last blog post here to build a libcore for the target platform.

Make a sysroot directory structure

The directory naming and structure in your sysroot is important. Put the libraries you built in the previous step in:

my-sysroot/lib/rustlib/$target_name/lib/libcore.rlib

Replace $target_name with your desired target, in my case this is thumbv7em-none-eabi so I have:

my-sysroot/lib/rustlib/thumbv7em-none-eabi/lib/libcore.rlib

(you can name my-sysroot whatever you want)
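
Putting that together looks something like this (assuming you built libcore into a libcore-thumbv7em directory, as in the previous post):

# Create the sysroot layout and copy in the libcore you built earlier.
mkdir -p my-sysroot/lib/rustlib/thumbv7em-none-eabi/lib
cp libcore-thumbv7em/libcore.rlib my-sysroot/lib/rustlib/thumbv7em-none-eabi/lib/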

Set RUSTFLAGS to specify the desired target and sysroot

You have two options for specifying the sysroot. You can either use the RUSTFLAGS environment variable:

RUSTFLAGS="--sysroot=my-sysroot" cargo build --target thumbv7em-none-eabi

or you can specify these in the .cargo/config file:

[build]
target = "thumbv7em-none-eabi"
rustflags = ["--sysroot", "my-sysroot"]

Build

and then just do:

cargo build

This builds your Cargo library against your desired target and sysroot.

I usually have a few more flags. Something like:

[build]
target = "thumbv7em-none-eabi"
rustflags = ["-C", "opt-level=2", "-Z", "no-landing-pads", "--emit", "obj", "--sysroot", "my-sysroot"]

and then build with a:

cargo build --verbose

The --emit obj flag causes the build to emit an object file as an artifact, which you can link into an existing C embedded project. The --verbose flag shows the arguments passed to rustc under the hood, which is useful for verifying that Cargo is calling rustc as you expect.

The post Rust Sysroots for Easier Embedded Projects appeared first on Atomic Spin.

Using Rust 1.8 Stable for Building Embedded Firmware

A lot of things have changed since I wrote my last blog post on using Rust to build embedded firmware.

Since Rust 1.6 was released, libcore is now stable, and no_std is now a stable feature. This means we can now build Rust libraries for our embedded firmware using the official stable version of the compiler!

There are a few caveats. We will still need a nightly to build libcore and other components in our cross-compiling environment (see more info here), but once we have that all set up, we can do the bulk of our development in Rust libraries using the stable version.

The basic process is as follows:

1. Install multirust.

Install multirust by following the instructions here. Multirust makes it much easier to fetch, install, and switch between particular versions of Rust.

More detailed information is in the GitHub README here.

2. Install latest stable version of Rust.


multirust default stable

3. Install a nightly version of Rust.

You’ll want the nightly that corresponds with the stable version that you have. You can find the associated dates on the Rust release notes page. So for Rust 1.8, we’ll need the 2016-04-14 nightly.


multirust override nightly-2016-04-14

Having a version of the nightly that matches our stable will let us build core libraries with the nightly that can be used by stable.

4. Clone the Rust repo.

Check out the commit hash that matches your nightly version of Rust. The commit for mine is 2b6020723, so we can do this:


git clone git@github.com:rust-lang/rust.git
cd rust
git checkout 2b6020723115e77ebe94f228c0c9b977b9199c6e
cd ..

5. Use the nightly build to build libcore for your target platform.

You will need a target specification file for the target triple you want to build for; put it in the root of your project. I’m using this one, for an ARM Cortex-M4.

The hardest part is getting the data-layout right; I used a method mentioned here to generate mine. More thorough documentation on target specification files is here.
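
For a rough idea of the file's shape, a Cortex-M4 target specification looks something like this (a sketch; the fields, and especially the data-layout string, are illustrative, so generate your own as described above):

{
    "arch": "arm",
    "cpu": "cortex-m4",
    "llvm-target": "thumbv7em-none-eabi",
    "os": "none",
    "target-endian": "little",
    "target-pointer-width": "32",
    "executables": true,
    "data-layout": "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
}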

Then to build libcore, you can do this:


mkdir libcore-thumbv7em
rustc -C opt-level=2 -Z no-landing-pads --target thumbv7em-none-eabi -g rust/src/libcore/lib.rs --out-dir libcore-thumbv7em

6. Build binaries linking against your new libcore.

Switch back to stable Rust, and build your libraries.


multirust override stable
rustc --crate-type lib -C opt-level=2 -Z no-landing-pads --target thumbv7em-none-eabi -g --emit obj -L libcore-thumbv7em -o my_rust_file.o my_rust_file.rs

And there you have it! This will give you an object file that you can link into your final binary.

Some more general documentation on cross-compiling with Rust can be found here.

The post Using Rust 1.8 Stable for Building Embedded Firmware appeared first on Atomic Spin.


A Design-First Approach to Mobile App Architecture for iOS

At one time or another, how many of you have thought, “I’m glad I get to do mobile development because that means I don’t have to deal with CSS”? I’m not going to lie. I’ve had that thought more than once. And, while it’s true that by working in the mobile space, we have escaped the misery of CSS, the reality is that we still suffer from many of the same problems that CSS was designed to resolve. We still have to decide: How do we separate aesthetics from behavior and function in our applications?

Why is This a Problem?

Before I get into describing my technique, let me first say that I think Apple is doing the development community a disservice by pushing people to use Interface Builder for styling views. I know it’s possible to only use IB for positioning and constraining view elements, and then apply style programmatically. However, I don’t believe the vast majority of people are doing that.

My biggest problem with Interface Builder is that it does not facilitate reuse. Colors, font sizes, margins, etc., are essentially hard-coded within the GUI over and over again, for every view. What happens if your app gets redesigned and you need to change colors or margins on every view? You have to go through every storyboard, sifting through panels of property lists to find every occurrence of the old value and change it to the new. Yuck.

Design-First Mobile App Architecture

So, let’s say you bite the bullet and decide to programmatically style your UI. You still have to decide how to organize your “style” code. There are a lot of ways to go about it, and I don’t think there is a silver bullet solution that will work perfectly for every app or every team.

The most important thing is to think through how you want to handle visual design from the very beginning of the development process. Come up with a plan from the beginning, and stick to it.

I call my approach Design-First because I think design should be treated as a first-class citizen in the application architecture—not an afterthought.

Let’s start by taking a look at some code that styles a button and a text field in a view.


// In DashboardView.swift
let emailAddress = UITextField()
emailAddress.text = "Enter email"
// UIColor components are in the 0.0-1.0 range, so divide by 255.
emailAddress.textColor = UIColor(red: 74.0/255.0, green: 63.0/255.0, blue: 99.0/255.0, alpha: 1.0)
emailAddress.backgroundColor = UIColor(red: 94.0/255.0, green: 23.0/255.0, blue: 19.0/255.0, alpha: 1.0)
self.addSubview(emailAddress)

let submitButton = UIButton(type: .RoundedRect)
submitButton.setTitle("Submit", forState: .Normal)
submitButton.backgroundColor = UIColor.whiteColor()
submitButton.setTitleColor(UIColor.blueColor(), forState: .Normal)
self.addSubview(submitButton)

This code creates the text field and the button and applies colors and fonts to them. It’s really no better than the Interface Builder approach because the colors and fonts are hard-coded right there in the view. Any other view needing the same colors or font properties would have to redefine them. Doing an application-wide redesign would be painful.

To improve on this, we could create a common style class (or master style) that contains definitions of the colors used in our app. This gives us the ability to reuse definitions throughout the app.


class MasterStyle {
    static let dashboardSubmitButtonTextColor = UIColor(red: 74.0/255.0, green: 63.0/255.0, blue: 99.0/255.0, alpha: 1.0)
    static let dashboardSubmitButtonBackground = UIColor(red: 94.0/255.0, green: 23.0/255.0, blue: 19.0/255.0, alpha: 1.0)
    static let dashboardEmailAddressTextFieldTextColor = UIColor.blueColor()
    static let dashboardEmailAddressTextFieldBackgroundColor = UIColor.whiteColor()
}
// DashboardView.swift can now look like:
let submitButton = UIButton(type: .RoundedRect)
submitButton.setTitle("Submit", forState: .Normal)
submitButton.backgroundColor = MasterStyle.dashboardSubmitButtonBackground
submitButton.setTitleColor(MasterStyle.dashboardSubmitButtonTextColor, forState: .Normal)
self.addSubview(submitButton)

This approach is decent, but the problem is that the master style class quickly gets bloated with lots of definitions. The organization of that file gets difficult to maintain, and adding new styles for specific views can become burdensome.

My recommended approach takes this one step further by breaking the style into view-specific style classes. These style classes provide aliases for properties or functions that are defined in a master style class as a means of organization. All colors should still be defined in a master style class, thus allowing them to be modified globally. The master style can also contain functions that return groups of style properties for common UI elements. For example, it might have a function called commonButtonStyle() that returns a collection of text color, background color, border color, etc.

The code below shows a more complete example of how this might look in an application. I created two classes, ButtonStyle and TextFieldStyle, that hold onto a background color and a text color. My MasterStyle has static functions that return instances of these classes with the colors populated.


class MasterStyle {
    static let primaryBackgroundColor = UIColor.whiteColor()
    // Buttons
    static let primaryButtonBackgroundColor = UIColor.clearColor()
    static let primaryButtonTextColor = UIColor.blueColor()
    static func commonButtonStyle() -> ButtonStyle {
        return ButtonStyle(
            backgroundColor: primaryButtonBackgroundColor,
            textColor: primaryButtonTextColor
        )
    }
    // Text Fields
    static let primaryTextFieldBackgroundColor = UIColor.clearColor()
    static let primaryTextFieldTextColor = UIColor.blueColor()
    static func commonTextFieldStyle() -> TextFieldStyle {
        return TextFieldStyle(
            backgroundColor: primaryTextFieldBackgroundColor,
            textColor: primaryTextFieldTextColor
        )
    }
}
extension UIColor {
    static func designFirstLightGray() -> UIColor {
        return UIColor.grayColor().colorWithAlphaComponent(0.5)
    }
}
class ButtonStyle {
    var backgroundColor: UIColor
    var textColor: UIColor
    init(backgroundColor: UIColor, textColor: UIColor) {
        self.backgroundColor = backgroundColor
        self.textColor = textColor
    }
}
class TextFieldStyle  {
    var backgroundColor: UIColor
    var textColor: UIColor
    init(backgroundColor: UIColor, textColor: UIColor) {
        self.backgroundColor = backgroundColor
        self.textColor = textColor
    }
    func customize(block: (TextFieldStyle) -> ()) -> TextFieldStyle {
        block(self)
        return self
    }
}

The DashboardStyle class can now access the common style groups in the MasterStyle and alias them for view-specific elements. I also included an example of how the DashboardStyle could apply a customization to the common style if that was needed.

The great thing about this approach is that you never have to search to find the code that is styling a specific UI element on a view. It’s always in a deterministic, easy-to-find place.


class DashboardStyle {
    static let backgroundColor = MasterStyle.primaryBackgroundColor
    static let submitButton = MasterStyle.commonButtonStyle()
    static var emailAddressTextField: TextFieldStyle = MasterStyle.commonTextFieldStyle().customize {
        $0.backgroundColor = UIColor.designFirstLightGray()
    }
}

Finally, the view code only contains references to its own style class–not the master style.


// In DashboardView.swift
let emailAddress = UITextField()
emailAddress.text = "Enter email"
emailAddress.textColor = DashboardStyle.emailAddressTextField.textColor
emailAddress.backgroundColor = DashboardStyle.emailAddressTextField.backgroundColor
self.addSubview(emailAddress)
let submitButton = UIButton(type: .RoundedRect)
submitButton.setTitle("Submit", forState: .Normal)
submitButton.backgroundColor = DashboardStyle.submitButton.backgroundColor
submitButton.setTitleColor(DashboardStyle.submitButton.textColor, forState: .Normal)
self.addSubview(submitButton)

The graphic below shows how this idea could be extended to include a group-specific class that contains style definitions for views that have something in common. For example, if your app has an on-boarding process with a different look than the rest of your app, you might create an OnboardingStyle class.

Design First Architecture
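
Such a group-specific class might look something like this (a sketch; the onboarding styles shown are hypothetical):

class OnboardingStyle {
    // Onboarding screens share the master palette but use their own backdrop.
    static let backgroundColor = UIColor.designFirstLightGray()
    static let nextButton = MasterStyle.commonButtonStyle()
    static let emailField = MasterStyle.commonTextFieldStyle()
}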

Takeaway

This approach might not be favorable to all developers. I realize that everyone has their own programming style and ways of doing things, and I don’t expect it to be a one-size-fits-all approach.  The main point I hope you leave with is that structuring the “visual design” code in your app is important. It may seem like a tiny portion of your development effort, but it certainly adds up to a lot of code over time, and it can get out of hand very easily.

If you’re starting a new application, talk through your approach with your team and settle on something that fits your needs. If you work with a designer, bring them into the conversation as well. Try to think of ways to cleanly encapsulate the artifacts provided by your designer directly into your app architecture. Believe me, you will be happy in the long run!

What About Android?

I specifically targeted iOS in this post because UI programming in Android is so vastly different. Android seems to have taken a step in the right direction with regard to UI programming, although I think its Theme system is a nightmare to reason about.

See the Source

The complete source code for the snippets used in this post is available on GitHub.

The post A Design-First Approach to Mobile App Architecture for iOS appeared first on Atomic Spin.

A Swift Architecture for Managing State: Revised

In my previous blog post, I wrote about an approach for managing state in a Swift app. Following that post, some changes were made to the Swift language that deprecated some convenient syntax my approach relied on. After some thinking, and with a better understanding of Swift’s approach to mutability, I’ve slightly revised this architecture to reduce a lot of friction.

Problems We Encountered

The architecture I described in my previous post was focused on centralizing the application state, and then using functions that interpret state or derive new states as semantic abstractions.

Functions that interpret, or project, information from the state had a type signature of State -> anything, while mutation functions had a type signature of State -> State.

Often, you’d need to provide some additional information. For example, if you had a mutation that changed the user’s name, you would need to know the new name. The pattern we used to solve this was to create lambdas that closed over the necessary information. The function might look like String -> State -> State. While this is easy and normal in more functional languages, Swift’s syntax fought pretty hard against us, and we ended up with lots of code like this:


func setFullName(name: String) -> State -> State {
  return { s in
    var s = s
    s.fullName = name
    return s
  }
}

Namespacing

In Swift, it’s possible to declare methods on a struct, just as you would define methods on a class. At first, I saw no advantage to doing so. Worse, doing so actually results in functions with different type signatures.

For example, the following method, when referenced as State.username, has a type of State -> Void -> String, instead of State -> String:


extension State {
  func username() -> String {
    return fullName
  }
}

You also need to be careful not to clash names with a field of the struct.

Over time, we accumulated a large number of projection and mutation functions, and we wanted a way to namespace them. Given that Swift namespaces at the library or application binary level, defining these functions as methods is our only option to gain some kind of namespacing within our application.

As a bonus, methods have extremely convenient access to the struct’s fields, as you can see above. This really pays off, however, when it comes to mutation functions.

Embracing Mutating Funcs

If a tree mutates inside a function, does anyone care? Swift believes the answer is no, as long as that tree lives in a var on the stack.

When I first read of Swift’s mutating funcs, I decided to stay far away. Mutable state seems directly at odds with the controlled architecture I was trying to create. I’ve since gained a deeper understanding of what they are, and I am more comfortable with them.

Structs, enums, strings, and Swift’s (not Objective-C’s) arrays and dictionaries are all value types in the language. A value type is copied, rather than referenced, each time it is assigned to a variable (or constant) or passed into a function. This makes it much harder for side effects to leak across your application. A mutating func can only mutate what’s inside the var where a struct is stored, and mutating what’s inside that var cannot affect anything else.

Essentially, you can start to think of a mutating func as Swift’s syntactical shorthand for deriving new values and automatically assigning them to a var.

Let’s look at the above example, setFullName, refactored into a mutating func:


extension State {
  mutating func setFullName(name: String) {
    fullName = name
  }
}

This is a lot more succinct, and less code means fewer bugs. The type of this function, if you refer to it as State.setFullName, is: (inout State) -> String -> Void. The inout specifier means that the function will accept a pointer to a var holding a State, and it will be able to mutate it directly.

While inconvenient, I can work with this. It’s not difficult to transform a function of that type into a function of State -> State, and then plug it back into the rest of my code. The benefit is a lot less typing.

Function Gymnastics

When you define a method on a struct, its type is: TheStruct -> (YourArgs) -> YourReturn.

When you define a mutating method, its type is: (inout TheStruct) -> (YourArgs) -> Void.

These can easily be transformed into our projection and mutation types. For example, here is the implementation for a mutation function that takes one argument:


func lift<E,A>(mut: (inout E) -> A -> Void, _ a: A) -> E -> E {
  return { e in
    var e = e
    mut(&e)(a)
    return e
  }
}

Which we could use like:


appState.change(lift(State.setFullName, "Radiohead"))

Because this is a bit ugly, I also defined an operator for it:


appState.change(State.setFullName <+ "Radiohead")
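
The operator itself is just a thin wrapper over lift. A minimal Swift 2 sketch (with an arbitrarily chosen precedence) might look like:

// Hypothetical definition of <+ in terms of lift.
infix operator <+ { associativity left precedence 100 }

func <+ <E, A>(mut: (inout E) -> A -> Void, a: A) -> E -> E {
  return lift(mut, a)
}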

The most unfortunate aspect of this is that it needs to be reimplemented, rather repetitively, for each arity that you need to support. Still, it’s better to have a couple of implementations of lift so that you can save a bunch of code in the rest of the application.
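
For example, the two-argument version follows the same pattern (a sketch):

// Lift a two-argument mutating method into a State -> State function.
func lift<E, A, B>(mut: (inout E) -> (A, B) -> Void, _ a: A, _ b: B) -> E -> E {
  return { e in
    var e = e
    mut(&e)(a, b)
    return e
  }
}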

Wrapping Up

I tried to fight the design of the Swift language, and I had a bad time. Fortunately, if you concede a little bit of purity and make a bit more machinery, you can work with the language and still have nice things.

If you’re curious, I’ve put the Ref class on GitHub.

The post A Swift Architecture for Managing State: Revised appeared first on Atomic Spin.

Estimating Project Completion with Burn Charts


Micah has written before about using burn charts to track team progress. One of his tips is to use a project’s projected finish date to help the client understand what changes can or must be made to the scope and budget. I’ve long been curious about calculating when a project will finish, so after reading Micah’s post, I did some research.

First, I collected sprint data from several of Atomic’s past projects, ranging from short engagements to multi-year marathons. I wanted to study what formula best predicted a team’s final velocity based solely on past sprints.

Estimating a project’s completion date can involve a lot of variables: teams expand or contract, individual team members are swapped out, schedules shift, tools improve, and technical debt accrues. Often, these variables can’t be known, measured, or appropriately accounted for, especially in advance. If you try to factor all of these variables into your prediction, you’ll end up fitting past data very well, but you probably won’t make a very good future estimate.

The minimum possible data for estimating a finish date are the historical sprint velocities and the project scope—two pieces of data that can be very easily and accurately measured.

Unfortunately, as I looked at Atomic’s past projects, I realized that scope is anything but stable. It can vary wildly through the lifetime of a project. Sometimes it starts by climbing steeply, while the client and team figure out what should be built. Sometimes it fluctuates as clients change their mind. Sometimes it drops suddenly or grows steadily, depending more on the client’s budget than the initial work agreement.

Because project scopes can be so unpredictable, we can’t rely on scope to consistently estimate when a team will finish their work. Instead, I had to ignore scope and see how well I could predict a team’s final average velocity from sprint velocities alone.

There are several common formulas, but it turns out that the best way to forecast the final average from any given sprint is simply to average all the sprint velocities that came before it. The average of the initial sprints won’t give very accurate predictions, but over time, the estimate stabilizes.
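
In code, the forecast is nothing fancier than a running mean (a sketch in JavaScript; the names are hypothetical):

// Predict the final average velocity from the sprints completed so far.
// Assumes at least one completed sprint.
function predictFinalVelocity(sprintVelocities) {
  const total = sprintVelocities.reduce((sum, v) => sum + v, 0);
  return total / sprintVelocities.length;
}

predictFinalVelocity([12, 8, 15, 10]); // => 11.25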

There’s a temptation to use more sophisticated formulas than mere averaging. Some teams average only the last three sprints, or they weight the most recent sprint more heavily than the ones before it. Based on Atomic’s past projects, these strategies yield very erratic estimates. They tend to follow sprint-to-sprint variability and never settle onto a particular finish date. If you want to show clients recent trends in sprint velocities without the noise of individual sprints, you can use a rolling average, but I wouldn’t use a rolling average to predict when the team will finish.

As part of this research, I made a basic burn chart generator. I’ve seen lots of different ways to make burn charts, from using spreadsheets like Excel and Numbers to manually editing vector graphics in programs like Sketch.

If you’d rather not fiddle with spreadsheets or math or manual graphing, you’re welcome to use my generator. Just drag and drop a CSV of your sprint data and export an image. Hope that helps with your estimating.

The post Estimating Project Completion with Burn Charts appeared first on Atomic Spin.

A Trip to SyntaxCon 2016

Recently, I spoke at the first SyntaxCon in Charleston, South Carolina. It was a great time, and it’s exciting to see their nascent tech community just starting to grow. There was a ton of good content packed into the two intense days, and I wanted to share some highlights.

“Thinking in SQL”

I can’t help plugging my own talk. It was great—you should’ve been there!

My main idea was that we tend to think about an SQL join as a “has many” relationship (or a “belongs to” or an equivalent simple dependency). However, you can get more mileage out of thinking about JOINs in a way that’s more in tune with the relational model. Think of it as creating the Cartesian product first, and then filtering the result set down to matches.
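
In SQL terms (with hypothetical tables), the two formulations below describe the same result:

-- A join seen as a Cartesian product followed by a filter...
SELECT * FROM orders CROSS JOIN customers
WHERE orders.customer_id = customers.id;

-- ...is equivalent to the usual inner join:
SELECT * FROM orders
JOIN customers ON orders.customer_id = customers.id;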

The full 40-minute talk has more content than I have room here, so check out my slides or watch the video (forthcoming at the SyntaxCon blog) for the full gory details!

Service Workers

Netflix’s Jem Young shared his excitement about service workers, a new feature specification in the early stages of adoption in many major web browsers. They are persistent daemons that can access and modify HTTP requests transparently in-flight.

Service workers are very powerful and are part of an ongoing pattern of using the browser less as a way to display content and more as an application platform (a trend that’s been going for some time now). After seeing his demonstrations, I share much of his excitement, and like much of the audience, some concern regarding security and resource utilization. However, despite the potential risks that still need to be ironed out, there is a ton of promise.

In the past, I’ve written HTML5 offline apps, and the app cache is painful. It’s very tricky to get it set up just right and to integrate the manifest generator into your build system. It’s too much of a house of cards, and it doesn’t take much to knock it over. Anything you can do with an offline app, you can do with a service worker—plus more, and with a much simpler API.

Check out Jem’s blog and read more about service workers.

ECMAScript 6

Ben Ilegbodu from Eventbrite shared his expertise on the upcoming features in ECMAScript 6. It doesn’t sound like there’s anything world-changing here (unlike service workers, these language features are not going to fundamentally change how we architect our applications), but these are still some long-awaited conveniences.

Here are a few highlights:

  • Destructured assignment
  • Default function parameters
  • Template literals
  • Arrow functions

The alternate function syntax with lexical scoping for ‘this’ is something I’ve been pining after for years. With this new functionality, I don’t think CoffeeScript has nearly as much of an advantage. Check out everything Ben put together on his web site.
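
As a quick sketch of why that matters, an arrow function captures this from the enclosing scope, so the old var self = this dance goes away:

// With an ES5 function expression, `this` inside the callback would not
// be the Counter instance; an arrow function closes over `this` lexically.
function Counter() {
  this.count = 0;
  setInterval(() => { this.count += 1; }, 1000);
}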

Looking Forward

This conference wasn’t as big as some I’ve been to, but they had a great team and managed to put together a really sharp lineup. I’m looking forward to seeing what they do next year!

The post A Trip to SyntaxCon 2016 appeared first on Atomic Spin.

A Minimalist Guide to Customizing ActiveAdmin Forms

ActiveAdmin has saved a huge amount of time on our current project, and I highly recommend it for quickly giving non-technical people administrator access to your Rails app.

Some of the documentation is great, and there are lots of methods you can use to customize, but there are also some out-of-date red herrings, some things that require learning about Formtastic, etc. So here’s a minimalist guide to customizing ActiveAdmin forms, relying more on Ruby and logic than on the unique ins and outs of ActiveAdmin.

Tip One

Use the ActiveAdmin documentation that’s on GitHub. The top hit on Google is out of date!

Tip Two

Selects and check boxes can take any array of tuples. The first item in the tuple is what will be displayed, and the second item is the value: [["Display Name One", 1], ["Display Name Two", 2]]. In practice, you’ll probably grab some array from ActiveRecord and map over it like so:

input :toppings, :as => :check_boxes, :collection => Toppings.all.map { |t| [t.name, t.id] }

Tip Three

Create a module for ActiveAdmin helper functions. It’ll have access to ActiveRecord for your app and you can write code like you would anywhere else. I found it made my code more readable and saved me a lot of time and frustration because I could just write regular Rails code. For example:

# In AdminHelpers module:
def self.sorted_available_instructors
  booked_instructor_ids = Classes.pluck(:instructor_id)
  available_instructor_ids = Instructor.pluck(:id) - booked_instructor_ids
  available_instructors = available_instructor_ids.map do |instructor_id|
    instructor = Instructor.find(instructor_id)
    [instructor.name, instructor.id]
  end
  available_instructors.sort { |a, b| a[0] <=> b[0] }
end

In the form for editing a class, you can then create a specialized drop-down with:



input :instructor_id, as: :select, collection: AdminHelpers.sorted_available_instructors

Note that there are ActiveAdmin methods for sorting, filtering, etc., and sometimes you can get away with a one-line declaration. I just found that, often, writing my own method was faster and made my code more readable.

Tip Four

It’s handy to condition on f.object.new_record? to know whether the record is new or not. That way, if there’s just a small difference between creating a resource and editing a resource, you can use the same form code for both.

if f.object.new_record?
  input :instructor, as: :select, collection: AdminHelpers.sorted_available_instructors
else
  input :instructor, as: :select, collection: AdminHelpers.instructor_choices_for_class(f.object.instructor_id)
end

You can mix and match these tips to customize your ActiveAdmin forms using your Ruby skills. But if you want to get into the weeds, you can also poke around the ActiveAdmin documentation (on GitHub!) for just the method you need.

The post A Minimalist Guide to Customizing ActiveAdmin Forms appeared first on Atomic Spin.


Managing Multiple Releases in a Production Application

Projects are full of features. As an agile shop, we believe in getting those features in front of our client and end users as soon as they have been completed and thoroughly tested. It allows us to validate our assumptions and iterate on the feature if necessary. However, after an application is in production, things become trickier.

After a project goes into production and the user base begins to grow, the client’s deployment strategy will change because they need to market their new features. They may also decide that some features need to be withheld for a future release. This makes the traditional agile workflow of “delivering features as they are finished” difficult.

So, how do we continue to deliver value to our client and customers while sitting on completed features? On a recent project, we’ve tried two different strategies, and I want to share the strengths and weaknesses of both.

Strategy 1: Release Branches

The exact workflow we are using on this project closely follows gitflow. There are a couple of important gitflow concepts to be familiar with:

  1. All code on master is what is currently deployed to production
  2. All completed features that have not been released are on develop
  3. All features are developed on feature/feature-name

This workflow is great when you’re confident that everything on develop will be deployable in the next release. It also allows us to test our entire codebase often as tests run each time a commit is made.

But, what happens when development is ahead of the release schedule and new features need to be intentionally withheld?

This is where release branches come in. The idea is very similar to the gitflow model, except that now we have multiple develop branches. For example, our next release lives on develop and the following release would be on develop-1.x.

Here is what the version control system looks like under this model:

Release branch model

Pros

  1. Clear separation of features
  2. No technical debt of feature flags

Cons

  1. Rigid releases
  2. Difficult to maintain when supporting 3 or more releases
  3. Error prone (e.g., merging into the wrong branch or branching off the wrong release)

This model actually works pretty well when you are managing up to two releases. It forces you to think ahead about which features fit nicely together in releases, and when the number of releases is low, there is a much lower chance of branching from or merging into the wrong base branch.

But, it starts to fall apart when we are managing more releases or when the customer decides to sit on a feature that was previously approved. This is where the second strategy comes in.

Strategy 2: Feature Flags

Feature flags are not a new concept. You can google the idea and find countless blog posts on the topic. The basic idea is that each new feature gets hidden behind a flag that can be toggled to enable or disable the feature.

This means that we no longer need multiple develop-1.x branches. As long as each feature is hidden behind a flag, all of our code can continue to live in one place.
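
At its simplest, a flag is just a named boolean that the code consults before exposing a feature. Here is a minimal Ruby sketch (the names are hypothetical; a real system might store flags in a database so they can be toggled at runtime):

# A minimal feature flag store, driven by environment variables here.
FEATURE_FLAGS = {
  new_dashboard: ENV.fetch("ENABLE_NEW_DASHBOARD", "false") == "true"
}

def feature_enabled?(name)
  FEATURE_FLAGS.fetch(name, false)
end

# Somewhere in the application:
show_new_dashboard if feature_enabled?(:new_dashboard)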

Here is the branching model of this system:

Feature flag model

Pros

  1. Highly configurable releases
  2. All code lives in one place
  3. Fewer branches to maintain
  4. Adds possibility of A/B testing

Cons

  1. Technical debt of feature flags
  2. Need discipline to remove feature flags

There are quite a few benefits to this approach. As the application becomes more successful, the client can use feature flags to deploy new features to a subset of users. For developers, the code lives in one place, on develop. We can also have a more flexible release schedule and on top of that, we can easily turn a feature off if it doesn’t work as expected.

As you can see, both of these models have their benefits and drawbacks. We have recently gone from the release branch model to the feature flag model, and we have found that the wins far outweigh any of the complications.

I would love to hear how others have managed releases in live applications.

The post Managing Multiple Releases in a Production Application appeared first on Atomic Spin.

Avoiding “Undefined is Not a Function” with Constants

How many times have you come across JavaScript’s “Undefined is not a function”? Too many. JavaScript is known for being so flexible that it’s easy to create unintentional bugs.

One way we can add structure to JavaScript code is to make a habit of using constants. Constants pair well in JavaScript with JS’s powerful object data structure, and they can prevent all kinds of problems, such as:

  • “What key do I need to access to get the property I want?”
  • “Where can I go to see all the keys that can be accessed on an object?”
  • “Undefined is not a function” (sometimes)

Working without Constants

Defining your object’s keys as a constant helps to avoid accessing values that don’t exist, and getting errors as a result. For example, let’s say we have an application with translations. Translations are generated by a third party and provided for us in this form:

sell = {eng: "sell", esp: "vender"}

We also have a getLang() function that returns the language as “ENGLISH” or “SPANISH.”

With this, we might write the following code for translating our word:


if (getLang() === 'ENGLISH') {
  return sell.eng;
}
else if (getLang() === 'SPANISH') {
  return sell.esp;
}

For now, this is working. But what happens if getLang() is updated and starts returning “English” instead of “ENGLISH”? Now, every comparison against getLang() needs to be changed from the all-uppercase string.

How to Use Constants in JavaScript

Here’s how constants can work in our favor.

1. Use constants to store values that will be used repeatedly.


const Languages = {english: "ENGLISH", spanish: "SPANISH"};
if (getLang() === Languages.english) {
  return sell.eng;
}
else if (getLang() === Languages.spanish) {
  return sell.esp;
}

2. Use constants as keys that are used repeatedly.

We no longer have any string literals in our business logic, which is good. However, ‘eng’ and ‘esp’ aren’t the most descriptive key names. Since our translations come from a third party (and even if they didn’t), there’s no guarantee that ‘eng’ and ‘esp’ will be the relevant keys forever.

Consider: if any of the values on sell are functions and the keys change, we will get an “undefined is not a function” error.

Changing every access to .eng and .esp could be expensive, but it will be easy if we use a constant. As a bonus, we can use whole words to make the code more readable.


const Translated = {english: "eng", spanish: "esp"};
if (getLang() === Languages.english) {
  return sell[Translated.english];
}
else if (getLang() === Languages.spanish) {
  return sell[Translated.spanish];
}

With this approach, constants also add some documentation to the code about what keys an object has and doesn’t have, as well as how those keys are spelled and cased.

3. Use constants to factor out control statements.

In the example above, doesn’t “Translated.english” mean something similar to “Languages.english”? And wouldn’t it be nice to get rid of that if block altogether? We can use constants to make that happen. Consider the example below. (I’m using ECMAScript 2015 computed property syntax to define keys using other objects.)


const BrowserLanguages = {english: "ENGLISH", spanish: "SPANISH"};

const TranslationKeys = {
  [BrowserLanguages.english]: "eng",
  [BrowserLanguages.spanish]: "esp"
};

Once these constants are set up, we can easily access the language we want like this:


return sell[TranslationKeys[getLang()]]

And if either getLang() or our translator changes its format, the only thing that needs to be updated is our constants.

Advantages of Constants in JavaScript

Why constants?

  • Language names can be updated, maintained, and re-used independently.
  • Language names will always refer to a human readable key, even if the value isn’t.
  • Language names are strongly tied to our translator keys.
  • Bonus: sell[TranslationKeys[getLang()]] is a safe, optimized, one-line representation of our logic.

I won’t say an “undefined” error is impossible with this approach, but it’s much less likely to be an issue.

In my experience working in active codebases, properties and keys can change all the time. Using constants is one technique I use to make it easy to absorb these kinds of changes.

The post Avoiding “Undefined is Not a Function” with Constants appeared first on Atomic Spin.