Auto Layout Demystified

I was at a local user group recently where one of the hot new cross-platform mobile development options out there (it doesn’t really matter which) came up as the topic. There was the usual UIKit bashing, which was expected, since the point of the talk was to propose an alternative programming model.

I’m not here to defend UIKit, because I agree that it could use some improvement. However, the speaker did say one thing about Auto Layout that I have heard before and that rubbed me the wrong way.

I’ve heard people say things like, “You need a mathematics PhD to understand Auto Layout,” or, “You need to be a rocket scientist to understand Auto Layout.” I’m not going to get into a debate on how hard rocket science actually is—you can check Reddit for the answer. Instead, in this blog post, I will show you how easy the calculation for Auto Layout is. In fact, the linear equation is so simple that an elementary school student could do the math.

So why do people think Auto Layout is hard? Well, I don’t think Xcode’s Interface Builder helps the situation. Xcode has improved since Auto Layout was introduced, but it can still be hard to see how the Auto Layout equation relates to what you see in Interface Builder. Let’s see if I can demystify Auto Layout a little.

The Equation

Let’s say you have two buttons and you want to know how far apart to place them. How could we express the relationship between the right edge of the Cancel button and the left edge of the Accept button? If I were to describe this to another person, I would probably say something like, “The Accept button is eight points to the right of the Cancel button.”

auto_layout_demystified_two_buttons

A more precise statement would be, “The left edge of the Accept button is eight points greater than the right edge of the Cancel button.” The equation for this would be:

Accept.left = Cancel.right + 8   where 8 is a constant

What if I wanted to express the width of the Accept button in relation to the width of the Cancel button? Let’s say it’s twice as wide. Then the equation would be the following:

Accept.width = Cancel.width * 2 where 2 is a multiplier

The Auto Layout equation combines these two into one to express any kind of relational constraint.

Item1.attribute = Item2.attribute * multiplier + constant

To express the position of the Accept button in relation to the Cancel button, we don’t need the multiplier, so its value is one. The equation would be the following:

Accept.left = Cancel.right * 1 + 8

To express the width of the Accept button, we don’t need the constant, so that becomes zero.

Accept.width = Cancel.width * 2 + 0

That’s it! The mystical equation that you need a math PhD to figure out ¯\_(ツ)_/¯. As with any equation, you can reverse it. You can express the distance between these two buttons with Item 1 and Item 2 reversed as follows:

Cancel.right = Accept.left * 1 + (-8)

I think this is where some of the confusion with Auto Layout starts, because Xcode’s Interface Builder doesn’t always show the equation in the order you are expecting. It is not natural to think of the distance between the two buttons as a negative number.
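
Incidentally, if you ever create constraints in code rather than Interface Builder, the equation shows up almost verbatim in UIKit’s NSLayoutConstraint initializer. Here’s a minimal sketch for our two buttons (the button variables are just for illustration):

import UIKit

let cancelButton = UIButton(type: .system)
let acceptButton = UIButton(type: .system)
cancelButton.translatesAutoresizingMaskIntoConstraints = false
acceptButton.translatesAutoresizingMaskIntoConstraints = false

// Accept.left = Cancel.right * 1 + 8
let spacing = NSLayoutConstraint(item: acceptButton, attribute: .left,
                                 relatedBy: .equal,
                                 toItem: cancelButton, attribute: .right,
                                 multiplier: 1, constant: 8)

// Accept.width = Cancel.width * 2 + 0
let width = NSLayoutConstraint(item: acceptButton, attribute: .width,
                               relatedBy: .equal,
                               toItem: cancelButton, attribute: .width,
                               multiplier: 2, constant: 0)

// Both buttons must be in the same view hierarchy before activating.
NSLayoutConstraint.activate([spacing, width])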

If you want to know more about the equation, Apple has a great page explaining the Anatomy of a Constraint.

Next, I’ll show you how Interface Builder uses this equation to build a constraint.

Interface Builder

Let’s create the same horizontal constraint between our two buttons. There are several different ways to create a constraint in Interface Builder, and I’m not going to go through them all. My favorite is to hold down the Control key and drag a constraint from the first item to the second.

auto_layout_demystified_horizontal_constraint

After I drag UI elements into my view controller, I give them meaningful names in the document tree. This is very helpful for understanding which control is which when looking at the attributes of a constraint. Xcode will use the name of the control in the attribute inspector, so it is easier to identify your controls.

auto_layout_demystified_naming

Now let’s look at the attributes of the horizontal constraint and try to find the elements of the equation in interface builder.

auto_layout_demystified_attribute2

When you add constraints in Xcode, the order of the equation may not be what you are expecting, so the numbers will be reversed. Here, you can see that my constant is a negative number, and the first and second items are reversed. I believe this trips people up in understanding what is going on.

auto_layoutdemystified_reversed

You can fix this situation by reversing the first and second items. I often do this so that it is clear in my head (even though, from a mathematical standpoint, it is the same). After you reverse them, the negative constant becomes positive again, and the items appear in the order you expect.

auto_layout_demystified_reverse

Hopefully, this is helpful to someone new to Auto Layout. We don’t need to hire mathematicians to lay out our controls. The math is pretty easy, although Xcode doesn’t help the situation sometimes.


Tips for Improving Web Typography

Typography is one of the most important aspects of designing a website. Good typography can improve reading comprehension and usability, while poor typography can make even the best site difficult to use.

Fortunately, there is a lot of low-hanging fruit when it comes to this area. By following just a few key rules, you can greatly improve even the worst web typography.

These simple improvements can have a big impact:

1) Choose an Optimal Column Width

The width of text columns has a large effect on the readability of the text. Long lines of text can make it difficult for readers to keep their place when moving from line to line, while overly narrow columns can slow reading speed and create layout issues.

While there is no strict rule for what constitutes an optimal column width, a line length of between 50 and 75 characters (including spaces) is a happy medium that allows readers to maintain their place without breaking rhythm too often when traveling from line to line.

Takeaway: Set text column widths to between 50 and 75 characters per line to improve readability.

Long lines of text can hurt readability.

Limiting the text column to around 75 characters helps readability.
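
On the web, one easy way to approximate this range is the CSS ch unit, which is roughly the width of one character in the current font. A minimal sketch (the class name is a placeholder):

.article-body {
    max-width: 65ch;  /* roughly 65 characters per line */
    margin: 0 auto;   /* center the column in wider viewports */
}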

2) Establish a Typographic Hierarchy Using Font Size and Weight

Typographic hierarchy is the presentation of text in a way that indicates to the reader its importance by its size, weight and placement relative to the text around it.

Readers on the web often scan content quickly. A clear visual hierarchy helps them determine how important each piece of content is before reading a single word, so they can find the information they want more efficiently.

Font size and weight are two of the most important visual cues in establishing a typographic hierarchy. In general, the larger and bolder a piece of text is, the more important it is. Start with the largest, boldest type for your headlines, and work your way down. Headlines should be larger and heavier than subheadlines, which should be larger and heavier than body text, which should be larger and heavier than captions and footnotes.

Takeaway: Use font size and weight to indicate the importance of the content represented. Start with large, bold headlines, and work your way down to small, normal weight text.

Type size and weight help readers determine the importance of a piece of text.
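
In CSS, that stepping-down can be as simple as a size and weight scale. The values below are illustrative, not a prescription:

h1         { font-size: 2.25rem;  font-weight: 700; }  /* headline */
h2         { font-size: 1.5rem;   font-weight: 600; }  /* subheadline */
body       { font-size: 1rem;     font-weight: 400; }  /* body text */
figcaption { font-size: 0.875rem; font-weight: 400; }  /* captions, footnotes */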

3) Use Colored Text to Indicate Interactivity

Users should be able to spot interactive elements on a web page at a glance. Using color in your text can be an effective way to signal this interactivity to the reader.

In my designs, I represent all static text in either black or shades of grey. I reserve use of colored text for interactive elements such as links and buttons. By consistently sticking to this pattern, you can help users quickly and intuitively determine which elements are interactive and which are not.

Takeaway: Use black and grey scale for static text. Reserve color for links and buttons.
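
A sketch of that convention in CSS (the specific colors are examples, not recommendations):

body       { color: #222222; }  /* static text: near-black */
figcaption { color: #666666; }  /* secondary text: grey */
a, button  { color: #0645ad; }  /* reserve color for interactive elements */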

4) Avoid Floating Text

Floating text is a headline or caption that sits evenly between two pieces of content, making it difficult to determine to what content it applies. Readers interpret visually close content as being related. When these visual groupings are confusing or unclear, usability can suffer.

Use white space to clearly differentiate which pieces of content go together. Headlines should be closer to the content below than the content above. Captions should be closer to the images they label than the content that follows.

Takeaway: Content should be visually grouped with related content. Use white space to ensure that these visual groupings are obvious.

The caption above sits midway between the photo above it and the photo below it, making it unclear which image it describes.
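
In CSS, asymmetric margins express these groupings. For example (values illustrative):

/* Headlines: more space above than below, so they attach to what follows */
h2 { margin-top: 2em; margin-bottom: 0.5em; }

/* Captions: hug the image above, push away the content below */
figcaption { margin-top: 0.25em; margin-bottom: 2em; }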

Conclusion

There are countless more tips out there for improving web typography, but following these simple rules should get you moving down the right road. What tips have you found effective?


Designing a Scalable Deployment Pipeline

Anyone who’s led a product engineering team knows that a growing team requires investments in process, communication approaches, and documentation. These investments help new people get up to speed, become productive quickly, stay informed about what the rest of the team is doing, and codify tribal knowledge so it doesn’t leave with people.

One thing that receives less investment when a team scales is its deployment pipeline–the tools and infrastructure for deploying, testing, and running in production. Why are these investments lacking even when the team can identify the pain points? My theory is that it nearly always feels too expensive in terms of both money and lost progress on building features.

Following that theory, I now consider designing an effective and scalable deployment pipeline to be the first priority of a product engineering team—even higher than choosing a language or tech stack. The same staging/production design that was my standard just a few years ago now seems unacceptable.

What is a Deployment Pipeline?

Before we dive into what our deployment pipelines used to look like, let’s start by defining a few terms.

A deployment pipeline includes the automation, deploy environments, and process that supports getting code from a developer’s laptop into the hands of an end user.

A deploy environment is a named version of the application. It can be uniquely addressed or installed by a non-developer team member, and a developer can deploy an arbitrary version of the underlying codebase to it. Often, distinct deploy environments will also have unique sets of backing data.

A deployment process is the set of rules the team agrees upon regarding hand-off, build promotion between environments, source control management, and new functionality verification.

Automation is the approach to making mundane parts of the deployment process executable by computers as a result of a detectable event (e.g., a source control commit) or a manual push-button trigger.

Our Old Approach: A Hot Mess

In the recent past, our go-to template for a web app deployment pipeline utilized two deploy environments: staging and production.

Process

The process for utilizing these environments looked something like this:

  1. Developer works on a feature locally until it’s ready to be integrated and accepted.
  2. Developer integrates it with the version of the app on staging and deploys it to the staging environment.
  3. Delivery lead verifies that the feature is acceptable by functionally testing it in the staging environment.
  4. Delivery lead gives developer feedback for improvement or approves it as done.
  5. At some point, the developer deploys the features on staging to production.

Automation

We’d also, minimally, automate deployment of an arbitrary version of the app from a developer’s laptop to either environment.

Result

This deployment pipeline is straightforward and easy to implement–but it’s not easy to scale if, for example, you need to grow your dev team, or if you support a heavily used production deployment while simultaneously developing new product functionality.

The most common sign that a prod/staging pipeline is breaking down due to scaling demands is integration pain felt by the delivery lead in Step 3 of the process above. Multiple developers pile their feature updates and bug fixes onto the staging environment. Staging starts to feel like a traffic accident on top of a log jam.  It’s a mix of verified and unverified bug fixes and accepted/brand new feature enhancements. This results in regressions for which the root cause cannot be easily found. Since it’s all on staging, a delivery lead doesn’t know which change is a likely culprit, and they’re probably not sure which developer should investigate it.

It’s a hot mess.

In this scenario, the staging environment rarely provides a sense of confidence for the upcoming production deployment. Rather, it foretells the disaster your team is likely to encounter once you go live.

We Can Do Better

If we look at this problem through the lens of the theory of constraints, it’s obvious that the staging deploy environment is the pipeline’s constraint/bottleneck.

We don’t want to drop staging because it provides a valuable opportunity to validate app changes just outside of the live environment. Instead, we want to optimize for staging to provide the most value possible–that being:

Provide a deploy environment identical to production except for one or two changes which can be verified one last time right before deploying them to production.

This definition of value implies that the staging environment spends a lot of time looking just like production, which is good. A clean staging environment is an open highway for the next feature or bug fix to be quickly deployed to production with confidence.

Deployed Dev Environments

To minimize the time a new feature spends on staging, we introduced new deploy environments, which we call dev environments. These aren’t the same as local dev environments. A deploy environment needs to be uniquely addressable by the delivery lead–it can’t just be running on your laptop. The number of dev environments is fluid, scaling with the number of developers and the number of in-progress features and updates.

Process

If you think of staging as a clone of production, then think of a dev environment as a clone of staging. The new process looks like this:

  1. Developer works on a feature locally until it’s ready to be integrated and accepted.
  2. Developer spins up a dev environment (cloned from staging) and deploys a change to it.
  3. Delivery lead verifies the feature is acceptable by functionally testing it in the dev environment.
  4. Delivery lead gives developer feedback for improvement or approves it as done.
  5. Developer deploys change to staging and shuts down dev environment.
  6. Delivery lead spot checks change in staging and deploys it to production.

The main difference in our process is moving the iteration on feature acceptance feedback upstream, from the staging environment to the dev environments. This allows staging to be a clean clone of production most of the time and lets us validate multiple updates in parallel, isolated environments. Because features are validated in isolation, we can more easily identify the root cause of a defect or regression resulting from a recent change.

The idea of on-demand deploy environments may be uncommon, but it’s not new. Atlassian called them rush boxes. GitHub called them staff servers and let developers spin them up with Hubot commands.

Automation

In addition to automating deployment, we’ll need to automate the creation of a new dev environment to support this pipeline. Ideally, it should be a clone of staging and uniquely addressable (e.g. dev1.app.com, dev2.app.com, etc.).

Say you’re managing your deploy environments in a cloud service like AWS. Automating this process is doable with, at most, a few weeks of investment. As a stopgap, your team could also spin up a set of dev servers (one per developer) and suspend their respective computing resources (e.g., EC2 instances) when they’re not in use.

In 2014, we started implementing this pipeline design on top of Heroku. This made cloning environments really easy via the built-in ability to fork a copy of an app.
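
At the time, that looked something like the following sketch (app names are hypothetical, and the exact fork syntax varied across toolbelt versions):

# clone staging (config vars, add-ons, and data) into a fresh dev environment
heroku fork --from myapp-staging --to myapp-dev1

# deploy the feature branch to it
git push https://git.heroku.com/myapp-dev1.git my-feature:master

# tear the environment down once the change has moved on to staging
heroku apps:destroy myapp-dev1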

The Golden Triforce of Deployment Tools

Today, if you use GitHub and Heroku, you can get everything I described above right out of the box with Heroku Pipelines and Heroku Review Apps. Because of this, GitHub + Heroku is a killer stack for teams focused on building their product over their infrastructure.

I’d also throw in CircleCI for continuous integration. It’s a nearly zero-conf CI service that can automatically split up your slow test suite and execute it in parallel. All of these tools do a great job of guiding a team to build a portable app, which makes it easy to move to another platform, like AWS, later.

Deploying with Confidence

In summary: Use GitHub + Heroku + CircleCI unless you have a really good reason not to. Keep staging clean with on-demand dev environments. Deploy with confidence.


Open Source Basics: NPM Edition

As software developers, we’ve long used third-party code in our day-to-day work, but these days, it’s much easier to find and integrate it with package managers and searchable repositories.

Inevitably, there comes a time when our unique use of a library exposes a new bug, or we find that we could almost use that sweet tool if only it did this one tiny thing differently. When that happens, we find ourselves popping open the hood and making changes to a third-party dependency.

The same modern cushy systems also make it easier to maintain these changes, collaborate, and contribute our changes upstream. This is what I’m going to talk about today.


I’ll use NodeJS’s npm in this example, but the process is similar for other languages’ packaging systems like RubyGems or PyPI.

Fork It

So we’ve decided to make a change to a library. Say we’re using the npm package foo, referenced in our application’s package.json file like this:


"devDependencies": {
    "foo": "1.2.3",
    ...

The first step, of course, is to clone the repository. Make sure to check out the same revision that your application is currently using. (It’s probably a recent release, not trunk.)

With npm, we can reference our local copy like this:


"foo": "file:/Users/johnruble/repos/foo",

This is a memorable but somewhat blunt approach, with a couple of caveats:

  • file:/ sources do not know about Git. They’re just looking at what’s on disk, so don’t try to reference a specific branch or revision.
  • This path is simply a source we can install from. To pick up changes, we’ll need to re-npm install and rebuild our app. If you find yourself doing this repeatedly, look into npm link.
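
For reference, npm link replaces that reinstall loop with a symlink. A quick sketch (paths hypothetical):

# in the library's working copy: register a global symlink
cd ~/repos/foo
npm link

# in the application: point node_modules/foo at that working copy
cd ~/repos/my-app
npm link foo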

Now that we can build our app using our own custom version of the third-party component, we’re ready to dive in.

Eventually, our experiment is successful. We’ve made changes, and we want to use them in development (and eventually production) builds of our app. After pushing our branch to another remote where it can live for a while, we can reference our repository in our app’s package.json, so that it can be reached by other developers, CI, and deployment:


"foo": "jrr/foo#branch-with-my-changes", //(github shorthand), or
"foo": "git://private.repo.com/jrr/foo.git#branch-with-my-changes",

Keeping a separate fork allows us to keep moving forward for now, but eventually, we’ll probably want to…

Unfork It

The big downside to keeping a fork like this long-term is that it puts friction between us and future updates from upstream. We’re going to want those bug fixes and new features, but it’s a tedious chore to switch over to the other repo, reintegrate our changes, update the reference from our application, and so on.

On the flip side, there are several advantages to making our fork obsolete by contributing the work upstream:

  1. Code review: The changes we made are in somebody else’s code, unfamiliar to us. If we submit our changes upstream, we get review from the experts.
  2. That small change we made is a tiny piece of custom software, and it has a disproportionately large maintenance cost. What if somebody else could maintain it for free?
  3. That cool thing we built? We get to share it with the world!

So, we’ve decided to submit our changes upstream. How do we do it?

Get prepared

  • We’ve been working from a tagged release of the library, but changes are typically made on a develop or master branch. Merge the latest code from upstream into your branch (or better yet, rebase onto it; a sketch follows this list).
  • Run the library’s tests to make sure it’s still behaving correctly.
  • Use this updated version of the library in your app, and run your app’s tests to make sure the library is still behaving the way you want.
  • Clean up your branch (squash commits, remove commented code, etc.). It may be easier to just check out a new branch from master and apply all your changes to it in one commit.
  • Write tests! This is critical since we’re working with code that 1) we depend on, and 2) is not under our control. In particular, write tests to specify and document the behavior we need, and to defend our changes against accidental regression in the future.
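
The git side of that first step might look like this (remote and branch names assumed):

# track the canonical repository
git remote add upstream https://github.com/upstream-org/foo.git
git fetch upstream

# replay our changes on top of the latest upstream code
git rebase upstream/master branch-with-my-changes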

Create the pull request

Now we’re ready to create a pull request. Make it as nice for the maintainers as you can. Spend a few minutes looking to see if the project has any guidance for contributors. Fill out their template, file an issue, etc. Write up your changes for the project’s changelog. Ask for feedback on your implementation.

Wrap it up

With luck, after some iteration, our changes will be accepted. After the pull request has been merged, we can switch our app’s package reference back to the upstream repository at a specific commit:


"foo": "github:user/foo.git#3f25967e",

Finally, when our changes are released with the library’s next version, we can switch back to vanilla upstream:


"devDependencies": {
     "foo": "1.2.4",

It feels good to remove the lingering risk that the fork represented for our project, and also to know that other developers are using our code!


A Fresh Perspective on Front-End at Midwest JS

Last month, I attended Midwest JS. In the interest of filtering out the conference excitement over new stuff/shiny objects/microservices, I waited a month to write my reaction.

Takeaway 1: Functional Programming Abounds

I loved the emphasis on—and excitement around—bringing functional programming concepts into front-end development. I find that most of my frustration with Angular 1.x, for instance, revolves around mutating data and managing state. It was nice to see that not everyone is thrilled by two-way binding.

The best decision I made at Midwest JS was attending a talk on Elm put on by Jamison Dance. I was reluctant to attend, because it sounded like a pet language with no practical use. It turned out to be my favorite talk of the conference: a very well-informed walk-through that related the concepts in Elm to front-end development in general. It was also a nice reminder that you can use functional concepts regardless of your front-end framework, even if they’re not enforced in any way.

Honestly, if there were one thing that I took away from this conference, it is that functional programming is the future and that the future is now.

Takeaway 2: Angular 2? ¯\_(ツ)_/¯

I spend a lot of time writing Angular code. The workshop for Angular 2 was scheduled for the conference’s largest room, which turned out to be a bit… roomy.

By contrast, the React workshop filled up before we could get in. We weren’t even late (despite a hotel SNAFU that landed us 40 minutes out of town). I sat in on the Angular 2 workshop–whose testing section became obsolete literally the night before because of a new release candidate–to see what it had to offer. I have to say, I didn’t come away thinking that:

1) There’s anything resembling a logical path for upgrading from Angular 1.x to 2.

2) Given the cost of choosing Angular 2 for a project, it has any advantages over what everyone is already excited about: React.

Now that Angular 2 is out, I’d still have serious hesitations about picking it for a greenfield project.

Takeaway 3: Pleasant Reaction to React

Everywhere I saw React code, I liked it. As one attendee noticed in the conference Slack, it seemed like the basics of a React project were covered in every talk. This was not unwelcome for me, since I came in not knowing much about React. It definitely has a lot of boilerplate code. However, a nice difference between React and, say, Angular 1.x, is that React doesn’t require as much convoluted wiring of its components.

Also, a key question that I’d never considered kept coming up: Why are we sprucing up HTML with JavaScript AND treating the two as if they are part of different layers of our app? I find this especially irksome in Angular, since I abide by the rule of moving as much logic to services as possible.

The distinction between controller and template is fuzzy and weird anyway. But with all the heavy lifting happening on the server or in a service, it seems especially odd to pretend that the controller is, well, a controller, and not a view. To me, this fuzzy separation of concerns is the major failure of Angular 1.x. I’ve gotten accustomed to writing nice templates in Angular, but realistically, even the best ones are terrible to read and edit, and their diffs are even worse.

Meanwhile, over in Elm world, HTML elements are generated by functions which take two arguments: a list of attributes and a list of children.

https://github.com/evancz/elm-todomvc/blob/master/Todo.elm#L211
viewInput task =
  header
    [ class "header" ]
    [ h1 [] [ text "todos" ]
    , input
        [ class "new-todo"
        , placeholder "What needs to be done?"
        , autofocus True
        , value task
        , name "newTodo"
        , onInput UpdateField
        , onEnter Add
        ]
        []
    ]

A Fresh Perspective

So, what did I get from Midwest JS? I got a fresh perspective on front-end development. It hasn’t changed my daily work habits (although there has been a nudge toward fundamental functional concepts), but I feel like I’ve got a couple of good directions to go the next time I have to pick a technology for a new or existing project.


Easy Secure Web Serving with OpenBSD’s acme-client and Let’s Encrypt

As recently as just a few years ago, I hosted my personal website, VPN, and personal email on a computer running OpenBSD in my basement. I respected OpenBSD for providing a well-engineered, no-nonsense, and secure operating system. But when I finally packed up that basement computer, I moved my website to an inexpensive cloud server running Linux instead.

Linux was serviceable, but I really missed having an OpenBSD server. Then I received an email last week announcing that the StartSSL certificate I had been using was about to expire and realized I was facing a tedious manual certificate replacement process. I decided that I would finally move back to OpenBSD, running in the cloud on Vultr, and try the recently-imported acme-client (formerly “letskencrypt”) to get my HTTPS certificate from the free, automated certificate authority Let’s Encrypt.

Why You Should Get Your Certificates from ACME

Let’s Encrypt uses the Automated Certificate Management Environment protocol, more commonly known as ACME, to automatically issue the certificates that servers need to identify themselves to browsers. Prior to ACME, obtaining certificates was a tedious process, and it was no surprise when even high-profile sites’ certificates would expire. You can run an ACME client periodically to automatically renew certificates well in advance of their expiration, eliminating the need for the manual human intervention that can lead to downtime.

There are plenty of options for using ACME on your server, including the Let’s Encrypt-recommended Certbot. I found acme-client particularly attractive not just because it will ship with the next release of OpenBSD, but also because it’s well-designed, making good use of the privilege separation technique that OpenBSD pioneered as well as depending only on OpenBSD’s much-improved LibreSSL fork of OpenSSL.

Bootstrapping

To follow along with me, you’ll need OpenBSD. You can use the 6.0 release and install acme-client. If you’re feeling adventurous and are willing to maintain a bleeding-edge system, you can also run the -current branch, which already has acme-client.

If you do the smart thing and choose to use the release version, you’ll need to do a little extra setup after installing acme-client to align with the places things are in -current:

# mkdir -p /etc/acme /etc/ssl/acme/private /var/www/acme
# chmod 700 /etc/acme /etc/ssl/acme/private

And whenever you use acme-client, you’ll need to specify these paths, e.g.:

# acme-client \
        -C /var/www/acme \
        -c /etc/ssl/acme \
        -k /etc/ssl/acme/private/privkey.pem \
        -f /etc/acme/privkey.pem \
        www.example.com

Everything will work as advertised otherwise.

A note before we get started: If you’re new to OpenBSD, you owe it to yourself to get familiar with man(1). OpenBSD has amazingly good documentation for just about everything, and you can access it all by typing e.g. man httpd or man acme-client. Everything in this article came from my reads of these manpages. If you get stuck, try man first!

ACME will use a web server as part of its challenge-response process with the Let’s Encrypt service. To get this started, we’ll build out a basic /etc/httpd.conf based on our readings of httpd.conf(5) and acme-client(1):

server "default" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                root strip 2
        }
}

This is enough to start up a basic web server that will serve the challenge responses that acme-client will produce. Now, start httpd using rcctl(8):

# rcctl enable httpd
# rcctl start httpd

Getting Your First Certificate

Once httpd is up and running, you’re ready to ask acme-client to perform all that heavy lifting that you used to have to do by hand, including:

  1. Generating your web server’s private and public keys
  2. Giving your public key to the certificate authority
  3. Proving to the certificate authority that you’re authorized to have a certificate for the domains you’re requesting
  4. Retrieving the signed certificate

You can do all of this with a single command:

# acme-client -vNn example.com www.example.com

man acme-client will explain all that’s going on here:

  1. -v says we want verbose output, because we’re curious.
  2. -N asks acme-client to create the private key for our web server, if one does not already exist.
  3. -n asks acme-client to create the private key for our Let’s Encrypt account, if one does not already exist.
  4. example.com and www.example.com are the domains where we want our certificate to be valid—note that our web server must be reachable via those names for this process to work!

If this worked correctly, there will be some new keys and certificates on your system ready to be used to serve HTTPS.

Using the New Certificates with httpd

To get httpd working with our new certificates, we just need to expand /etc/httpd.conf a little:

server "default" {
        listen on * port 80
        listen on * tls port 443
        tls certificate "/etc/ssl/acme/fullchain.pem"
        tls key "/etc/ssl/acme/private/privkey.pem"
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                root strip 2
        }
}

The three new lines above add a new HTTPS listener to our configuration, telling httpd where to find the certificate it should present and the private key it should use.

Once this configuration is in place, ask httpd to reload its configuration file:

# rcctl reload httpd

At this point, your server should be online with a valid Let’s Encrypt certificate, serving HTTPS—though giving you an error page, because httpd is not yet configured to serve any content. That bit is left as an exercise for the reader. (Consult httpd.conf(5) for further help there.)

Automating Yourself Out of a Certificate Renewal Job

By far the best part about ACME is that it can easily be configured to renew your certificates automatically, before you notice they’re about to expire. acme-client is written so that you simply run it periodically; once a certificate is within 30 days of expiration, it will get a fresh signature from Let’s Encrypt.

Making this happen is as simple as dropping the following into /etc/daily.local (cf. daily(8)):

# renew Let's Encrypt certificate if necessary
acme-client example.com www.example.com
if [ $? -eq 0 ]
then
        rcctl reload httpd
fi

And now acme-client will run every night (by default at 1:30 a.m.) and renew your certificate when necessary.

Further Reading

This is a simple configuration, but it’s enough to run my web site and give me painless HTTPS that scores an A out-of-the-box on SSL Labs’ server test. I added a few lines to /etc/httpd.conf to serve the static content on my site, and I was done.

If you have a more complex configuration, though, chances are that httpd and acme-client are up to the task. To find out all they can do, read the man pages: httpd.conf(5) and acme-client(1).

If you want to know more about OpenBSD in general, check out the comprehensive OpenBSD FAQ.

Happy secure serving!


ReSpeaker – First Impressions + Simple Offline Voice Recognition

I had the opportunity to get a free ReSpeaker core during their Kickstarter in exchange for an honest review—an offer I couldn’t pass up.

You can think of a ReSpeaker as something like an Amazon Echo, but it’s open-source and you can re-configure it to do whatever you want. You can hook it up to the online cloud voice service of your choice and have it handle complex questions like, “What’s the weather like in Grand Rapids?” or “What’s the average air velocity of a swallow?”

ReSpeaker Core dev board

One of the features I was particularly interested in is that you can configure it to detect keywords and simple commands (“play music,” “launch the missiles”) offline with no internet connection.

Here is an example of how to do that:
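
A minimal sketch using the ReSpeaker Python library, which wraps PocketSphinx for offline recognition (the wake word and exact API here are assumptions worth checking against the library’s current docs):

from respeaker import Microphone

mic = Microphone()

while True:
    # block until the offline keyword spotter hears the wake word
    if mic.wakeup('respeaker'):
        print('wake word detected')
        data = mic.listen()         # record the follow-up command
        text = mic.recognize(data)  # offline recognition via PocketSphinx
        if text:
            print('heard: ' + text)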


Commodore 128 Keyboard Repair

My Commodore 128 (a "flat" C128CR) has suffered from a few temperamental keys on the keyboard; the symptoms are that keys are either slow to return after being pressed, or don't return at all and stay pressed, resulting in undesired input. In other words, they're "sticky".

This can be caused either by spilling something, usually sugary, into the keyboard, which leaves a sticky residue, or by a failure of the plunger for each affected key. As I'm fairly sure I've never spilt anything, I assumed it was the latter.

To get at the keyboard, the machine needs to be opened up first, which is simply a case of unscrewing the six screws on the underside: three along the front edge (the middle one usually underneath a warranty disclaimer sticker), two at the rear corners, and one in the centre. The top half of the case will pop off with a bit of persuasion, giving you enough clearance to reach in and disconnect the power LED. The top of the case can then be hinged open along the right-hand side of the machine, letting you gently disconnect the keyboard cable and unscrew the grounding cable from the RF shielding of the main board. This should leave you with the following:

img_0847

You might not necessarily need to remove the keyboard from the case, but the size of the case makes it more unwieldy to work on. To remove the keyboard, just unscrew the six visible screws, noting the four black plastic spacers and their orientation on the screws along the top edge:

img_0846

Once the keyboard is loosened, the power LED will likely drop out, along with the small plastic part that holds it in place. Keep all of these bits safe, ready for reassembly. You should now be left with just the keyboard itself:

img_0840

Next comes the hardest step, which requires the use of a soldering iron: desoldering the wires from the three stateful keys, "Shift Lock", "Caps Lock" and "40/80 Display". It's important not to let the soldering iron heat the keys up too much, to prevent damaging them. Thankfully, the wires are not twisted together, so with some tweezers and the quick application of the soldering iron, the wires should separate easily:

img_0841

There are now 27 small screws holding the circuit board in place to undo, and then the circuit board should just lift away. Don't lose the tiny spring that sits above the '+' key on the numeric keypad:

img_0842

The circuit board can be gently cleaned with something like isopropyl alcohol if it's dirty. To get at the plunger for each key, just pull off the keycap on the top side of the keyboard, put it and the spring underneath it to one side, and the plunger should drop out:

img_0843

The keycap snaps into the top of the plunger, which rides up and down through a hole in the keyboard chassis, with the spring making the plunger return and stay up. Over time, the plunger can develop a split, which means that when the keycap is fitted, the diameter of the plunger grows enough that it no longer moves smoothly in the hole; it becomes an interference fit, causing the key to stick. Here's one of the damaged plungers from mine, with my nail showing where the split is:

img_0844

It's simply a case of replacing each damaged plunger (I ended up replacing five, along with two springs) and then reassembling the keyboard. Leave the re-soldering of the stateful keys until last, after you've checked that all the other keys travel properly. You should now have a reassembled keyboard:

img_0839

Fitting the keyboard back into the case is fairly straightforward. The trickiest bit is keeping the power LED stable while you drop the keyboard into place, as it's just sandwiched in. Here's how it should be fitted:

img_0845

Reconnect the keyboard cable, making sure not to bend any pins, reattach the grounding cable to the RF shielding, and finally reconnect the power LED (apparently it can be reconnected in either orientation) as you close the two halves of the case together. Then it's just a case of powering on the machine and testing the keyboard. Make sure that you also test that the stateful keys work correctly after you've de- and re-soldered them.

Bring Joy to Your Desktop Backgrounds with Workflow Automation

Tired of the same desktop background? Bored with stock art and don’t care enough to search for anything better? With the help of Workflow, you can bring a little fun and delight to your backgrounds.

Examples of NASA astronomy pictures of the day. Image credit: Various authors (individual images) and me (collage).

Workflow is a powerful iOS automation tool. We’re going to use it to:

  1. Load NASA’s astronomy picture of the day.
  2. Find the page’s image content and show it to the user.
  3. Prompt the user to keep or toss the suggested image. If tossing, we’re done for today.
  4. If saving, save the image to a photo album named “NASA image of the day.”
  5. Run the Workflow roughly every day. It’s fun!

I’ve attached the Workflow export file here.

It’s easy to configure OS X to automatically rotate the desktop background through images in the “NASA image of the day” album.† Sweet! In my case, I have it set to rotate the picture every 30 minutes. I don’t often look directly at my desktop, so I see something fresh just about every time it’s shown. It brings me joy every time I see it, and I hope it brings you joy as well.

Last but not least–credit where credit is due. I based this Workflow on the “Image of the Day” example (author unknown) included in Workflow’s gallery.

†I’ve noticed that, even though the photo album is updated with each new image, the list of desktop backgrounds doesn’t get updated until I manually turn the rotation on and off. Annoying. Perhaps this can be dealt with using OS X automation?


Onboarding at Atomic: A New Atom’s Perspective

If you’re a frequent reader of Spin, you may have read Jesse’s post about our onboarding guidelines here at Atomic Object. At the end of that post, Jesse discusses the importance of gathering feedback from new Atoms. I’m here to provide that feedback.

Onboarding at Atomic Object

So far, I have been extremely pleased with my onboarding experience at Atomic. I’ve heard horror stories from others regarding the start of a new job. I’ve had my own, too—like having to assemble my own cubicle on my first day, or being locked out in the middle of winter because I was never given a functioning key. We don’t believe in cubicles at Atomic, and from the beginning, I was far from locked out. In fact, I knew what my first day would look like before it even started.

Prior to the First Day

Before my start date, I was emailed a detailed itinerary of what that first day would look like. I was told when to arrive, where to park, and where to go upon entering the building. This information greatly relieved any sort of first-day-of-school anxiety I had. Should I bring a lunch? Will I be working on a client project right away? Will it be a full day? These questions were all answered, and more.

The First Day

Upon arriving on my first day, I was welcomed and introduced to my team. There were flowers on my desk, a welcome card, a vegan snack (they even took note of my dietary restrictions!), and a packet of handy onboarding information. Remember the emailed itinerary? It was printed and placed on top of said onboarding packet. If there’s one thing you take from this post, I hope it’s how organized we are here at Atomic.

After the First Day

To some, the length of onboarding may only be one day, or maybe one week. At Atomic, we look at onboarding as a much longer process. Aside from the typical new job paperwork and training, our onboarding also involves acclimating new Atoms to our culture and preparing professional development opportunities. So, what has this looked like for me?

  • Culture Pair: Being assigned a culture pair meant I had someone to ask all the questions I could come up with during our weekly sync-ups. This scheduled time was super helpful, and it removed any reluctance or fear to ask an abundance of questions.
  • Pair Lunches: My culture pair scheduled a few pair lunches for me. Having these lunches pre-planned was a great way to get to know my coworkers, without feeling awkward about asking people I didn’t know out to lunch. Now that I have experienced how great pair lunches are, I have no hesitation to ask any of my coworkers out for lunch, even the ones I have barely talked to so far.
  • Atomic Classes: As a new Atom, I had the opportunity to attend a few small internal classes or workshops that shed more light on various aspects of Atomic’s culture, such as how our economics work, and what the entire process of a project looks like—from start to finish.
  • Scheduled Readings: So far, I have burned through two books as a part of my assigned reading. These books have helped me gain further insight on software development and goal-oriented design.
  • Conferences and Workshops: Next month, I will be attending Cooper’s UX Bootcamp with a fellow Atomic designer. This bootcamp was highly recommended by several other Atoms, and I am remarkably excited to attend.

My experience joining Atomic Object has been a positive one. The great amount of time and thought that has gone into planning the onboarding journey has not only helped make this transition smooth for the obvious reasons mentioned above, but it has also made me feel like an Atom since day one.
