[Video] Command Injection: How the Shell Makes You Vulnerable

Most web developers are familiar with SQL injection, an all-too-common web vulnerability. The problem typically arises from assembling SQL queries by concatenating strings, without considering they’re allowing whoever supplies the parameters (typically, a consumer of a web API) to write their own SQL code. But SQL isn’t the only place you can get code injected. SQL injection has a close cousin that’s not nearly as well-known, but it’s just as—if not more—deadly: command injection.

An Introduction to Command Injection

This vulnerability has been popping up quite a bit lately, particularly in smart devices running Linux, as developers find ways to run external programs from inside their software. However, it’s not limited to the Internet of Things; it can appear anywhere.

I introduced this topic in a five-minute lightning talk at a lunch-and-learn held at our neighbor Nutshell’s offices. In this video, I tell you about the risky way most developers end up opening their code to command injection. I also discuss the most important thing you can do to defend against it.
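
The video covers the details, but the core pattern is worth sketching in code. Here is a minimal, hypothetical Java example (the host parameter stands in for any user-supplied value): concatenating input into a string that a shell interprets is the risky pattern, while handing the program its arguments directly keeps the shell out of the picture.

import java.io.IOException;

public class PingRunner {
  // Risky: the concatenated string is handed to a shell, so input like
  // "example.com; rm -rf /" runs a second command.
  static void pingUnsafe(String host) throws IOException {
    Runtime.getRuntime().exec(new String[] {"/bin/sh", "-c", "ping -c 1 " + host});
  }

  // Safer: the user input is passed as a single argument and no shell ever interprets it.
  static void pingSafer(String host) throws IOException {
    new ProcessBuilder("ping", "-c", "1", host).start();
  }
}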

Additional Defenses and Further Reading

Immediately after this talk, I had a really good follow-up discussion with a colleague. He reminded me of one more important thing worth mentioning. For a simple defense against not just all kinds of code injection attacks, but many other kinds of attacks as well, make sure that your software is sandboxed to give it exactly what rights it needs on the system—and no more. This simple configuration, stacked with use of the correct APIs to launch external processes, will help defend your application from this attack—and others.

For further reading on this topic and all kinds of other ways to defend your web application, OWASP is a great resource. I recommend reading their page on command injection in particular. It’s important for all software developers to know about attacks like these and to write code that effectively defends against them.

The post [Video] Command Injection: How the Shell Makes You Vulnerable appeared first on Atomic Spin.

Dropwizard Deep Dive – Part 3: Multitenancy

Hello once again! This is Part 3 of a three-part series on extending Dropwizard to have custom authentication, authorization, and multitenancy. In Part 1, we set up custom authentication in Dropwizard, and in Part 2, we extended that to have role-based authorization. For this final part, we are going to diverge slightly and tackle the related but different concept of multitenancy.

Just to get our definitions straight, I want to explain what “multitenancy” means in this post. It refers to extending our API so that users can only access data belonging to the high-level organization (tenant) they are part of. This problem comes up often when providing software as a service; each customer needs to use the same API, but they should only have access to their own data.

In Parts 1 and 2, you saw that Java and Dropwizard have some built-in annotations that allow you to declaratively add auth to each of your endpoints. In this part, we are going to build a custom solution on top of what we’ve already created to enforce secure multitenancy in our API. To accomplish this, we will leverage several built-in abstractions in Jersey and Hibernate. In fact, pretty much all of the things we are going to do here are not specific to Dropwizard and could be leveraged in any app using Hibernate and Jersey.

Speaking of Hibernate, this tutorial assumes you are using that—and only that—to manage database access. The entire tutorial is based on using the filter feature built into Hibernate. I know Dropwizard has first-class support for JDBI and libraries for other data stores, but if you are using any of those, you are on your own as far as adapting and extending this solution goes.

Adding Tenant Relations to DB

The first step to adding multitenancy is to add the DB relations. We need to provide a TenantModel to represent a company. Each entity restricted to a company (UserModel and WidgetModel) also needs to reference a TenantModel. These relations are easy to add with Hibernate.


@Entity
@Table(name = "Tenant")
public class TenantModel {
  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;
  private String name;
  public Long getId() {
    return id;
  }
  public void setId(Long id) {
    this.id = id;
  }
  public String getName() {
    return name;
  }
  public void setName(String name) {
    this.name = name;
  }
}

@Entity
@Table(name = "Widget")
public class WidgetModel {
  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Integer id;
  private String name;
  @Enumerated(EnumType.STRING)
  private WidgetScope scope;
  @ManyToOne
  @JoinColumn(name = "tenantId")
  private TenantModel tenant;
  ...
}

Basing Multitenancy On Path Params

Now that we have a tenant model and all the necessary associations, we can start up our app again. While reading our data will work as before, you’ll notice that all writes now get an error. We need to associate a tenant model when we write users or widgets, and we aren’t doing that.

In order to associate a tenant, we need to get a tenant ID from somewhere. In our case, that somewhere is going to be the URL path. The path is a nice place because, unlike a body, it is part of every request, and unlike a query param, it is not optional. This makes it easy to send and gives us an automatic 404 if we forget it.

Adding it to our resource paths is easy:


@Path("/tenants/{tenantId}/widgets")
@Produces(MediaType.APPLICATION_JSON)
public class WidgetResource {
  ...

Using Thread Locals to Store Tenant

Now that we have a tenant ID on every request, we need to use it to fetch and store our tenant model so that we can write that back out to all associated entities. Normally, we would use a Jersey filter to do that. In this case, however, we are going to use another Jersey feature called a RequestEventListener. I’ll explain why we use this approach in a little bit.

To implement a RequestEventListener, we first need to implement an ApplicationEventListener. This outer listener will set up our other listener to listen for each incoming request.


@Provider
@Priority(10000) // This needs to be the last listener to run
public class MultitenancyApplicationListener implements ApplicationEventListener {
  private TenantDAO tenantDAO;
  public MultitenancyApplicationListener(TenantDAO tenantDAO) {
    this.tenantDAO = tenantDAO;
  }
  @Override
  public void onEvent(ApplicationEvent event) {}
  @Override
  public RequestEventListener onRequest(RequestEvent requestEvent) {
    return new MultitenancyRequestListener(tenantDAO);
  }
}

With that in place, we can implement our actual RequestEventListener which will do the real work.


public class MultitenancyRequestListener implements RequestEventListener {
  private TenantDAO tenantDAO;
  private Cache<Long, TenantModel> tenants;
  public MultitenancyRequestListener(TenantDAO tenantDAO) {
    this.tenantDAO = tenantDAO;
    tenants = CacheBuilder.newBuilder()
      .maximumSize(1000)
      .expireAfterWrite(1, TimeUnit.HOURS)
      .build();
  }
  @Override
  public void onEvent(RequestEvent event) {
    if (event.getType() == RequestEvent.Type.RESOURCE_METHOD_START) {
      try {
        Long tenantId = Long.valueOf(event.getContainerRequest().getUriInfo().getPathParameters().getFirst("tenantId"));
        TenantModel tenant = tenants.get(tenantId, () -> tenantDAO.getTenant(tenantId).get());
        TenantRequestData.tenant.set(tenant);
      } catch (Exception e) {
        throw new WebApplicationException("Tenant not found", FORBIDDEN);
      }
    }
  }
}

There is a lot going on here, so let’s break it down.

In our onEvent method, we first check whether the event we are being notified of is the RESOURCE_METHOD_START event. There are a number of different events that get fired over the event listener lifecycle, but right now, this is the only one we are interested in. Once we get a RESOURCE_METHOD_START event, we extract the tenantId from the URL path and use it to look up the tenant data. The lookup is wrapped in a cache because it runs on every single request and would otherwise fetch the same tenants over and over. Assuming we load a tenant successfully, we set a global ThreadLocal with the tenant data so we can access it from anywhere in the request.
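
The TenantRequestData holder referenced in the listener isn’t shown in this post. A minimal sketch of it, assuming nothing fancier than a public static ThreadLocal, might look like:

public class TenantRequestData {
  // Holds the tenant for the thread handling the current request. The listener
  // sets it; resource methods read it via TenantRequestData.tenant.get().
  public static final ThreadLocal<TenantModel> tenant = new ThreadLocal<>();
}

In a real application, you would also want to clear this value (for example, on the FINISHED request event) so a tenant never leaks between requests that reuse the same thread.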

Now that we have our tenant in our global ThreadLocal, doing writes is easy. We simply pull the tenant out in any resource methods that need it, and we are guaranteed that we have the right one for the request.


public WidgetModel createWidget(Widget widget) {
    WidgetModel widgetModel = new WidgetModel();
    widgetModel.setName(widget.getName());
    widgetModel.setScope(widget.getScope());
    widgetModel.setTenant(TenantRequestData.tenant.get());
    return persist(widgetModel);
  }

Using Hibernate Filters to Enforce Multitenancy in DB Reads

Now that we can write entities with the correct tenant, we need to handle reads as well.

We’ll use Hibernate filters as the basis of our read multitenancy solution. Filters allow us to inject restrictions into the “where” clause of any query on a particular object in a particular session. Since we already added a tenant relation to the WidgetModel (recall that a tenant in this case represents a company), it is easy to add such a filter.


@Entity
@Table(name = "Widget")
@FilterDef(
  name = "restrictToTenant",
  defaultCondition = "tenantId = :tenantId",
  parameters = @ParamDef(name = "tenantId", type = "long")
)
@Filter(name = "restrictToTenant")
public class WidgetModel {
  ...
}

We will also want to add a tenant to the UserModel, which means it is easiest to move the @FilterDef annotation to a package-info.java file. Combined with the default condition, this package-level filter def is now easy to add to any entity.


@FilterDef(
  name = "restrictToTenant",
  defaultCondition = "tenantId = :tenantId",
  parameters = @ParamDef(name = "tenantId", type = "long")
)
package dao.entities;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

We’ll also have to configure the Hibernate bundle in our app class to load our whole package and pick up the package-level filters.


private final ScanningHibernateBundle<ExampleConfig> hibernate = new ScanningHibernateBundle<ExampleConfig>("dao.entities") {
    @Override
    protected void configure(Configuration configuration) {
      // Register package so global filters in package-info.java get seen.
      configuration.addPackage("dao.entities");
      super.configure(configuration);
    }
    @Override
    public PooledDataSourceFactory getDataSourceFactory(ExampleConfig config) {
      return config.getDatabaseConfig();
    }
  };

Now we have some Hibernate filters on all the appropriate entities, but if you run the app, you will see nothing has changed. This is because Hibernate filters are disabled by default and need to be manually enabled for every individual Hibernate session.

Since we are using Dropwizard’s @UnitOfWork annotation, we already have a separate session that gets set up for each request. So we need to add some code that will run before each request, after the unit of work annotation, to enable our multitenancy filter. Fortunately, the RequestEventListener we set up earlier meets all of these criteria. In particular, we can use the @Priority annotation to guarantee it runs after @UnitOfWork has set up a database session, which is why we used an event listener instead of the simpler filter.

Since we already have a tenant ID in our listener, it is easy to amend it to enable our Hibernate filter.


public class MultitenancyRequestListener implements RequestEventListener {
  private TenantDAO tenantDAO;
  private SessionFactory sessionFactory;
  private Cache<Long, TenantModel> tenants;
  public MultitenancyRequestListener(TenantDAO tenantDAO, SessionFactory sessionFactory) {
    this.tenantDAO = tenantDAO;
    this.sessionFactory = sessionFactory;
    tenants = CacheBuilder.newBuilder()
      .maximumSize(1000)
      .expireAfterWrite(1, TimeUnit.HOURS)
      .build();
  }
  @Override
  public void onEvent(RequestEvent event) {
    if (event.getType() == RequestEvent.Type.RESOURCE_METHOD_START) {
      try {
        Long tenantId = Long.valueOf(event.getContainerRequest().getUriInfo().getPathParameters().getFirst("tenantId"));
        TenantModel tenant = tenants.get(tenantId, () -> tenantDAO.getTenant(tenantId).get());
        TenantRequestData.tenant.set(tenant);
        sessionFactory.getCurrentSession().enableFilter("restrictToTenant").setParameter("tenantId", tenantId);
      } catch (Exception e) {
        throw new WebApplicationException("Tenant not found", FORBIDDEN);
      }
    }
  }
}

And just like that, we have read multitenancy. You’ll see that if you hit the API endpoints with different tenant IDs now, you will get different responses.
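
One wiring step not shown above is registering the listener with Jersey. A minimal sketch in the application’s run method, assuming MultitenancyApplicationListener is widened to accept the SessionFactory that its request listener now needs, might look like this:

// In the Dropwizard application's run method, after the hibernate bundle has been initialized.
// Assumes TenantDAO takes a SessionFactory and the listener constructor is widened as described.
TenantDAO tenantDAO = new TenantDAO(hibernate.getSessionFactory());
environment.jersey().register(
    new MultitenancyApplicationListener(tenantDAO, hibernate.getSessionFactory()));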

Locking Down Between Tenants

Our multitenancy is working great, but we now have a problem: The roles we added in the last post are no longer suitable. If you browse the API, you’ll see that a MANAGER user of Tenant 1 can browse the top-secret widgets of Tenant 2 without a problem. This is because our roles themselves have no concept of different tenants. We need to lock this down so that our tenants are isolated and only accessible to their own users.

Fortunately, doing this will be pretty easy. We have a tenant ID in the URL and another attached to the user associated with the request, so we simply need to compare the two when doing role checks. This is really easy to add to our security context.


public class CustomSecurityContext implements SecurityContext {
  private final CustomAuthUser principal;
  private final Long tenantId;
  private final SecurityContext securityContext;
  public CustomSecurityContext(CustomAuthUser principal, Long tenantId, SecurityContext securityContext) {
    this.principal = principal;
    this.tenantId = tenantId;
    this.securityContext = securityContext;
  }
  ...
  @Override
  public boolean isUserInRole(String role) {
    return principal.getTenantId().equals(tenantId) && role.equals(principal.getRole().name());
  }

Then, we just need to provide the security context with the tenant ID from the path in CustomAuthFilter:


  @Override
  public void filter(ContainerRequestContext requestContext) throws IOException {
    Optional<CustomAuthUser> authenticatedUser;
    try {
      CustomCredentials credentials = getCredentials(requestContext);
      authenticatedUser = authenticator.authenticate(credentials);
    } catch (AuthenticationException e) {
      throw new WebApplicationException("Unable to validate credentials", Response.Status.UNAUTHORIZED);
    }
    if (authenticatedUser.isPresent()) {
      // Provide tenant ID from path to security context
      Long tenantId = parseTenantId(requestContext);
      SecurityContext securityContext = new CustomSecurityContext(authenticatedUser.get(), tenantId, requestContext.getSecurityContext());
      requestContext.setSecurityContext(securityContext);
    } else {
      throw new WebApplicationException("Credentials not valid", Response.Status.UNAUTHORIZED);
    }
  }
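
The parseTenantId helper called above isn’t shown in the post. A minimal sketch, assuming the /tenants/{tenantId}/... path layout from earlier, might be (reading the raw path segments keeps it working whether or not the filter runs before URI matching):

  // Requires javax.ws.rs.core.PathSegment and java.util.List imports.
  private Long parseTenantId(ContainerRequestContext requestContext) {
    // Hypothetical helper: find the segment after "tenants" in the request path.
    // Returning null on a malformed path makes the tenant check in isUserInRole fail safely.
    List<PathSegment> segments = requestContext.getUriInfo().getPathSegments();
    try {
      for (int i = 0; i < segments.size() - 1; i++) {
        if ("tenants".equals(segments.get(i).getPath())) {
          return Long.valueOf(segments.get(i + 1).getPath());
        }
      }
    } catch (NumberFormatException e) {
      // fall through and return null
    }
    return null;
  }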

And pass the user’s tenant ID along to the authenticated user in CustomAuthenticator:


  @Override
  @UnitOfWork
  public Optional<CustomAuthUser> authenticate(CustomCredentials credentials) throws AuthenticationException {
    CustomAuthUser authenticatedUser = null;
    Optional<UserModel> user = userDAO.getUser(credentials.getUserId());
    if (user.isPresent()) {
      UserModel userModel = user.get();
      Optional<TokenModel> token = tokenDAO.findTokenForUser(userModel);
      if (token.isPresent()) {
        TokenModel tokenModel = token.get();
        if (tokenModel.getId().equals(credentials.getToken())) {
          // Pass the user's tenant ID
          authenticatedUser = new CustomAuthUser(userModel.getName(), userModel.getTenant().getId(), userModel.getRole());
        }
      }
    }
    return Optional.fromNullable(authenticatedUser);
  }

We now have secure, isolated multitenancy for both reads and writes in our API, just by having a logged-in user and a path parameter.

If you’ve stuck around for this entire thing (or even just this part), thanks a lot! I hope this guide is helpful. You can see the code for the entire series here.

The post Dropwizard Deep Dive – Part 3: Multitenancy appeared first on Atomic Spin.

Dropwizard Deep Dive – Part 2: Authorization

Welcome back! This is Part 2 of a three-part series on extending Dropwizard to have custom authentication, authorization, and multitenancy. In Part 1, we set up custom authentication. When we left off, we had just used the Java annotations @RolesAllowed and @PermitAll to authenticate our resource methods, so they will only run for credentialed users. In this part, we’ll cover Dropwizard authorization. We are going to extend the code we added to check the role assigned to a user and further restrict our methods based on whether that matches.

We can turn role-checking on by enabling another dynamic feature within Jersey. In order for it to work, we just need to set up a SecurityContext object that can tell if a given role applies and set that security context on each incoming request. Most of the code and techniques here are actually a core part of JAX-RS and can be used entirely outside of Dropwizard. All of the example code I’m going to show in my series lives in this repo if you want to follow along.

Enabling Role-Checking

To make Jersey check role annotations before each request, we need to enable the RolesAllowedDynamicFeature, which is a core part of Jersey, not Dropwizard. We can enable it in our app like so:

environment.jersey().register(RolesAllowedDynamicFeature.class);

If you just activate that, you’ll notice that you can no longer use any of the endpoints annotated with @RolesAllowed (though those with @PermitAll still work). These endpoints will return a 403, because they have no way to validate their set of roles against the logged-in user. Fixing that is our next step.

Custom Security Context

A SecurityContext is a core JAX-RS object that is attached to a request context for the purposes of validating security. We are going to modify our auth filter to attach a security context to each authenticated request. The security context we attach will have logic to check the roles in the @RolesAllowed annotation against the authenticated user (to which we will also add a role field).

First off, we need to create a custom subclass of SecurityContext that can check our roles:

public class CustomSecurityContext implements SecurityContext {
  private final CustomAuthUser principal;
  private final SecurityContext securityContext;
  public CustomSecurityContext(CustomAuthUser principal, SecurityContext securityContext) {
    this.principal = principal;
    this.securityContext = securityContext;
  }
  @Override
  public Principal getUserPrincipal() {
    return principal;
  }
  @Override
  public boolean isUserInRole(String role) {
    return role.equals(principal.getRole().name());
  }
  @Override
  public boolean isSecure() {
    return securityContext.isSecure();
  }
  @Override
  public String getAuthenticationScheme() {
    return "CUSTOM_TOKEN";
  }
}

The most important part of this is the isUserInRole method, which drives our Dropwizard authorization code. It will be called once for each role we define in our @RolesAllowed annotation, and if it returns “true” for any of them, we are authorized to use the method.

Now we need to update our auth filter method to attach a security context whenever we authenticate a user. We don’t have to do anything when there is no authenticated user, because the default security context has no user attached and will fail authorization checks. We also need to make sure to set the @Priority of our auth filter to Priorities.AUTHENTICATION so this code will run before any other filters that depend on authentication.

@PreMatching
@Priority(Priorities.AUTHENTICATION)
public class CustomAuthFilter extends AuthFilter<CustomCredentials, CustomAuthUser> {
  private CustomAuthenticator authenticator;
  public CustomAuthFilter(CustomAuthenticator authenticator) {
    this.authenticator = authenticator;
  }
  @Override
  public void filter(ContainerRequestContext requestContext) throws IOException {
    Optional<CustomAuthUser> authenticatedUser;
    try {
      CustomCredentials credentials = getCredentials(requestContext);
      authenticatedUser = authenticator.authenticate(credentials);
    } catch (AuthenticationException e) {
      throw new WebApplicationException("Unable to validate credentials", Response.Status.UNAUTHORIZED);
    }
    if (authenticatedUser.isPresent()) {
      SecurityContext securityContext = new CustomSecurityContext(authenticatedUser.get(), requestContext.getSecurityContext());
      requestContext.setSecurityContext(securityContext);
    } else {
      throw new WebApplicationException("Credentials not valid", Response.Status.UNAUTHORIZED);
    }
  }
  ...
}

Dropwizard Authorization Complete

With the RolesAllowedDynamicFeature enabled and our security context attached to authenticated requests, we now have role-based authorization on every incoming request. If you’ve been following Parts 1 and 2 of this post, you’ll see that we have both Dropwizard authentication and Dropwizard authorization for our API. This is probably enough for many apps, but in Part 3, I’ll show you how you can also add multitenancy to a Dropwizard application using a similar annotation-based approach.

You can see the code for just what we’ve done for Parts 1 and 2 here and the complete code for all three parts here.

The post Dropwizard Deep Dive – Part 2: Authorization appeared first on Atomic Spin.

Dropwizard Deep Dive – Part 1: Custom Authentication

This is Part 1 of a three-part series on extending Dropwizard with custom authentication, authorization, and multitenancy. For Part 1, we are going to go over adding custom authentication to Dropwizard.

If you don’t already know, Dropwizard is an awesome Java web API framework. It is my preferred web stack. I’ve written about it previously in: Serving Static Assets with DropWizard, Using Hibernate DAOs in DropWizard Tasks, and Hooking up Custom Jersey Servlets in Dropwizard (note that some of those posts are out-of-date for newer versions of Dropwizard).

It already comes with out-of-the-box support for HTTP basic authentication and OAuth in the dropwizard-auth package. However, in a recent project, I needed to integrate a Dropwizard API with an existing API from another platform using a custom authentication scheme. Fortunately for me, Dropwizard exposes a set of extensible authentication primitives that I was able to leverage in my solution. All of the example code I’m going to share in my posts lives in this repo, if you want to follow along.

Disclaimers

Before we begin, I want to share a couple of disclaimers:
1. This was written for Dropwizard 0.9.x (the current version as of this writing). Future changes to Dropwizard may alter or invalidate details of this post. Be aware if you are using a later version.
2. This post uses Hibernate for the database integration. While much of what we do here is possible with JDBI or other database integrations, I am not going to cover or discuss those. I will likely not be able to answer questions about them.

Now let’s get to work.

Adding Auth Annotations

Dropwizard allows us to use the role annotations from the javax.annotation.security package (@RolesAllowed, @PermitAll, and @DenyAll) to enforce authentication on our resource methods. You can add them to each method to set the permissions like so:


@POST
@UnitOfWork
@RolesAllowed({"MANAGER"})
public Widget createWidget(Widget widget) {
  WidgetModel widgetModel = widgetDAO.createWidget(widget);
  return new Widget(widgetModel.getId(), widgetModel.getName(), widgetModel.getScope());
}
@GET
@Path("/public")
@UnitOfWork
@PermitAll
public List<Widget> getPublicWidgets() {
  return getWidgetsForScope(WidgetScope.PUBLIC);
}
@GET
@Path("/private")
@UnitOfWork
@RolesAllowed({"EMPLOYEE", "MANAGER"})
public List<Widget> getPrivateWidgets() {
  return getWidgetsForScope(WidgetScope.PRIVATE);
}
@GET
@Path("/top-secret")
@UnitOfWork
@RolesAllowed({"MANAGER"})
public List<Widget> getTopSecretWidgets() {
  return getWidgetsForScope(WidgetScope.TOP_SECRET);
}

In the above example, you can see that only users with the MANAGER role are allowed to create new widgets or view top-secret widgets. Users with the EMPLOYEE or MANAGER role can see private widgets, and anyone can see public widgets. It is important to note that if we don’t put any of these annotations on a resource method, it will be open to anyone by default. This is different from the @PermitAll annotation, which still authenticates a user and just disregards what roles they have. To protect against this, I usually use reflection to write a test like the one here that ensures every resource method has one of these annotations.
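
That test isn’t reproduced here, but the idea is straightforward. A rough, hypothetical sketch of it, assuming JUnit 4 and a hand-maintained list of resource classes (the list below is a placeholder), might look like this:

import static org.junit.Assert.assertTrue;

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import javax.annotation.security.DenyAll;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import org.junit.Test;

public class ResourceAuthAnnotationTest {
  // Placeholder list -- add every resource class in the application here.
  private static final List<Class<?>> RESOURCES = Arrays.<Class<?>>asList(WidgetResource.class);

  @Test
  public void everyResourceMethodHasAnAuthAnnotation() {
    for (Class<?> resource : RESOURCES) {
      for (Method method : resource.getDeclaredMethods()) {
        // Only HTTP endpoint methods need an auth annotation.
        boolean isEndpoint = method.isAnnotationPresent(GET.class)
            || method.isAnnotationPresent(POST.class)
            || method.isAnnotationPresent(PUT.class)
            || method.isAnnotationPresent(DELETE.class);
        boolean hasAuth = method.isAnnotationPresent(RolesAllowed.class)
            || method.isAnnotationPresent(PermitAll.class)
            || method.isAnnotationPresent(DenyAll.class);
        if (isEndpoint) {
          assertTrue(method + " is missing an auth annotation", hasAuth);
        }
      }
    }
  }
}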

If you just add these roles and start up your application, you are going to be disappointed. These annotations don’t have any handler by default, so we are going to need to add an AuthFilter to Dropwizard to make them do something.

Adding an AuthFilter

In order to make our annotations do something, we need to use the tools in the dropwizard-auth package (which you’ll need to add to your dependency list). If we were just using HTTP basic auth or OAuth, we could use the out-of-the-box tools for those and hook them up to a role schema. However, since we are hooking into a custom authentication scheme, we are going to have to create our own stuff using the package primitives.

The first thing we need to create is a Jersey filter that will run before each request and execute our authentication code. Dropwizard provides a convenient base class called AuthFilter that will do the job. A bare-bones AuthFilter is parameterized on a credentials type and a security principal type and would look like this:


@PreMatching
@Priority(Priorities.AUTHENTICATION)
public class CustomAuthFilter extends AuthFilter<CustomCredentials, CustomAuthUser> {
  @Override
  public void filter(ContainerRequestContext requestContext) throws IOException {
    throw new WebApplicationException(Response.Status.UNAUTHORIZED);
  }
}

We can register the filter inside our Dropwizard application’s run method using the AuthDynamicFeature like so:


CustomAuthFilter filter = new CustomAuthFilter();
environment.jersey().register(new AuthDynamicFeature(filter));

Now our filter will run before each request annotated with @RolesAllowed, @PermitAll, or @DenyAll to authenticate the user. Right now, though, our filter just rejects every request with a 401 status code. The next thing we need to do is add an Authenticator which will actually run the logic of authenticating a user’s credentials.

Adding an Authenticator

Our authenticator exposes a single method, authenticate, which takes a CustomCredentials as an argument. The authenticator then uses the userId and token in the credentials to authenticate the user against the token stored in our database for that user. If the token matches, we return the user wrapped as an optional—otherwise, we return an empty optional. Also note that since we are using Hibernate, we need to new up our authenticator using UnitOfWorkAwareProxyFactory like so:


CustomAuthenticator authenticator = new UnitOfWorkAwareProxyFactory(hibernate)
      .create(CustomAuthenticator.class, new Class[]{TokenDAO.class, UserDAO.class}, new Object[]{tokenDAO, userDAO});

And our whole authenticator looks like this:


public class CustomAuthenticator implements Authenticator<CustomCredentials, CustomAuthUser> {
  private TokenDAO tokenDAO;
  private UserDAO userDAO;
  public CustomAuthenticator(TokenDAO tokenDAO, UserDAO userDAO) {
    this.tokenDAO = tokenDAO;
    this.userDAO = userDAO;
  }
  @Override
  @UnitOfWork
  public Optional<CustomAuthUser> authenticate(CustomCredentials credentials) throws AuthenticationException {
    CustomAuthUser authenticatedUser = null;
    Optional<UserModel> user = userDAO.getUser(credentials.getUserId());
    if (user.isPresent()) {
      Optional<TokenModel> token = tokenDAO.findTokenForUser(user.get());
    
      if (token.isPresent()) {
        TokenModel tokenModel = token.get();
    
        if (tokenModel.getId().equals(credentials.getToken())) {
          authenticatedUser = new CustomAuthUser(tokenModel.getUser().getId(), tokenModel.getUser().getName());
        }
      }
    }
    
    return Optional.fromNullable(authenticatedUser);
  }
}

Now we just need to hook up our CustomAuthFilter to our CustomAuthenticator by calling the authenticate method with some credentials. We’ll create the credentials in our auth filter by parsing our request context. (In our case, this means pulling the credentials out of cookies.)


public class CustomAuthFilter extends AuthFilter<CustomCredentials, CustomAuthUser> {
  private CustomAuthenticator authenticator;
  public CustomAuthFilter(CustomAuthenticator authenticator) {
    this.authenticator = authenticator;
  }
  @Override
  public void filter(ContainerRequestContext requestContext) throws IOException {
    Optional<CustomAuthUser> authenticatedUser;
    try {
      CustomCredentials credentials = getCredentials(requestContext);
      authenticatedUser = authenticator.authenticate(credentials);
    } catch (AuthenticationException e) {
      throw new WebApplicationException("Unable to validate credentials", Response.Status.UNAUTHORIZED);
    }
    
    if (!authenticatedUser.isPresent()) {
      throw new WebApplicationException("Credentials not valid", Response.Status.UNAUTHORIZED);
    }
  }
  private CustomCredentials getCredentials(ContainerRequestContext requestContext) {
    CustomCredentials credentials = new CustomCredentials();
    try {
      String rawToken = requestContext
        .getCookies()
        .get("auth_token")
        .getValue();
    
      String rawUserId = requestContext
        .getCookies()
        .get("auth_user")
        .getValue();
    
      credentials.setToken(UUID.fromString(rawToken));
      credentials.setUserId(Long.valueOf(rawUserId));
    } catch (Exception e) {
      throw new WebApplicationException("Unable to parse credentials", Response.Status.UNAUTHORIZED);
    }
    return credentials;
  }
}

Notice that we parse the credentials out of the request cookies and create a CustomCredentials instance. We then pass those credentials to the CustomAuthenticator in the filter method. If the authenticator returns a user, we are properly authenticated. Otherwise, we abort the request with a 401 error.
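
For reference, the CustomCredentials class itself isn’t shown in the post; everything the filter and authenticator need from it is just a user ID and a token, so a minimal sketch could be:

import java.util.UUID;

public class CustomCredentials {
  // Parsed from the auth_user and auth_token cookies by CustomAuthFilter.
  private Long userId;
  private UUID token;

  public Long getUserId() { return userId; }
  public void setUserId(Long userId) { this.userId = userId; }
  public UUID getToken() { return token; }
  public void setToken(UUID token) { this.token = token; }
}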

Custom Authentication Complete

And with that, we now have custom authentication on all of our annotated resource methods. None of them can be successfully called without a valid userId and token combination in the request cookies.

But what about those roles that we listed in the method with the @RolesAllowed annotation? Right now, any of the methods are open to any authenticated user. To check the roles on the user requires a little more work on our part, which we will cover in Part 2.

You can see the code for Part 1 here and the complete code for all three parts here.

The post Dropwizard Deep Dive – Part 1: Custom Authentication appeared first on Atomic Spin.

Uploading Files in Rails Using Paperclip and Active Admin

I recently came across a situation where I needed to be able to upload a file to a Rails server with Active Admin. I did a quick search on Google and found this post by Job, a fellow Atom.

Our use cases were a little bit different, though. He was storing the file contents directly in the database, whereas I needed to be able to upload a firmware image file to the server’s filesystem, parse the file name, and perform some validations on the file. I decided to use the Paperclip gem to manage the file processing and storage. Using Job’s advice on Active Admin file uploads, I expanded his example to incorporate Paperclip.

What is Paperclip?

Paperclip, as its name suggests, is a file attachment library for ActiveRecord. It is designed to treat files much like other attributes, and it provides a whole slew of built-in extensions (e.g. validations, pre/post processing callback hooks, security enhancements, etc.). The setup is very simple, and getting the basics up and running takes only a few minutes. For the sake of this example, we will set up the ActiveRecord model, create a migration, and get rolling with the Active Admin file upload.

Install the Paperclip Gem

Installing the gem is easy. Just add

gem 'paperclip'

to your Gemfile and run

bundle install

Create a Migration

If you have an existing model where you want to add Paperclip support, then you simply need to create a migration. In this case, I already had a table and a corresponding model for my firmware image, so I just needed to add a migration to add the columns that Paperclip requires. The easiest way to do this is with the Rails migration generator:

rails generate paperclip firmware image

This will automatically create a migration file that looks like this:


class AddImageColumnsToFirmware < ActiveRecord::Migration
  def up
    add_attachment :firmware, :image
  end
  def down
    remove_attachment :firmware, :image
  end
end

The add_attachment helper will automatically create the following columns in your table:

  • image_file_name
  • image_content_type
  • image_file_size
  • image_updated_at

Technically, only the *_file_name column is required for Paperclip to operate, so you could throw away the others if you don’t need them.

Paperclip Your Models

The true beauty of Paperclip is how well it integrates with ActiveModel. To add Paperclip support to your model, start by adding the following line to your model class:


class Firmware < ActiveRecord::Base
  has_attached_file :image
  # more stuff to come
end

With that one line, Paperclip is now integrated with your Rails model, and it will automatically handle your file “attachments” just like any other Rails attribute! There are, of course, plenty of other options that you can add to the has_attached_file attribute (e.g. specifying a file path, style type, etc.), but I won’t go into that right now. For our purposes, the defaults should be just fine.

Validations!

Paperclip also makes it really easy to perform powerful validations on the file. If any validation fails, the file will not be saved to disk, and the save will be rolled back, just like any other ActiveRecord validation. There are built-in helpers to validate attachment presence, size, content type, and file name.

In our case, we really just need to validate the file name to ensure that it has the proper format and that it is unique. The following validation did the trick:


validates_attachment_file_name :image, :matches => [/_\d+_\d+_\d+\.bin$/]
validates_uniqueness_of :image_file_name # this is a standard ActiveRecord validator

Additional Processing

I also wanted to be able to grab the firmware version out of the file name. The best way to do this is with a before_post_process callback in the model, like this:


before_post_process :parse_file_name
def parse_file_name
  version_match = /_(?<major>\d+)_(?<minor>\d+)_(?<patch>\d+)\.bin$/.match(image_file_name)
  if version_match.present? and version_match[:major] and version_match[:minor] and version_match[:patch]
    self.major_version = version_match[:major]
    self.minor_version = version_match[:minor]
    self.patch_version = version_match[:patch]
  end
end
  

Before the file is saved, but after the filename is validated (so we can be sure it has the proper formatting), we extract the major, minor, and patch numbers from the filename and save them in our database.

Configure Active Admin for File Upload

Now that Paperclip is all set up and wired into our Rails model, we need to actually set up the file upload piece in Active Admin. I won’t go into much detail, since I relied on Job’s post as a reference. Basically, all we need to do is:

  1. Define the Index page contents

    Enumerate which columns will be displayed. If any special decoration is required, such as a customized column title, sort properties, or row formatting, it can be specified easily. For example, I wanted a link to download the firmware image, so I added a link_to in the “Image” column. Note that the file path is stored in the image.url attribute.

  2. Specify which parameters may be changed

    Use the permit_params method to whitelist any attributes.

  3. Create the upload form

    Use the f.input :image, as: :file syntax to automatically create a file upload field in Active Admin.

The code snippet below is the Active Admin page, which allows the user to create, view, edit, and delete a firmware image.


ActiveAdmin.register Firmware do
  permit_params :image
  
  index do
    selectable_column
    id_column
    column 'Image', sortable: :image_file_name do |firmware| link_to firmware.image_file_name, firmware.image.url end
    column :image_file_size, sortable: :image_file_size do |firmware| "#{firmware.image_file_size / 1024} KB" end
    column "Version" do |firmware| "#{firmware.major_version}.#{firmware.minor_version}.#{firmware.patch_version}" end
    column :created_at
    actions
  end
  form do |f|
    f.inputs "Upload" do
      f.input :image, required: true, as: :file
    end
    f.actions
  end
end

And that’s it! We now have a fully functional file-upload implementation using Active Admin backed by Paperclip. It’s really quite simple, and it only took a few minutes to get it set up and running.

The post Uploading Files in Rails Using Paperclip and Active Admin appeared first on Atomic Spin.

Super Fast Numeric Input with HTML Ranges – Part 3

In Parts 1 and 2, I showed you how we structured and styled a decimal picker for mobile devices. In this final part, we’ll set up a basic Ember.js app to showcase the control and then wire up its components.

We’ll begin by starting a fresh Ember project (if you’ve never used Ember, check out the excellent Quick Start Guide) by running the following in your shell of choice.


ember new sample
cd sample
npm install ember-cli-version-checker
ember install ember-gestures
ember install ember-cli-sass
ember generate template application
ember generate component decimal-picker
ember generate component fader-control
touch app/styles/app.scss
ember server

Point a web browser at http://localhost:4200, and you should see a blank white page. Let’s fill in the pieces that we built in the last two posts (with a few modifications). Since we’re working with Ember, we can split them up into reusable components. Each component will need its own template (we’re using Handlebars, but you could use almost any templating language) and a script.

We’ll start with the faders.


<div class=digit>
  <div class=fader>
    {{input type="range" min=min max=max value=value class=(concat "range-value-" value)}}
    <div class="min-value">{{min}}</div>
    <div class="max-value">{{max}}</div>
  </div>
  <div class=display>
    <span class=value>{{value}}</span>
    {{#if showDecimal}}
        <span class=decimal>.</span>
    {{/if}}
  </div>
</div>

This template is just our <div class=digit> from before, but with a few modifications. We replaced the <input> element from Part 1 with an Ember input helper and replaced each manual value we’d typed in with a data binding ({{min}}, {{max}}, {{value}}, etc.). We also wired up the input’s class so that it updates dynamically any time the control’s value changes. This will engage the .range-value-#{$i} classes we set up last time. Onward to the script!

When we designed the control, we wanted users to be able to input numbers quickly and without consciously thinking about the control. This meant the faders had to be both draggable and tappable. On Android, we were in luck. Android’s stock browser (like most desktop browsers) provides tap and drag behavior by default. But of course, we weren’t targeting just Android.

Enter mobile Safari. For some reason, mobile Safari provides the drag behavior, but only if you start your tap exactly on the thumb. If you tap anywhere else on the track, the control blinks to let you know that it registered your tap, but doesn’t move the thumb at all. This was less than ideal for a thumb-friendly UI, so we rolled up our sleeves and built a custom event handler to provide the behavior we were looking for across all browsers.

We used the Ember Gestures plugin to get easy access to touchStart and touchMove events. Any time the user taps or drags within the fader’s bounding box, we calculate the nearest available notch and snap the thumb to that spot. Thanks to Ember’s two-way data bindings, setting the fader’s value also updates the display and switches to the appropriate track background.


import Ember from 'ember';
import RecognizerMixin from 'ember-gestures/mixins/recognizers';
export default Ember.Component.extend(RecognizerMixin, {
  touchStart(e) {
    this.handleTouchEvents(e);
  },
  touchMove(e) {
    this.handleTouchEvents(e);
  },
  handleTouchEvents: function(e) {
    if (e.target.nodeName === "INPUT") {
      var boundingRect = e.target.getBoundingClientRect();
      var touchPointVertical = e.originalEvent.changedTouches[0].pageY;
      if (touchPointVertical <= boundingRect.bottom && touchPointVertical >= boundingRect.top) {
        var inputHeight = Math.ceil(boundingRect.bottom - boundingRect.top);
        var notches = e.target.max - e.target.min;
        var notchSize = Math.ceil(inputHeight / notches);
        var distanceFromTopOfInput = touchPointVertical - Math.ceil(boundingRect.top);
        var notchesFromTop = distanceFromTopOfInput / notchSize;
        var notchesFromBottom = Math.round(notches - notchesFromTop);
        if (notchesFromBottom > e.target.max) {
          notchesFromBottom = e.target.max;
        }
        if (notchesFromBottom < e.target.min) {
          notchesFromBottom = e.target.min;
        }
        this.set('value', notchesFromBottom);
        e.preventDefault();
      }
    }
  }
});

Now that we have our fader component set up, it’s time to put a few of them together into the broader decimal picker. We’ll do that with a decimal-picker component. Since we only needed three digits for this particular control, we spelled out the names of the units, tenths, and hundredths values, but you could easily make this component generic enough to support any number of decimal places.


<div class="decimal-picker">
  {{fader-control value=units
                  min=min.units
                  max=max.units
                  showDecimal=true}}
  {{fader-control value=tenths
                  min=min.tenths
                  max=max.tenths}}
  {{fader-control value=hundredths
                  min=min.hundredths
                  max=max.hundredths}}
</div>
<div class="buttons">
  <button class="done" {{action 'done'}}>DONE</button>
  <button class="clear" {{action 'clear'}}>CLEAR</button>
</div>

Here we include three fader-control components with appropriate data bindings and wire up our buttons to trigger actions that we’ll define in our component’s script.


import Ember from 'ember';
import RecognizerMixin from 'ember-gestures/mixins/recognizers';
export default Ember.Component.extend(RecognizerMixin, {
  // Since these properties are passed in when we use our component, we don't
  // technically need them here. I like to include them so that when I'm looking
  // at a component's script by itself, I know which properties I can use
  // without having to dig around in the template.
  units: 0,
  tenths: 0,
  hundredths: 0,
  faders: 3,
  minimum: 0.00,
  min: Ember.computed('minimum', function() {
    return this.decimalToDigits(this.get('minimum'));
  }),
  maximum: 2.99,
  max: Ember.computed('maximum', function() {
    return this.decimalToDigits(this.get('maximum'));
  }),
  combinedValue: Ember.computed('units', 'tenths', 'hundredths', 'faders', function() {
    let units = parseInt(this.get('units'));
    let tenths = parseInt(this.get('tenths')) / 10;
    let hundredths = parseInt(this.get('hundredths')) / 100;
    let denominator = 100;
    let val = units + tenths + hundredths;
    return Math.round(val * denominator) / denominator;
  }),
  init: function() {
    this._super();
    this.setProperties(this.decimalToDigits(this.get('initialValue')));
  },
  touchMove(e) {
    // Since we're handling touch events ourselves in the fader control, we don't want
    // to process touchMove events at this level.
    e.preventDefault();
  },
  decimalToDigits: function(decimal) {
    let decimalValue = Number(decimal).toFixed(this.get('faders') - 1);  // A string like "2.99"
    return {
      units: decimalValue[0],
      tenths: decimalValue[2],
      hundredths: decimalValue[3]
    };
  },
  actions: {
    done: function() {
      alert(this.get('combinedValue'));
    },
    clear: function() {
      this.setProperties(this.decimalToDigits(this.get('initialValue')));
    },
  }
});

Finally, we’ll use our component in application.hbs


{{decimal-picker faders=3
                 initialValue=1.50
                 minimum=0.00
                 maximum=2.99}}

…and copy our finished stylesheet from last time into app.scss.


* {
  box-sizing: border-box;  // always
}
html, body {
  background-color: #ccc;
  font-size: 8px;
}
input[type="range"] {
  background: transparent;
  box-shadow: none;
  border-style: none;
  margin: 0;
  padding: 0;
  &,
  &::-webkit-slider-runnable-track,
  &::-webkit-slider-thumb {
    -webkit-appearance: none;
  }
}
$fader-width: 50vh;
$fader-height: 25vw;
$gutter: 5vw;
$thumb-width: $fader-width * 0.1;
input[type="range"] {
  width: $fader-width;
  &::-webkit-slider-thumb {
    position: relative;
    border-style: none;
    width: $thumb-width;
    z-index: 2;
    height: $fader-height * 1.1;
    margin-top: $fader-height * -0.05;
    border-radius: 4px;
    background: white;
    box-shadow: -1px 1px 2.5px #888;
  }
  &::-webkit-slider-runnable-track {
    position: relative;
    border-style: none;
    width: $fader-width;
    height: $fader-height;
    border-radius: 4px;
  }
}
@mixin fader-backgrounds($width, $min, $max) {
  $unit-width: $width / ($max - $min);
  @for $i from $min through $max {
    &.range-value-#{$i}::-webkit-slider-runnable-track {
      background-size: ($i - $min) * $unit-width 100%;
    }
  }
}
input[type="range"]::-webkit-slider-runnable-track {
  $fill: linear-gradient(green, green) no-repeat;
  $background: #aaa;
  background: $fill, $background;
}
input[type="range"][min="0"][max="9"] {
  @include fader-backgrounds($width: $fader-width, $min: 0, $max: 9);
}
input[type="range"][min="0"][max="2"] {
  @include fader-backgrounds($width: $fader-width, $min: 0, $max: 2);
}
$button-height: 88px;
$bottom-margin: 38px;
.decimal-picker {
  $container-height: 100vw;
  $faders: 3;
  bottom: ($container-height / -2);
  left: ($container-height / 2);
  height: $container-height;
  padding-top: $gutter/2;
  position: absolute;
  margin: 0;
  margin-top: 4rem;
  margin-bottom: $button-height + $bottom-margin;
  transform: rotate(270deg);
  transform-origin: 0 50%;
  width: 75vh;
  -webkit-touch-callout:none;
  -webkit-user-select:none;
  user-select:none;
  -webkit-tap-highlight-color:rgba(0,0,0,0);
}
.decimal {
  position: absolute;
  right: -1.5rem;
  font-family: serif;
}
.fader {
  position: relative;
  height: $fader-height;
  width: $fader-width;
  float: left;
}
.digit {
  float: left;
  margin-top: $gutter;
}
.display {
  height: $fader-height;
  width: $fader-height;
  margin-left: $gutter/2;
  line-height: $fader-height;
  font-size: $fader-height - 2vw;
  transform: rotate(90deg);
  text-align: center;
  float: left;
  font-weight: bold;
  font-family: sans-serif;
}
.max-value, .min-value {
  position: absolute;
  transform: rotate(90deg);
  -webkit-transform: rotate(90deg);
  font-family: sans-serif;
  font-size: 2em;
  bottom: -0.125em;
  z-index: 1;
}
.max-value {
  right: 0.4em;
  color: darken(#ccc, 20%);
}
.min-value {
  left: 0.5em;
  color: darken(green, 5%);
}
.buttons {
  position: absolute;
  bottom: 0;
  left: 0;
  width: 100%;
  box-shadow: 0px -2.5px 5px #888;
}
button {
  height: $button-height;
  width: 50%;
  float: left;
  border: 0;
  font-size: 2.5em;
  color: white;
  font-weight: 100;
}
.done {
  background: green;
}
.clear {
  background: darken(green, 5%);
}

Decimal Picker In Action

And that’s it! We now have a fast, accessible, user-friendly control that we can include in our mobile app. You can find the source code for this project on Github.

The post Super Fast Numeric Input with HTML Ranges – Part 3 appeared first on Atomic Spin.

Super-Fast Numeric Input with HTML Ranges – Part 2

In Part 1 of this series, I laid the groundwork for setting up a custom decimal picker. In this post, I’ll show you how to finish styling the control. To make our decimal picker look just right, we employed some advanced CSS trickery.

Faking Out WebKit with Linear Gradients

To reinforce the displayed value of each fader in our control and add some visual interest, we wanted to color in the section of track below each thumb control. In some rendering engines, this is really easy.

Unfortunately, WebKit is not one of those engines. To trick WebKit into displaying our filled-in faders, we used the CSS background property in a bit of a hacky way. Initially, we thought we might be able to specify two background colors and sizes.


input[type="range"]::-webkit-slider-runnable-track {
  background: green, #aaa;
  background-size: 50% 100%;  // Fill the first half of the track with green
}

As it happens, the background size rule doesn’t work with solid colors, only with images. Fortunately, we can trick the rendering engine into thinking that we’re specifying an image using a linear gradient. We don’t actually want a smooth grade—just a solid block of green, so we specify the same value for the start and end colors of the gradient.

Now that WebKit thinks we’re rendering a background image, it happily displays a fader track that’s half green and half gray. With that in place, we use a simple Sass loop to generate an appropriate background size for each potential notch in our faders. We’ll set the range-value-# classes in Part 3. The end result looks something like this:


@mixin fader-backgrounds($width, $min, $max) {
  $notch-width: $width / ($max - $min);
  @for $i from $min through $max {
    &.range-value-#{$i}::-webkit-slider-runnable-track {
      background-size: ($i - $min) * $notch-width 100%;
    }
  }
}
input[type="range"]::-webkit-slider-runnable-track {
  $fill: linear-gradient(green, green) no-repeat;  // Our pretend background image
  $background: #aaa;
  background: $fill, $background;
}
input[type="range"][min="0"][max="9"] {
  @include fader-backgrounds($width: $fader-width, $min: 0, $max: 9);
}
input[type="range"][min="0"][max="2"] {
  @include fader-backgrounds($width: $fader-width, $min: 0, $max: 2);
}

Turning the World

If you’ve been following along, you know we have a pretty large challenge to tackle. Our faders are currently horizontal, but the design calls for a vertical control. CSS to the rescue! We’ll just use WebKit’s built-in vertical setting.


input[type="range"] {
  -webkit-appearance: slider-vertical;
}

Go home, WebKit. You’re drunk.

But this is 2016. We have transform now! We’ll have to sacrifice just a bit of readability, but we can accomplish our visual goals with a little rotation. This one’s a bit…counterintuitive, so I’ll mark it up as we go.


// We'll need these later
$button-height: 175px;
$bottom-margin: 75px;
$faders: 3;
.decimal-picker {
  transform: rotate(270deg);  // Turn the decimal picker 90º to the left
  transform-origin: 0 50%;  // Pivot on the vertical center of the left edge
  // Properties that are calculated prior to the rotation
  $container-height: 100vw;  // The picker’s pre-rotation height will be its
                             // post-rotation width (100% of the viewport)
  position: absolute;  // You’ll get no help from the flow here
  bottom: ($container-height / -2);  // Pull the pre-rotation picker down
                                     // (eventually, to the right) by 50% of its
                                     // height (eventually, its width)
  left: ($container-height / 2);  // Pull the pre-rotation picker to the left
                                  // (eventually, up) by 50% of its height
                                  // (eventually, its width)
  height: $container-height;
  width: 75vh;
  padding-top: $gutter/2;  // Nudge the picker’s contents away from the
                           // (post-rotation) left edge
  // Properties that are calculated after the rotation
  margin: 4rem 0 0 0;
  margin-bottom: $button-height + $bottom-margin;
}
.decimal-picker {
  // We don’t want Webkit to flash an outline of our control every time a user
  // taps on it.
  -webkit-touch-callout:none;
  -webkit-user-select:none;
  user-select:none;
  -webkit-tap-highlight-color:rgba(0,0,0,0);
}
.digit {
  // A digit is a column consisting of a fader control and a value display
  margin-top: $gutter;
  float: left;
}
.fader {
  // A fader is a box for our input[type="range"] and the min/max value labels
  position: relative;
  height: $fader-height;
  width: $fader-width;
  float: left;
  .max-value, .min-value {
    position: absolute;
    transform: rotate(90deg);  // Since we want these to appear right-side-up,
                               // and their parent is rotated 90º to the left,
                               // we have to rotate them 90º to the right.
    font-family: sans-serif;
    font-size: 2em;
    bottom: -0.125em;
    z-index: 1;
  }
  .max-value {
    right: 0.4em;
    color: darken(#ccc, 20%);
  }
  .min-value {
    left: 0.5em;
    color: darken(green, 5%);
  }
}
.display {
  // A display is just a box for showing the current value of an input
  height: $fader-height;
  width: $fader-height;
  margin-left: $gutter/2;
  line-height: $fader-height;
  font-size: $fader-height - 2vw;
  transform: rotate(90deg);  // Since we want the value to appear right-side-up,
                             // and its parent is rotated 90º to the left, we
                             // have to rotate the display 90º to the right.
  text-align: center;
  float: left;
  font-weight: bold;
  font-family: sans-serif;
  .decimal {
    position: absolute;
    right: -1.5rem;
    font-family: serif;  // A square decimal point didn't quite fit here
  }
}

[Image: the decimal picker after rotation]

And Now, Some Buttons

After regaining our sense of balance, we’ll finish the visual portion of this control by adding “Done” and “Clear” buttons. Nothing too major here—just a bit of absolute positioning and some box shadows.


<div class="buttons">
  <button class="done">DONE</button>
  <button class="clear">CLEAR</button>
</div>

.buttons {
  position: absolute;
  bottom: 0;
  left: 0;
  width: 100%;
  box-shadow: 0px -5px 10px #888;
}
button {
  height: $button-height;
  width: 50%;
  float: left;
  border: 0;
  font-size: 2.5em;
  color: white;
  font-weight: 100;
}
.done {
  background: green;
}
.clear {
  background: darken(green, 5%);  // I ❤️ Sass
}

[Image: the finished decimal picker with Done and Clear buttons]

Next Step

In Part 3, we’ll wire up the dynamic pieces using Ember and work around WebKit’s errors in judgement with some custom event handlers and a little math.

The post Super-Fast Numeric Input with HTML Ranges – Part 2 appeared first on Atomic Spin.

Interacting with the Puppet CA from Ruby

I recently ran into a known bug with the puppet certificate generate command that made it useless to me for creating user certificates.

So, to work around it, I had to do the CSR dance from Ruby myself. It’s actually quite simple, but as with all things OpenSSL, it’s weird and wonderful.

Since the Puppet agent is written in Ruby and can do all of this, there must be an HTTP API somewhere. The endpoints are documented reasonably well – see /puppet-ca/v1/certificate_request/ and /puppet-ca/v1/certificate/ – but how to make the CSRs and such is not covered.

First I have a little helper to make the HTTP client:

def ca_path; "/home/rip/.puppetlabs/etc/puppet/ssl/certs/ca.pem";end
def cert_path; "/home/rip/.puppetlabs/etc/puppet/ssl/certs/rip.pem";end
def key_path; "/home/rip/.puppetlabs/etc/puppet/ssl/private_keys/rip.pem";end
def csr_path; "/home/rip/.puppetlabs/etc/puppet/ssl/certificate_requests/rip.pem";end
def has_cert?; File.exist?(cert_path);end
def has_ca?; File.exist?(ca_path);end
def already_requested?;!has_cert? && File.exist?(key_path);end
 
def http
  http = Net::HTTP.new(@ca, 8140)
  http.use_ssl = true
 
  if has_ca?
    http.ca_file = ca_path
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  else
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  end
 
  http
end

This is an HTTPS client that does full verification of the remote host if we have a CA. There’s a small chicken-and-egg problem where you have to ask the CA for its own certificate over an unverified connection. If that’s a problem for you, arrange to put the CA certificate on the machine in a safe manner instead.

Let’s fetch the CA:

def fetch_ca
  return true if has_ca?
 
  req = Net::HTTP::Get.new("/puppet-ca/v1/certificate/ca", "Content-Type" => "text/plain")
  resp, _ = http.request(req)
 
  if resp.code == "200"
    File.open(ca_path, "w", 0o644) {|f| f.write(resp.body)}
    puts("Saved CA certificate to %s" % ca_path)
  else
    abort("Failed to fetch CA from %s: %s: %s" % [@ca, resp.code, resp.message])
  end
 
  has_ca?
end

At this point, we have the CA and have saved it; future requests will be verified against this CA. If you put the CA there by some other means, this method does nothing.
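
If you’d rather never make that first unverified request, copying the CA certificate out of band works just as well. A minimal sketch, assuming SSH access to the CA host and the default server-side SSL directory (the hostname and server path are assumptions; the destination is the ca_path used above):

# Hypothetical host and server path; adjust for your environment.
scp puppet.example.com:/etc/puppetlabs/puppet/ssl/certs/ca.pem \
  /home/rip/.puppetlabs/etc/puppet/ssl/certs/ca.pem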

Now we need to start making our CSR. First we have to make a private key; this is a 4096-bit key saved in PEM format:

def write_key
  key = OpenSSL::PKey::RSA.new(4096)
  File.open(key_path, "w", 0o640) {|f| f.write(key.to_pem)}
  key
end

The CSR needs to be made using this key. Puppet CSRs are quite simple, with few fields filled in. I can’t see why you couldn’t fill in more fields, and of course Puppet now supports extensions; I didn’t add any of those here, just an OU:

def write_csr(key)
  csr = OpenSSL::X509::Request.new
  csr.version = 0
  csr.public_key = key.public_key
  csr.subject = OpenSSL::X509::Name.new(
    [
      ["CN", @certname, OpenSSL::ASN1::UTF8STRING],
      ["OU", "my org", OpenSSL::ASN1::UTF8STRING]
    ]
  )
  csr.sign(key, OpenSSL::Digest::SHA1.new)
 
  File.open(csr_path, "w", 0o644) {|f| f.write(csr.to_pem)}
 
  csr.to_pem
end

Let’s combine these to make the key and CSR and send the request to the Puppet CA. This request is verified using the CA we fetched earlier:

def request_cert
  req = Net::HTTP::Put.new("/puppet-ca/v1/certificate_request/%s?environment=production" % @certname, "Content-Type" => "text/plain")
  req.body = write_csr(write_key)
  resp, _ = http.request(req)
 
  if resp.code == "200"
    puts("Requested certificate %s from %s" % [@certname, @ca])
  else
    abort("Failed to request certificate from %s: %s: %s: %s" % [@ca, resp.code, resp.message, resp.body])
  end
end

You’ll now have to sign the cert on your Puppet CA as normal, or use autosigning – nothing new here.
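
For reference, signing on the CA host looks something like this on Puppet of that era (rip.example.com is a placeholder certname; newer releases use puppetserver ca sign instead):

# Run on the CA host; rip.example.com is a placeholder certname.
puppet cert list                  # show pending certificate requests
puppet cert sign rip.example.com  # sign the request we just submitted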

Finally, you can attempt to fetch the cert. This method is designed to return false if the cert is not yet ready on the master – i.e. not signed yet.

def attempt_fetch_cert
  return true if has_cert?
 
  req = Net::HTTP::Get.new("/puppet-ca/v1/certificate/%s" % @certname, "Content-Type" => "text/plain")
  resp, _ = http.request(req)
 
  if resp.code == "200"
    File.open(cert_path, "w", 0o644) {|f| f.write(resp.body)}
    puts("Saved certificate to %s" % cert_path)
  end
 
  has_cert?
end

Pulling this all together, you have some code to make the key and CSR, cache the CA, and request that a cert be signed; it will then wait for the cert like Puppet does until things are signed.

def main
  abort("Already have a certificate '%s', cannot continue" % @certname) if has_cert?
 
  make_ssl_dirs
  fetch_ca
 
  if already_requested?
    puts("Certificate %s has already been requested, attempting to retrieve it" % @certname)
  else
    puts("Requesting certificate for '%s'" % @certname)
    request_cert
  end
 
  puts("Waiting up to 120 seconds for it to be signed")
  puts
 
  12.times do |time|
    print "Attempting to download certificate %s: %d / 12\r" % [@certname, time]
 
    break if attempt_fetch_cert
 
    sleep 10
  end
 
  abort("Could not fetch the certificate after 120 seconds") unless has_cert?
 
  puts("Certificate %s has been stored in %s" % [@certname, ssl_dir])
end

Super-fast Numeric Input with HTML Ranges – Part 1

During a recent project, we were tasked with improving the experience of entering a handful of decimal numbers into a mobile web app. In this part of the app, we knew users would be repeatedly entering a number, followed by a decimal point, then two more numbers. The stock ASCII keyboards were cumbersome, requiring seven taps and an awkward page scroll on most devices. The numeric keyboards saved a tap or two depending on the platform, but they still suffered from the page scroll problem. With either generic keyboard, we knew we’d have to add form validation to make sure the values were in the right range.

We felt we could build a better experience. In this three-part series, we’ll show you how we crafted a custom decimal picker from the ground up.

Mobile Fader Control

Starting with Good Bones

We knew we’d be building a touch control, but we also wanted something that would be accessible to other input methods. In addition to being the right thing to do, building custom controls based on web standards makes writing acceptance tests way easier later in the development process. To build our faders, we started with vanilla HTML range inputs, a few divs to hold their values, and a couple of layers of containers.


<div class="decimal-picker">
  <div class="digit">
    <div class="fader">
      <input type="range" min="0" max="2" value="1" />
    </div>
    <div class="display">
      <span class="value"></span>
      <span class="decimal"></span>
    </div>
  </div>
  <div class="digit">
    <div class="fader">
      <input type="range" min="0" max="9" value="0" />
    </div>
    <div class="display">
      <span class="value"></span>
    </div>
  </div>
  <div class="digit">
    <div class="fader">
      <input type="range" min="0" max="9" value="0" />
    </div>
    <div class="display">
      <span class="value"></span>
    </div>
  </div>
</div>

Bare HTML Sliders

Functional, but nothing to write home about.

Resetting the Browser

Every browser implements its own unique style of range input. Because we wanted a consistent experience across mobile platforms, we had to remove all the standard styling and build our own. Here’s how we reset things:


* {
  box-sizing: border-box;  // always
}
input[type="range"] {
  background: transparent;
  box-shadow: none;
  border-style: none;
  margin: 0;
  padding: 0;
  &,
  &::-webkit-slider-runnable-track,
  &::-webkit-slider-thumb {
    -webkit-appearance: none;
  }
}

Invisible Slider Controls

Making It Our Own

To make the controls our own, we added style rules for the -webkit-slider-runnable-track and -webkit-slider-thumb pseudo-elements. Our development target was a tightly controlled web view running on Android 4.4+ or iOS 8.0+, so we only included the webkit-prefixed pseudo-elements. If you’re targeting Firefox or Windows Phone, there are similarly prefixed rules available.


$fader-width: 200px;
$fader-height: 60px;
input[type="range"] {
  &::-webkit-slider-thumb {
    position: relative;
    border-style: none;
    width: $fader-width * 0.1;
    height: $fader-height * 1.1;
    margin-top: $fader-height * -0.05;
    border-radius: 3px;
    background: white;
  }
  &::-webkit-slider-runnable-track {
    position: relative;
    border-style: none;
    width: $fader-width;
    height: $fader-height;
    background: green;
    border-radius: 3px;
  }
}

Styled Slider Controls

Next Steps

In Part 2, we’ll finish styling the control. Then in Part 3, we’ll add just a touch of JavaScript to tie everything together into a nice mobile experience that’s faster and more fun than a simple numeric input field.

The post Super-fast Numeric Input with HTML Ranges – Part 1 appeared first on Atomic Spin.


Using CircleCI to Test and Deploy an iOS App

When starting a new greenfield project at Atomic, we always ask ourselves about tooling surrounding testing and deployment. We have had a lot of luck with CircleCI for both mobile and web applications, so when I found out CircleCI had a solution for iOS, I was excited to take advantage of it. In this post, I’ll review how to use CircleCI with your iOS application and explain how I handled some bumps in the road on the path to CI and easy deployments.

So, what is CircleCI? In short, it’s a software-as-a-service platform for continuous integration and deployment. My goal with using CircleCI was two-fold:

  1. Continuous integration testing without having to maintain a server myself
  2. Easy deployments using TestFlight for our internal testers

I followed this excellent blog post explaining the process of getting up and running with CircleCI, and I highly recommend it for iOS developers interested in giving CircleCI a try. What follows is a step-by-step journal of my progress as I deployed a new app to CircleCI.

1. Create an Account

Sign up for CircleCI. We went with the most basic OS X tier of server, which costs $40 a month. You can find complete pricing information here.

Log in and add the repository for your OS X project to your CircleCI account. They have completely seamless integration with GitHub.

2. Share Your Xcode Scheme

In Xcode, make sure your scheme is shared, and commit the change:

  1. Choose Product > Scheme > Manage Schemes.
  2. Select the Shared option for the scheme to share, and click “Close.”
  3. Push the change.

For most applications, you can just share your main project scheme. If only one scheme is shared, CircleCI will choose it automatically.
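
To double-check what CI will see, xcodebuild can list schemes from the command line; on a clean checkout (which is all CircleCI gets), only shared schemes show up. YourApp.xcodeproj is a placeholder for your project file:

  # Placeholder project name; use -workspace instead if you build from a workspace.
  $ xcodebuild -list -project YourApp.xcodeproj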

3. Create and Upload Code Signing Certificate

CircleCI requires you to upload a provisioning profile and code signing certificate. Install the Fastlane tool suite to make this step easier and avoid having to muck around with Xcode. To create the signing certificate, run:

  $ mkdir certificates
  $ cert --output_path certificates

cert will create three files, including the .P12 file you need to upload. Again, refer to the blog post I linked to above (my instructions came directly from it). At this point, I hit my first problem:

  Password (for foo@bar.com): ***********
  [09:05:29]: Sending Crash/Success information. More information on: https://github.com/fastlane/enhancer
  [09:05:29]: No personal/sensitive data is sent. Only sharing the following:
  [09:05:29]: {:cert=>1}
  [09:05:29]: cert
  [09:05:29]: This information is used to fix failing tools and improve those that are most often used.
  [09:05:29]: You can disable this by setting the environment variable: FASTLANE_OPT_OUT_USAGE=1
  /Users/foobar/.rvm/gems/ruby-2.2.4/gems/spaceship-0.27.2/lib/spaceship/two_step_client.rb:39:in `handle_two_step': [!] spaceship currently doesn't support the push based 2 step verification, please switch to SMS based 2 factor auth in the mean-time (RuntimeError)

What’s happening here is that Fastlane’s Spaceship tool doesn’t support Apple’s push-based two-step verification process, so when I ran cert, it failed, and I got a two-step verification challenge on my MacBook Pro. This turned out to be pretty annoying—the error message suggests switching to SMS-based two-factor auth, but I couldn’t figure out how to do this with my Apple ID. I resorted to temporarily disabling two-step verification like this:

  1. Sign in to your Apple ID account page.
  2. In the Security section, click Edit.
  3. Click Add a Trusted Phone number.
  4. To add a number, enter the phone number and verify it with a text.
  5. To remove a number, click the X next to the phone number you want to remove.

If anyone knows how to switch to SMS for an iPhone user, I’d love to hear it.

4. Dealing with an Invalid Issuer Error

After disabling two-step verification, I hit another problem while generating the code signing certificate. Specifically, I had an expired Apple Worldwide Developer Relations Certification Authority certificate. I had no idea this even existed. The fix for this issue is described in this Stack Overflow post. The solution is to delete the expired certificate in Keychain Access and download a new certificate from Apple.
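
If you prefer the terminal to Keychain Access, the same fix can be sketched from the command line. This is only a sketch: the certificate name and download URL are assumptions based on what worked at the time, and the expired copy may live in a different keychain on your machine.

  # Sketch only: certificate name, URL, and keychain path are assumptions.
  $ security delete-certificate -c "Apple Worldwide Developer Relations Certification Authority"
  $ curl -O https://developer.apple.com/certificationauthority/AppleWWDRCA.cer
  $ security import AppleWWDRCA.cer -k ~/Library/Keychains/login.keychain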

5. Upload the .P12 File

Finally, cert succeeded, and I had a .P12 file to upload to CircleCI for code signing. You can upload the file by going to your CircleCI application and choosing Project Settings > iOS Code Signing.

6. Create and Upload a Provisioning Profile

The other file CircleCI needs to deploy your application is a provisioning profile. To create one, you can use the “sigh” tool:

  $ mkdir certificates
  $ sigh --output_path certificates

sigh will create a file with a .mobileprovision extension, unless there is some problem. In my case, there was a problem. When sigh runs, it asks for a bundle identifier (which you should have given Xcode when you first created the project). In order for sigh to work, an app with that identifier needs to exist on the Apple Developer portal. I hadn’t created the app on the developer portal yet, so sigh failed. There’s a command-line fix for this problem:

  $ produce -u yourappleid -a yourappbundle

The produce command creates a new app on the developer portal. After running produce, my .mobileprovision file was created, so I added the file to the root directory of my repository and pushed the changes. During the build, CircleCI will automatically find this file and use it.

7. Configure circle.yml to Sign Your App

The final step is to set up the CircleCI configuration file to build and deploy your application. CircleCI can use “gym” to sign the application with the .P12 file you uploaded earlier, but it needs one more bit of information: the exact code signing identity string, which you can find by running security:

  security find-identity -p codesigning

The output of this tool is a list of valid signing identities. Pick the appropriate one and copy the string (without the UUID). For example, it might look like this: "iPhone Distribution: YourCompany, Inc. (ABXU908T71)".
Now, add that string to circle.yml.

machine:
  environment:
    GYM_CODE_SIGNING_IDENTITY: "iPhone Distribution: YourCompany, Inc. (ABXU908T71)"
 
deployment:
  beta_distribution:
    branch: master
    commands:
      - gym

Add circle.yml to the root of your repo and push the changes. This configuration tells CircleCI to deploy with gym, which picks up the GYM_CODE_SIGNING_IDENTITY, whenever changes land on the master branch.
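
If you want to sanity-check the signing setup locally before leaning on CI, you can run gym by hand with the same identity (assuming fastlane is installed locally; YourScheme is a placeholder for your shared scheme):

  # Placeholder scheme name; the identity matches the one in circle.yml.
  $ GYM_CODE_SIGNING_IDENTITY="iPhone Distribution: YourCompany, Inc. (ABXU908T71)" gym --scheme YourScheme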

8. Deploy to iTunes Connect

Finally, you are all set to configure CircleCI to deploy your application to iTunes Connect. To do this, you need to set up two environment variables in CircleCI under your app’s settings: ITUNES_CONNECT_ACCOUNT (your iTunes Connect account username) and ITUNES_CONNECT_PASSWORD.

[Image: circle_setup]

CircleCI uses these credentials to run ipa, which will upload your successful build, provided all tests pass.

The other piece of information you need is your iTunes Connect AppleID, which can be found on the iTunes Connect website under My Apps -> Your App -> App Information -> Apple ID. This is a number that is passed to ipa with the -i option.

Finally, update your circle.yml again as follows:

machine:
  environment:
    GYM_CODE_SIGNING_IDENTITY: "iPhone Distribution: YourCompany, Inc. (ABXU908T71)"
 
deployment:
  staging:
    branch: master
    commands:
      - gym
      - ipa distribute:itunesconnect -i 8675309 --upload --verbose

At this point, everything should work, and you will be greeted with a green build in CircleCI: a successful test and deployment! Visit iTunes Connect, and there will be a new build under Activity. It takes a while for Apple to process the file, but once processing completes, the app is ready for testing, and your TestFlight testers will be notified.

Conclusion

I spent a few hours working through some convoluted issues, but in the end, the pain was worth it. Now, every time a member of the team pushes a change to master (usually by merging in a feature branch), tests run automatically, and—if they pass—a build is automatically sent to testers. It’s a thing of beauty.

The post Using CircleCI to Test and Deploy an iOS App appeared first on Atomic Spin.