How to Debug Stored Procedures in Visual Studio in 3 Steps

My first project at Atomic was a C#-based web application using Visual Studio. As time passed, I became familiar with many of the shortcuts and tools that Visual Studio provides to help with common development tasks. Whenever there was a section of code that I didn’t quite understand, I would use the debugging tools to my advantage.

The application relied quite heavily on stored procedures, which I was used to writing within SQL Server Management Studio (SSMS). Unfortunately, SSMS doesn’t provide many tools to help with writing complex stored procedures. Not having much SQL experience beyond basic SELECT, INSERT, and UPDATE statements, I decided to use Visual Studio’s tools to help me out.

Debugging Stored Procedures

Stepping Through Stored Procedures

Before we begin, I would like to clarify that I do not think this method is required in every case. If the stored procedure in question is not very complex, or if you prefer not to use a debugger, then this method is not for you. For those of us who need a little extra help once in a while, here are the instructions:

Step One: Connect to the database.

In order to perform any debugging, you’ll need to establish a connection to the database containing the stored procedure. Within Server Explorer, select the “Connect to Database” option, and fill in the required connection information.

Step Two: Locate the desired stored procedure.

After connecting to the desired database, you will now be able to use Server Explorer to navigate through the different parts of the database. If you already know the setup of SQL Server Management Studio, this will seem quite familiar.

Open up the “Data Connections” section that is now available in Server Explorer, and expand the database you connected to. There should be a “Stored Procedures” folder containing all of the stored procedures in the database. Open this folder, and find the specific stored procedure that you wish to debug. Right-click on the stored procedure and select the “EXECUTE” option. This will open a new query window where you can execute your stored procedure as shown below. If your stored procedure requires parameters as input, Visual Studio will prompt you to enter the values before opening the new query window.


USE [test_db]
GO
DECLARE @return_value Int
EXEC    @return_value = [dbo].[s_My_Stored_Procedure]
SELECT  @return_value as 'Return Value'
GO
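If the procedure does take parameters, the generated script also passes along the values you entered at the prompt. A hypothetical example (the parameter names here are made up, not from a real procedure) looks like this:

```sql
USE [test_db]
GO
DECLARE @return_value Int
EXEC    @return_value = [dbo].[s_My_Stored_Procedure]
        @UserId = 42,
        @StartDate = N'2016-01-01'
SELECT  @return_value as 'Return Value'
GO
```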

Step Three: Execute with debugging.

In the top left corner next to the green arrow, you’ll see a dropdown icon (don’t click the green arrow). Click the dropdown arrow, and select “Execute With Debugger.” This will start executing the stored procedure and allow you to use the familiar debugging options (e.g., Step In, Step Over, Continue, etc.).

Invaluable Tools

Part of being a great developer is knowing your toolset well and using it to the best of your ability. What tools are invaluable to your work?

The post How to Debug Stored Procedures in Visual Studio in 3 Steps appeared first on Atomic Spin.

Conference Room A/V Build-Out

We recently moved to our new building at 1034 Wealthy. We took the opportunity to update the A/V equipment for our conference rooms. Previously, we largely relied on projectors for presentation capabilities, an external USB microphone/speaker for audio, built-in webcams on laptops for video, and a table where we staged everything. This worked, but it was certainly not ideal. With the new building, I had the opportunity to standardize a new conference room A/V build-out that would be better suited to our needs.

All of our new conference rooms now have a mobile TV stand which holds all of our A/V equipment. This includes a large flatscreen TV, dedicated webcam, dedicated microphone/speaker, and all necessary cables and connectors. Our new setup provides important capabilities required for many of our meetings, especially teleconferences: mobility, audio input, audio output, video input, and video output.

Capabilities

Mobility

I chose the Kanto Living MTM82PL mobile TV mount, which includes the mounting hardware for a flatscreen TV, a small shelf, and a shelf for a webcam above the TV. It is a sleek yet sturdy platform that allows our A/V build-out to be mobile. While largely dedicated to conference rooms, it can also be moved out to other areas–such as our cafe–for events or meet-ups.

Video Output

The Samsung 65″ Class KU6300 6-Series 4K UHD TV was selected as our primary display. This provides a much better picture and much higher resolution than the old projectors we were using. It has a native resolution of 3840 x 2160, a 64.5″ screen (diagonal), and 3 HDMI ports. While not all of our devices can support that resolution at this point (for example, AppleTVs only support up to 1080p), it still seemed like a worthwhile investment to help future-proof the solution.

Video Input

I chose the Logitech HD Pro Webcam C920 for video capabilities. It supports 1080p video when used with Skype for Windows, and 720p video when used with most other clients. The primary benefit of this webcam is that it can be mounted above the TV on the mobile stand, providing a wide view of the entire room–rather than just the person directly in front of the built-in laptop webcam.

Audio Input/Output

We had previously made use of the Phoenix Audio Duet PCS as a conference room “telephone” for web meetings–it provides better audio capabilities for a group of people than a stand-alone laptop. We placed one of these in each of the conference rooms as part of the A/V build-out. It acts as the microphone and speaker, while using the Logitech webcam for video input and the Samsung TV for video output.

Helpers

Of course, I needed a few other items to tie all of these different capabilities together.

Cabling

I purchased 20 ft. Luxe Series High-Speed HDMI cables so people can connect directly to the Samsung TVs for presentations. This type of connection allows computers to utilize the full resolution of the new TVs.

Adapters

The Moshi Mini DisplayPort to HDMI adapter provides connectivity for those Atoms whose MacBooks do not natively support HDMI.

Presentation Helpers

I decided to purchase Apple TVs to allow for wireless presentation capabilities. With AirPlay, Macs (and other compatible devices) can transmit wirelessly to the TV–without the need for an HDMI cable. This is convenient for getting up and running quickly without any cable clutter, but it isn’t always appropriate (which is why a direct HDMI connection is available as well).

Cable Management

In addition to the standard cable ties and other cable management tricks, I’ve found that Cozy Industries, makers of the popular MagCozy, also makes a DisplayCozy. This helps keep the Moshi HDMI adapter with the HDMI cable.

Power Distribution

While the mobile TV cart provides a great deal of flexibility, the new building also has wide spaces between electrical outlets. To ensure that the A/V build-out would be usable in most spaces, I decided to add a surge protector with an extra-long cord. The Kensington Guardian 15′ works well for this.

Finished Product

Photo: Atomic’s mobile A/V cart, the finished build-out.


The Tradeoff of Multiple Repositories

More often than I expect, I come across software projects that consist of multiple source control repositories. The reasons vary. Perhaps it’s thought that the web frontend and backend aren’t tightly coupled and don’t need to be in the same repository. Perhaps there’s code that’s meant to be used throughout an entire organization. Regardless, there are real costs involved in the decision to have a development team work in distinct, yet related, repositories. I believe these costs are always overlooked.

Double (or n Times) the Gruntwork

The most obvious cost involved is additional gruntwork. Let’s imagine a project with a mobile app and web service, each having its own Git repository. When it’s time to start a new feature, the feature branch will need to be created twice. When the work is finished, two pull requests will need to be made. When it’s appropriate to make a commit, it might need to be done twice. When it’s time to push, it might need to be done twice. To help manage all of this, an extra terminal might be appropriate.
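As a sketch of that doubled ceremony (the repository names are hypothetical), starting a single feature means repeating the same Git commands once per repo:

```shell
set -e
workdir=$(mktemp -d)

# Two related repositories, each needing the same feature branch.
for repo in mobile-app web-service; do
  git init -q "$workdir/$repo"
  git -C "$workdir/$repo" -c user.email=dev@example.com -c user.name=dev \
      commit -q --allow-empty -m "initial commit"
  git -C "$workdir/$repo" checkout -q -b feature/login   # once per repo
done

# Both repos now sit on their own copy of the branch.
git -C "$workdir/mobile-app" rev-parse --abbrev-ref HEAD
git -C "$workdir/web-service" rev-parse --abbrev-ref HEAD
```

Nothing here is difficult; it’s just the same ceremony, twice.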

Individually, none of these costs is very significant. Collectively, they represent a moderate inconvenience and cognitive burden. I’ve seen developers weigh this and decide it’s worth the cost, because they are trying to achieve some other ideal.

Ultimately, these inconveniences are just symptoms of a more fundamental—and easily overlooked—tradeoff.

Context: Not Version-Controlled

A repository is essentially a set of snapshots in time. For any commit, it’s easy to see not only what changes were made, but also precisely what other files existed and what they contained at that point in time. This is pretty obvious, after all; it’s one of the biggest selling points of version control.

With a project consisting of one single repository, that snapshot encapsulates everything there is to know about the source code. Once there are multiple repositories involved in a single project, this context is fragmented.

This fragmentation manifests in various ways. Let’s look at some examples:

  • When moving code between repositories, neither one has knowledge of the other. Information about where the code came from or went is lost.
  • If a branch in your frontend repo depends on the server running a corresponding branch, there’s no native or reasonable way to express that relationship. Information is lost.

The Real Tradeoff of Multiple Repositories

Breaking a project into multiple repositories involves a fundamental tradeoff. By doing so, information about the broader context of the application is pushed entirely outside of version control.

Although it’s possible to work to counteract this, for example, by establishing team practices, using Git submodules, or building custom machinery, it will require work. That’s work spent to regain what you get for free by using a single repository.

Therefore, the most likely place that this information will move is into the culture and individual minds of the team. This is a much more ephemeral and unreliable place than a source repository. It makes it harder to onboard new developers and coordinate things like continuous integration.

Conclusion

It’s up to your unique situation whether it’s a win or loss to split your code into multiple repositories, but the costs are both real and easily overlooked. I’d strongly suggest weighing these tradeoffs thoughtfully. And, if you find yourself on a project where these costs are bringing you down, I’ve written a blog post on how to super-collide your repositories together.


Puppet Lint Plugins – 2.0 Upgrade and new repo

After the recent puppet-lint 2.0 release, and the success of our puppet-lint 2.0 upgrade at work, it felt like the right moment to claw back some time and update my own (11!) puppet-lint plugins to allow them to run on either puppet-lint 1 or 2. I’ve now completed this and pushed new versions of the gems to RubyGems, so if you’ve been waiting for version 2-compatible gems, please feel free to test away.
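For plugin gems like these, most of the work is testing against both majors; the packaging side usually comes down to widening the dependency constraint in the gemspec. A hypothetical sketch (not one of my actual gemspecs):

```ruby
# Illustrative plugin gemspec: accept either puppet-lint 1.x or 2.x.
spec = Gem::Specification.new do |s|
  s.name    = "puppet-lint-example-check"
  s.version = "1.1.0"
  s.summary = "Example puppet-lint plugin"
  s.authors = ["example"]
  s.add_dependency "puppet-lint", ">= 1.0", "< 3"
end

requirement = spec.dependencies.first.requirement
puts requirement.satisfied_by?(Gem::Version.new("1.1.0"))
puts requirement.satisfied_by?(Gem::Version.new("2.0.2"))
```

With a constraint like this, `gem install` can resolve either major, and the plugin’s CI matrix can exercise both.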

Now that I’ve realised exactly how many plugins I’ve ended up with, I’ve created a new GitHub repo, unixdaemon-puppet-lint-plugins, that will serve as a nicer discovery point for all of my plugins and a basic introduction to what they do. It’s quite bare-bones at the moment, but it’s a nicer approach than clicking around my GitHub profile looking for matching repo names.

Be Explicit with Your API’s Data

I recently managed to take a feeling I’ve had about API design and formulate it into a specific recommendation: Be explicit about state when crossing system boundaries.

My API Design Scenario

One of the software packages I support is an API that provides data to a collection of small (physical) displays. The displays are deployed around a customer’s site, and they poll the API regularly for information to show. Since the hardware is out there, it’s hard to update if we want it to show something else.

The actual deployments on real customer sites varied much more than the testbed we had for development. It’s no surprise that things differed, but they differed dramatically across customers, and the software doesn’t handle every case.

My API Design Problem

The key problem that we hadn’t anticipated was a partial failure mode. We’ve found that, due to various complexities of setup (user accounts, permissions, rate limiting, and other things), an “all or nothing” approach wasn’t optimal. In other words, even if we can’t send 100% of the data, it’s still valuable for our displays to show 90% of it.

However, the hardware does not handle this option very well. While it’s easy to update the API so it can only send the data it’s able to, the hardware can only do one of two things:

  • Show the data sent over
  • Show an error on the display (if the HTTP code is an error status)

Unfortunately, there’s no way to fail partially. We can’t show updated data while still indicating that the unit needs attention. Adding this capability is something we will consider for a future release.

My New API Design Rule

Hence my new rule: Be explicit about state when crossing system boundaries.

Don’t automatically infer “error” from your data. It’s much, much easier to be nuanced when you have a dedicated field for that state.
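As a sketch of what that can look like (the field names are hypothetical, not our actual API), a partially successful response can carry its state explicitly alongside the data:

```ruby
require 'json'

# Build a response whose state is explicit, rather than inferred from the
# HTTP status code. Field names are illustrative.
def build_response(items, failed_sources)
  {
    status: failed_sources.empty? ? "ok" : "partial",
    needs_attention: !failed_sources.empty?,
    failed_sources: failed_sources,
    items: items
  }
end

# 90% of the data is still worth showing, and the display can also see
# that the unit needs attention.
response = build_response([{ name: "Lobby Display" }], ["calendar: rate limited"])
puts JSON.generate(response)
```

The client no longer has to guess: a 200 with `status: "partial"` means “render what you have, and flag the unit.”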

This isn’t meant as a suggestion to make a field for every single thing. It’s certainly okay for calculations to happen in a remote client.

The important thing is that key system states (success, failure, online, up to date, load, and similar metrics) are explicit. Format strings, human labels, colors, and other things that are just “data” might not matter, but to avoid unnecessary coupling, anything that will drive an interaction should be explicit in your data.


Getting Android ListView Right the First Time

ListView is an Android UI element commonly used when you want to display a scrollable list of items. Unless you have a simple, static list of items, you’ll probably end up subclassing BaseAdapter in order to provide content for the ListView. The basic process of doing this is fairly straightforward, but there are a few mistakes that are easy to make if you’re not careful.

Mistake 1: Updating the UI from a Background Thread

Most UI frameworks require updating user interface elements only from the main thread, and Android is no exception. However, it may not be immediately obvious that you’re modifying the UI from a background thread. If your list requires some network calls or heavy processing, you’ll certainly be offloading that work to another thread.

ListView contains some additional guards that will throw a fatal exception if they detect that you’ve updated the underlying data model from a background thread. But if you have a race condition or other timing issue, you may not ever see the exception under normal development conditions. You’ll know it happened when your app crashes and you see a message like this in logcat:

Fatal Exception: java.lang.IllegalStateException
The content of the adapter has changed but ListView did not receive a notification. Make sure the content of your adapter is not modified from a background thread, but only from the UI thread. Make sure your adapter calls notifyDataSetChanged() when its content changes.

The error description is actually somewhat helpful in resolving the issue:

  • Make sure ListView’s adapter only changes its contents on the main thread
  • Call notifyDataSetChanged() immediately after updating contents

And “immediately” really does mean immediately–if you wait even until the next event loop, ListView will have a chance to redraw and therefore notice that the adapter’s contents have changed.  Note that it is not sufficient just to make sure you are calling notifyDataSetChanged() on the main thread; you need to also make sure that although your background work is done on a different thread, it does not update your adapter from that thread.

Mistake 2: Not Using a ViewHolder

In order to have the best user experience, you’ll want to keep scrolling performance fast. An easy way to kill scrolling performance is to do a lot of work in the adapter’s getView() method. Like getCount(), you want this method to return as quickly as possible. The process of configuring a row usually involves inflating a view, finding the relevant subviews (like TextView, ImageView, etc.), and populating them with data for the row.

ListView helps a lot in this regard by recycling views. If it has a view available to reuse, it will pass it in the convertView parameter. This saves you from having to inflate a new view, but you should also avoid looking up all of the subviews again. A simple ViewHolder class can keep references to these subviews when you first create the view.


public class MyRowViewHolder {
    TextView titleView;
    TextView subtitleView;
    ImageView photoImage;
}

Then you just need to have the getView() method associate an instance of the ViewHolder with the row:


@Override
public View getView(int position, View convertView, ViewGroup parent) {
    MyRowViewHolder rowViewHolder;
    if (convertView == null) {
        // Inflate a new row and cache its subview references in a ViewHolder.
        convertView = inflater.inflate(R.layout.my_row_view, parent, false);
        rowViewHolder = new MyRowViewHolder();
        rowViewHolder.titleView = (TextView) convertView.findViewById(R.id.title_textview);
        rowViewHolder.subtitleView = (TextView) convertView.findViewById(R.id.subtitle_textview);
        rowViewHolder.photoImage = (ImageView) convertView.findViewById(R.id.photo_imageview);
        convertView.setTag(rowViewHolder);
    } else {
        // Reuse the recycled view's cached ViewHolder.
        rowViewHolder = (MyRowViewHolder) convertView.getTag();
    }
    // Update the contents of rowViewHolder's subviews here
    return convertView;
}

Mistake 3: Inflating Views and Passing Null for the “Root” Parameter

This applies to more than just ListView, but this mistake is especially easy to make since the adapter is just inflating a row and returning it to the ListView. The Android documentation says that the root parameter is optional, but does not discuss the consequences of leaving it out.

When building a layout in XML, you can specify attributes such as layout_width on an element like a TextView. When the layout is inflated, a subclass of ViewGroup.LayoutParams is used to represent these layout parameters. For instance, a TextView within a LinearLayout will have LinearLayout.LayoutParams. But if the layout inflater does not have a root view group to use as a reference, it cannot know which LayoutParams subclass to use.

As a result, your inflated view may not have all of the layout parameters you are expecting. It’s also entirely possible that the defaults coincide with the values you specified, so you won’t even notice. Also remember that inflating a view does not automatically attach it to the view hierarchy unless you use the inflate(int resource, ViewGroup root, boolean attachToRoot) version of the method, passing true for attachToRoot.

Other Common Android ListView Mistakes

What other ListView mistakes have you seen? Feel free to share them in the comments below.


Respecting the Value of Face Time

The way we interact and work with others has changed drastically over the past few decades. Email, chat, and teleconferencing have bridged huge gaps of geography and enabled us to work across boundaries.

This flexibility has allowed individuals to work from home so they can tend a sick child or deal with other real-life complications. Work/life balance is tough, but these advances in technology have helped bridge the gap. While all of today’s communication options come in handy, there’s still real value in face-to-face communication. In this post, I’ll suggest when in-person meetings are helpful and offer some tips about how to conduct them.

Face Time

The Disadvantages of Written Communication

It is very difficult to express intent and motivation in an email or chat. When we type our communication, we are usually more formal than we would be in person. This frequently leads to misunderstandings. Misunderstandings can create tension between team members, causing lost productivity and less enjoyment in our work.

Written words can also leave much room for interpretation. Our interpretation of the words of a teammate or customer is unfortunately affected by the situations we are in—both professionally and personally. If we are frustrated, it may come across as anger. Maybe we feel a loss of control over a recent shortcoming, so we interpret even constructive criticism in a negative light. This can damage the spirit and effectiveness of our team.

The Advantages of Face Time

As we’ve seen on social media these days, it is easy for people to lash out or spark controversy with their words. This often leads to anger and mistrust. It is very easy to attack ideas—and the people they come from—when you don’t have to see them. On the other hand, meeting face-to-face helps level the playing field. It helps us better realize that we are all human beings, with our own quirks and interests.

In-person communication also allows us to use gestures, facial expressions, and body language to better express our intent. We can more easily crack jokes and have fun, without having to keep our guard up. This gives us a greater sense of freedom in our communication and boosts creativity.

Most of all, meeting with our team creates bonds. We share more personal stories and interests. Though these bonds may not have a direct measurable impact on productivity, establishing them is invaluable.

When Face-to-Face Time is Crucial

It can be difficult to make the call as to when an in-person meeting is necessary. However, there are frequently cues that meeting face-to-face would be helpful.

I can’t seem to get my point across!

In a group setting, it can be difficult to align points of view. If you feel like your points are being ignored or misunderstood, don’t hesitate to contact the specific person you’re in conflict with. Do this by walking over to their desk or picking up the phone after the meeting. Misunderstandings between two people can be difficult to resolve in the context of a group discussion. They can waste a lot of other people’s time and let the overall frustration of the team grow, which isn’t helpful to anyone.

Personal commitments or impediments

When you are working on a project, it is hard to communicate the influence that personal situations may be having on you or others in the group. You normally don’t want to air your personal life to the whole team. However, discussing these situations person-to-person can help remind each other that you do have personal lives, and that one of you may have a few hurdles to cross before getting back to focusing on tasks at work.

Having a one-on-one discussion reminds each of you that you are a person and not a cog in the system. Life is complex and can’t always be completely decoupled from our careers, as much as we all wish that were the case.

Conflicting viewpoints

When there are differing roles/responsibilities on a team, there can be conflict in aligning your goals, even though overall we all just want to get stuff done. These conflicts tend to arise more frequently when pressures are high. Upper management may be putting pressure on your project manager or product owner if deadlines or release dates seem to be at risk.

It is frustrating when someone is responsible for delivering something but isn’t able to contribute to finishing a given task. Rather than trying to explain technical details to a public audience, having a one-on-one may help your project manager understand that you are making progress and being proactive about the hurdles that stand in your way. As a bonus, getting another perspective on the problem outside of a meeting may help you work out a better strategy.

Evaluating Effectiveness

Teams should regularly evaluate the effectiveness of team communication. As deadlines get closer and pressures rise, it is easy to drop back into old habits and even ignore communication channels so that we can focus on the task at hand. All too frequently, project management tends to add more meetings in order to keep on track.

But getting all team members in the mix and adding more meetings can take time away from getting the actual work done. It is much more effective to pull key individuals into these meetings, if they are really necessary. Getting too much input from too many individuals usually leads to thrashing and increased pressure across the board.

Having all developers focused on tackling emergent needs is ineffective. It’s a good idea to identify a point person for triaging these requests so they don’t halt the train when they pop up. Most emergencies don’t require immediate attention—especially not the attention of the full team.

Getting to the Core

Our work shouldn’t define our lives. We all have family and friends that are (hopefully) more valuable than our jobs and obligations. Teams will never get along perfectly. Breaking down the walls between each other and working through disagreements is necessary and even therapeutic. Taking on the challenges of real-world projects as a team rather than as individuals is much more effective and far more enjoyable.


Testing Data Migrations in Rails

When working on a Rails project, you will inevitably need to move data around in your database. Some join table value will need to be moved into its own table or what have you. When approaching these kinds of migrations, there are two major complications: future-proofing and testing. In this post, let’s walk through an example migration.

Inserting Data

Rule of thumb: Do not use any ActiveRecord models in a migration. As a project develops and changes, so do its classes. When building a migration, you want it to be reliable and reproducible. Migrations should behave exactly the same every time they are run. What if an after_save hook has been added? What if the table_name has changed? What if that class doesn’t even exist anymore?

We do know that migrations are tied to our schema.rb version and that we can depend on tables and columns, just not AR models. We use ar_class_for_table to dynamically build a namespaced ActiveRecord model for the given table.


module ArClassForTable
  def ar_class_for_table table, &blk
    klass = Class.new ActiveRecord::Base
    unique_module = Module.new
    ArClassForTable.const_set "Id#{ArClassForTable.unique_id}", unique_module
    unique_module.const_set table.to_s.classify, klass
    klass.table_name = table
    klass.class_eval &blk if block_given?
    klass
  end
  
  def self.unique_id
    @unique_id ||= 0
    @unique_id += 1
  end
end
ActiveRecord::Migration.extend ArClassForTable

In your migration, you can use it as a standard ActiveRecord model, but this one won’t shift out from under you with the rest of your codebase.


  user_klass = ar_class_for_table(:users)
  user_klass.where(...)
  user_klass.create(...)
  # etc..

Testing

Wrangling your database into just the right state for a migration test can be tricky. I wish Rails had a better baked-in harness for testing data migrations, but since I couldn’t find one, here are two options:

Never been squashed

If you have never squashed your migrations down, you still have all the migrations since the beginning of your project. Your test should look like this:

  1. Drop all tables.
  2. Migrate to the version just before the one you wish to test.
  3. Insert test data using ar_class_for_table or SQL.
  4. Run migration.
  5. Assert results using ar_class_for_table or SQL.

Squashed migrations

If you have squashed your migrations by removing old ones and relying solely on your schema.rb to “catch you up,” we need to do a little more work:

  1. Drop all tables.
  2. Load schema.rb.
  3. Migrate down to the version just before the one you wish to test.
  4. Insert test data using ar_class_for_table or SQL.
  5. Run migration.
  6. Assert results using ar_class_for_table or SQL.

Here are all the helpers needed to make the above testing plan work:


require 'spec_helper'
require Rails.root.join('db/migrate/20160712211943_populate_orgs_for_students')
describe PopulateOrgsForStudents, type: :migration do
  def drop_all_tables
    ActiveRecord::Base.connection.tables.each do |table|
      ActiveRecord::Base.connection.drop_table(table)
    end
  end
  def migrate(opts={})
    quietly do
      version = opts[:version] ? opts[:version].to_i : nil
      case opts[:dir]
      when :down
        ActiveRecord::Migrator.down(DB_MIGRATIONS_PATHS, version)
      when :up
        ActiveRecord::Migrator.up(DB_MIGRATIONS_PATHS, version)
      else
        ActiveRecord::Migrator.migrate(DB_MIGRATIONS_PATHS, version)
      end
    end
  end
  def quietly
    old_verbose = ActiveRecord::Migration.verbose
    ActiveRecord::Migration.verbose = false
    ActiveRecord::Base.logger.silence do
      yield
    end
  ensure
    ActiveRecord::Migration.verbose = old_verbose
  end
  before do
    quietly do
      drop_all_tables
      load Rails.root.join('db/schema.rb')
      migrate version: THIS_TEST_VERSION
      migrate dir: :down
    end
  end
  after do
    drop_all_tables
    load Rails.root.join('db/schema.rb')
    migrate
  end
  it 'does a thing' do
    # insert test data
    migrate dir: :up
    expect(...)
  end
end

When dealing with the migration of production data, your code must be future-proof and well tested. I’ve presented a couple of options here. Leave your approaches in the comments.


IoT Made Easy by Particle

I love the Internet of Things (IoT) uprising that is happening right now. I mostly spend my days writing software, but my degree is in electrical/computer engineering, so IoT technologies combine a lot of things that I am interested in.

When the Raspberry Pi first came out, I got very excited and immediately began building my first IoT device. It was a lot of fun, but I quickly discovered that making an IoT product is really hard and requires a lot more work than I had initially thought.

A few years later, I was delighted to find that a company called Particle is trying to make the lives of people like me much easier. Particle is revolutionizing the world of IoT by building infrastructure that supports taking a product from prototype to mass production with minimal time and effort.

What Makes IoT So Hard?

I know that a lot of you could easily wire up a breadboard, write a little code, and toggle an LED from a smartphone over a weekend without much trouble. But that’s not a product. That is a prototype, and it’s really just the tip of the iceberg. To make a product that you could actually feel good about selling to customers requires a lot more thought and a lot more work.

Getting Connected

Connectivity may seem like a basic thing, but it must not be overlooked. A lot of IoT devices don’t have screens and keyboards, so how can users configure their devices to connect to the internet? Do they have to enter their network credentials somehow?

You could use Bluetooth for this, but that creates even more work and potential problems to debug. Also, there’s the additional hardware expense of a Bluetooth radio. Do you have a way of debugging customers’ connection problems? Can/should you use WPS? The average consumer probably doesn’t know what WPS is. How many phone calls are you prepared to handle to help resolve these issues?

In-Field Updates

Handling firmware/software updates is another big challenge. Once you have a large number of devices in the field, how will you update them if you find a bug or want to add a new feature? Can you handle releasing updates to a group of beta testers? Can your device handle an update that is terminated halfway through? These are just a few of the questions you will need to address.

Particle to the Rescue

Particle is trying to make it as simple as possible for people like us to make quality IoT devices and bring them to market quickly and easily. They have created a large range of both hardware and software tools to help make that possible.

The Particle ecosystem works like this: The devices you build are configured to connect to a WiFi network using a smartphone app. Once connected, the devices communicate directly with the Particle Cloud, which provides an interface for your client applications (web, mobile, desktop) to send data to and receive data from the device. All of this is made easy by Particle’s open source libraries for firmware and client software.

Hardware

If you’re just getting started with prototyping, Particle offers development boards called Photon and Electron, which support WiFi and 2G/3G cellular connectivity, respectively. These devices include programmable microcontrollers where you can upload your own code to add your custom functionality. They are also FCC-, CE-, and IC-certified, so you don’t have to worry about that legal stuff. Particle’s firmware libraries take care of all the connectivity work for you and make it possible to send data to the cloud with only a few lines of code.

Once you graduate past the prototyping stage, Particle offers hardware components that you can mount on your own custom PCB. These include an MCU + WiFi component or an MCU + WiFi + PCB antenna that is pre-certified by the FCC.

Cloud

The Particle Cloud is used to manage and track your devices in the field and to associate them with your customers’ accounts. As a business owner, you can access the Particle Console, a web-based dashboard where you can view the status of products in the field, issue firmware updates with the click of a button, view logs from devices, set up custom webhooks, and manage interactions with other services like IFTTT.

Open Source SDKs

One of the most valuable features that Particle offers is a collection of open source libraries. The collection includes firmware, mobile (Android and iOS), and web (Node) SDKs that allow you to get your product off the ground and fully functional very quickly. The mobile SDKs even include customizable UI components that guide users through the entire process of connecting to devices and configuring them to connect to a WiFi network.

All of the Fun, No Boilerplate

Developing an IoT device can be a lot of fun. It can also be a big headache if you attempt to spin up an entire ecosystem yourself. What I love about Particle is that it lets me spend my time on the interesting pieces of product development, because they’ve done all the boilerplate work for me. Thanks, Particle!

I know Particle has a few competitors out there, but I’ll admit, I haven’t looked into them much. If you have experience with any similar companies, I’d love to hear about it.

The post IoT Made Easy by Particle appeared first on Atomic Spin.

Capability Feature Flags for Backward Compatibility

Earlier this year, Ryan Abel wrote about Managing Multiple Releases in a Production Application. One of the strategies he discussed was using “feature flags” to manage when sets of features are released in production. I’ve found that feature flags work well when there’s a need to maintain backward compatibility with multiple versions of an external integration. In my case, it’s with a Bluetooth Low Energy (BLE) device, but the same would hold true for a remote web service API, etc.

Feature Flags

Pete Hodgson has an excellent article on Feature Toggles on Martin Fowler’s site. He describes how these toggles (or flags) allow you to “ship alternative codepaths within one deployable unit and choose between them at runtime.”

The article lists several categories of feature toggles:

  • Release Toggles
  • Experiment Toggles
  • Ops Toggles
  • Permissioning Toggles

I’m going to add my own category to that list: Capability Toggles (although I’m going to switch back to calling them “flags” from here on out).

Capability Flags

These types of flags are set based on the capabilities of an external integration point. Very often, they will be determined by the version number of whatever the application is integrating with.

Alternatively, when working with something that provides some level of discoverability, a version number isn’t strictly necessary. For example, the services and characteristics of a BLE device can be discovered and used to determine the remote device’s capabilities.
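To make this concrete, here is a minimal sketch (TypeScript; the function names, version scheme, and UUID string handling are my own illustrative assumptions, not from the post) of the two ways a capability might be determined — by a reported version number, or by discovery:

```typescript
// Determine a capability from a reported firmware version ("major.minor").
// Illustrative only; a real product might need a full semver comparison.
function supportsByVersion(firmwareVersion: string, minVersion: string): boolean {
  const [maj, min] = firmwareVersion.split(".").map(Number);
  const [reqMaj, reqMin] = minVersion.split(".").map(Number);
  return maj > reqMaj || (maj === reqMaj && min >= reqMin);
}

// Determine a capability from discoverability: the feature is available
// if the required BLE characteristic was found during service discovery.
function supportsByDiscovery(
  discovered: Set<string>,
  requiredCharacteristic: string
): boolean {
  return discovered.has(requiredCharacteristic);
}
```

The discovery-based check needs no version number at all, which is what makes it attractive when the remote device advertises its own capabilities.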

They can even be used to enable or disable functionality based on the version of the OS or hardware the application is running on.

Capability flags will be long-lived. They stick around in the code for as long as backward compatibility is supported for the older version of the device/API. This could potentially be for the lifetime of the application.

Granular Features

I’ve found it’s better to create a separate flag for each individual fine-grained feature. This is preferable even when they could all be lumped together into one large set (e.g., all of the features that require Version 2.0 of an external device).

By keeping the functionality associated with any particular flag tightly focused, you make your code better documented. It will also be much more robust against future changes in the rules for when a feature becomes available.

Decouple Feature Availability

The logic that determines if a feature is available should be decoupled from the place where the flag is used. I like to implement separate classes or functions for each feature flag to encapsulate the rules for determining its availability. When the application’s state changes, the list of “rules” is evaluated again to determine the current set of available features.

This way, the code that’s branching based on a feature flag isn’t concerned about why the feature is/isn’t available. That’s not its responsibility.
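A minimal sketch of that separation might look like this (TypeScript; the state shape and all names are my own assumptions for illustration):

```typescript
// Application state that the rules inspect. Illustrative shape only.
type AppState = { discoveredCharacteristics: Set<string> };

// Each rule encapsulates, in one place, *why* its feature is available.
interface CapabilityRule {
  flag: string;
  isAvailable(state: AppState): boolean;
}

const rules: CapabilityRule[] = [
  {
    // "2A19" is the standard Battery Level characteristic UUID.
    flag: "battery_level_supported",
    isAvailable: (s) => s.discoveredCharacteristics.has("2A19"),
  },
];

// Re-evaluated whenever application state changes; the rest of the app
// only ever sees the resulting set of flags.
function availableFeatures(state: AppState): Set<string> {
  return new Set(rules.filter((r) => r.isAvailable(state)).map((r) => r.flag));
}
```

Adding support for a new capability then means adding one rule here, without touching any of the code that consumes the flags.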

Example: Battery Level

Let’s take, for example, a mobile app that’s a companion to an external BLE device. The first version of the external device doesn’t provide any information about its current battery level to the app, even though this was a big request from users. The second version of the device added support for the standard Battery Service and Battery Level characteristic.

If the app is connected to a device that can provide the battery information, we want it to show a battery level indicator on a Dashboard screen. When connected to older devices, the indicator shouldn’t be visible.

To handle this, we’ll establish a battery_level_supported flag. The rule for determining whether this feature is available or not will be the presence of the Battery Level characteristic in the set of discovered characteristics. When the app first connects to a device, it goes through the discovery process and updates the app’s internal representation of services and characteristics available on the device. If the Battery Level characteristic is in that set, the rule for battery_level_supported adds it to the set of available features.

In the code that determines what should be rendered on the Dashboard, the set of available features is inspected. If it contains battery_level_supported, the battery level indicator is rendered.
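As a sketch of that last step (TypeScript; the widget names are invented for illustration), the Dashboard code branches only on the flag set:

```typescript
// The rendering code checks the flag set; it never asks *why* the
// battery feature is or isn't available.
function dashboardWidgets(features: Set<string>): string[] {
  const widgets = ["connection_status"]; // always rendered
  if (features.has("battery_level_supported")) {
    widgets.push("battery_level_indicator");
  }
  return widgets;
}
```

Connected to a newer device, the flag set contains battery_level_supported and the indicator is rendered; against an older device, the same code simply leaves it out.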

Conclusion

Maintaining backward compatibility with older versions of anything is difficult. Using granular, decoupled feature flags can be a great way to manage support for your application’s changing dependencies.

The post Capability Feature Flags for Backward Compatibility appeared first on Atomic Spin.