Leveling up with POWER8

Over the past year I've had the privilege of working with a new POWER architecture, ppc64le (PowerPC 64-bit Little Endian). We've had a long relationship with IBM at the Open Source Lab (OSL), providing POWER architecture resources to FOSS projects.

Earlier this year, IBM graciously donated three powerful POWER8 machines to the OSL to replace our aging FTP cluster. That presented a few challenges we needed to overcome:

  • We're primarily an x86 shop, so we needed to get used to the POWER8 platform in a production environment
  • This platform is extremely new, so we're on the leading edge using this hardware in a production environment
  • The IBM engineers recommended using the new ppc64le architecture, which was still gaining support from Red Hat at the time
  • There was no CentOS ppc64le build yet, and RHEL 7 didn't officially support the architecture (support was added in 7.1)
  • This platform uses a different boot process than other machines, namely the OPAL firmware, which uses the Petitboot boot loader

POWER8 architecture differences

We've been using IBM POWER7 hardware for many years, which requires a proprietary management system called the Hardware Management Console (HMC). It was an extremely difficult system to use and foreign to how we normally manage systems. So when we got our first POWER8 system, I was delighted to see that IBM did away with the HMC and provided an abstraction layer called OPAL (also known as skiboot) to manage the system. In practice, it means these machines use open standards and boot much like an x86 machine, i.e. what we're used to.

When you first boot a POWER8 machine that is using OPAL, you use IPMI to connect to the serial console (which needs to be enabled in the FSP). The FSP, or Flexible Service Processor, is the low-level firmware that manages the machine's hardware. When the machine first boots, you'll see a Linux kernel come up followed by a boot prompt running Petitboot, a kexec-based boot loader.

Petitboot basically allows you to do the following:

  • Auto-boots a kernel
  • Gives you a bash prompt to diagnose the machine or setup hardware RAID
  • Gives you a sensible way to install an OS remotely

When it boots an installed system, Petitboot does a kexec straight into the installed OS kernel. Overall, it's an easy and simple way to remotely manage a machine.
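
For reference, connecting to the console over IPMI looks something like this (a sketch; the FSP hostname, user, and password are placeholders, and serial-over-LAN has to be enabled on the FSP first):

# Open a serial-over-LAN session to the machine's console
ipmitool -I lanplus -H fsp.example.org -U <user> -P <password> sol activate
# Power cycle the box if you need to get back to the Petitboot menu
ipmitool -I lanplus -H fsp.example.org -U <user> -P <password> chassis power cycle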

Operating System Setup

The next major challenge was creating a stable operating system environment. When I first started testing these machines, I was using a beta build of RHEL 7 for ppc64le that contained bugs. Thankfully 7.1 was released recently, which provided a much more stable installer and platform in general. Getting the OS installed was the easy part; the more challenging part was getting our normal base system up with Chef. This required manually building a Chef client for ppc64le, since none existed yet. We ended up building the client with the Omnibus Chef build on a guest, which meant we had to bootstrap the build environment by hand a little as well.
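
For the curious, the build boiled down to roughly the following on a ppc64le guest (a sketch from memory; the omnibus-chef repository URL and exact invocation are assumptions and may have changed since):

# Fetch the Omnibus project definition for Chef and build a native package
git clone https://github.com/chef/omnibus-chef.git
cd omnibus-chef
bundle install
bundle exec omnibus build chef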

The next challenge was installing the base packages plus the packages we need for running our FTP mirrors. Most of those packages are in EPEL; however, there are no ppc64le builds for the architecture yet. So we ended up having to build many of the dependencies with mock and host them in a local repository. Thankfully we didn't require many dependencies, and none of the builds had compile problems.
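
The per-package workflow looked roughly like this (a sketch; the mock configuration name and repository paths are hypothetical, since no official epel-7-ppc64le config existed at the time):

# Rebuild the source RPM in a clean ppc64le chroot
mock -r epel-7-ppc64le --rebuild some-package.src.rpm
# Copy the results into the local repository and regenerate its metadata
cp /var/lib/mock/epel-7-ppc64le/result/*.rpm /srv/repo/ppc64le/
createrepo /srv/repo/ppc64le/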

Storage layout and configuration

One of the other interesting parts of this project was the storage configuration for the servers. These machines came with five 387GB SSDs and ten 1.2TB SAS disks. The hardware RAID controller offers a feature called RAID6T2, which provides a two-tier RAID solution exposed as a single block device. Essentially, the SAS disks provide the cold storage while the SSDs act as a hot cache for reads and writes.

Being a lover of open source, I was interested in seeing how this performed against other technologies such as bcache. While I no longer have all of the numbers, the hardware RAID outperformed bcache by quite a bit. Eventually I'd like to see if there are other tweaks so we aren't reliant on a proprietary RAID controller, but for now the controller is working flawlessly.
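
For reference, the bcache setup I tested was along these lines (a sketch with placeholder device names; the exact layout we benchmarked differed):

# Use the SAS array as the backing device and an SSD as the cache device
make-bcache -B /dev/sdb
make-bcache -C /dev/sdc
# Attach the cache set to the backing device (UUID from bcache-super-show /dev/sdc)
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach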

Production deployment and results

We successfully deployed all three new POWER8 servers without any issue on June 18, 2015. We're already seeing a large increase in utilization on the new machines as they have far more I/O capacity and throughput than the previous cluster. I've been extremely impressed with how stable and how fast these machines are.

Since we're on the leading edge of using these machines, I'm hoping to write more detailed and technical blog posts on the various steps I went through.

Ganeti Tutorial PDF guide

As I mentioned in my previous blog post, trying out Ganeti can be cumbersome, so I went out and created a platform for testing it using Vagrant. Now I have a PDF guide that walks you through some of the basic steps of using Ganeti, including testing a failover scenario. It's an updated version of a guide I wrote for OSCON last year. Give it a try and let me know what you think!

Trying out Ganeti with Vagrant

Ganeti is a very powerful tool, but people often have to hunt down spare hardware just to try it out. I also wanted a way to easily test new features of Ganeti Web Manager (GWM) and Ganeti Instance Image without requiring additional hardware. While I do have the convenience of access to hardware at the OSU Open Source Lab for my testing, I'd rather not always depend on that. Sometimes I like trying new and crazier things, and I'd rather not break a test cluster all the time. So I decided to see if I could use Vagrant to create a Ganeti test environment on my own workstation and laptop.

This all started last year while I was preparing for my OSCON tutorial on Ganeti, manually creating VirtualBox VMs to deploy Ganeti nodes for the tutorial. It worked well, but soon after I gave the tutorial I discovered Vagrant and decided to adapt my OSCON tutorial to it. It's a bit like the movie Inception, of course, but I was able to successfully get Ganeti working with Ubuntu and KVM (technically just qemu) and mostly functional VMs inside the nodes. I was also able to quickly create a three-node cluster to test failover with GWM and many facets of the webapp.

The Vagrant setup I have has two parts:

  1. Ganeti Tutorial Puppet Module
  2. Ganeti Vagrant configs

The Puppet module I wrote is very basic and isn't really intended for production use. I plan to refactor it in the coming months into a completely modular, production-ready set of modules. The node boxes are currently running Ubuntu 11.10 (I've been having some minor issues getting 12.04 to work), and the internal VMs you can deploy are based on the CirrOS Tiny OS. I also created several branches in the vagrant-ganeti repo for testing various versions of Ganeti, which has helped the GWM team implement better support for 2.5 in the upcoming release.

To get started using Ganeti with Vagrant, you can do the following:

# Grab the repo and its submodules, then bring up and log into the first node
git clone git://github.com/ramereth/vagrant-ganeti.git
cd vagrant-ganeti
git submodule update --init
gem install vagrant
vagrant up node1
vagrant ssh node1
# Inside node1, verify the cluster is healthy
gnt-cluster verify
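
Once node1 is up, you can bring up additional nodes and join them to the cluster the same way (the node names and hostnames below are assumptions based on the three-node setup described above; check the README for the exact names the Vagrantfile defines):

vagrant up node2
vagrant ssh node1
# From node1, add the new node and re-verify the cluster
gnt-node add node2.example.org
gnt-cluster verify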

Moving forward I plan to implement the following:

  • Update tutorial documentation
  • Support for Xen and LXC
  • Support for CentOS and Debian as the node OS

Please check out the README for more instructions on how to use the Vagrant+Ganeti setup. If you have any feature requests please don’t hesitate to create an issue on the github repo.

Rebalancing Ganeti Clusters

One of the best features of Ganeti is its ability to grow linearly by easily adding new servers. We recently purchased a new server to expand our ever-growing production cluster and needed to rebalance it. Adding the node and expanding the cluster consisted of the following steps:

  1. Install the base OS on the new node
  2. Add the node to your configuration management system of choice and/or install Ganeti
  3. Add the node to the cluster with gnt-node add
  4. Check Ganeti using the verification action
  5. Use htools to rebalance the cluster

For simplicity's sake, I'll cover the last three steps.

Adding the node

Assuming you’re using a secondary network, this is how you would add your node:

gnt-node add -s <secondary ip> newnode

Now let's check and make sure Ganeti is happy:

gnt-cluster verify

If all is well, continue on; otherwise, try to resolve any issues that Ganeti is complaining about.

Using htools

Make sure you install ganeti-htools on all your nodes before continuing. It requires Haskell, so just be aware of that requirement. Let's see what htools wants to do first:

hbal -m ganeti.example.org
Loaded 5 nodes, 73 instances
Group size 5 nodes, 73 instances
Selected node group: default
Initial check done: 0 bad nodes, 0 bad instances.
Initial score: 41.00076094
Trying to minimize the CV...
 1. openmrs.osuosl.org             g1.osuosl.bak:g2.osuosl.bak => g5.osuosl.bak:g1.osuosl.bak 38.85990831 a=r:g5.osuosl.bak f
 2. stagingvm.drupal.org           g3.osuosl.bak:g1.osuosl.bak => g5.osuosl.bak:g3.osuosl.bak 36.69303985 a=r:g5.osuosl.bak f
 3. scratchvm.drupal.org           g2.osuosl.bak:g4.osuosl.bak => g5.osuosl.bak:g2.osuosl.bak 34.61266967 a=r:g5.osuosl.bak f

<snip>

 28. crisiscommons1.osuosl.org      g3.osuosl.bak:g1.osuosl.bak => g3.osuosl.bak:g5.osuosl.bak 4.93089388 a=r:g5.osuosl.bak
 29. crisiscommons-web.osuosl.org   g2.osuosl.bak:g1.osuosl.bak => g1.osuosl.bak:g5.osuosl.bak 4.57788814 a=f r:g5.osuosl.bak
 30. aqsis2.osuosl.org              g1.osuosl.bak:g3.osuosl.bak => g1.osuosl.bak:g5.osuosl.bak 4.57312216 a=r:g5.osuosl.bak
Cluster score improved from 41.00076094 to 4.57312216
Solution length=30

I've shortened the actual output for the sake of this blog post. Htools automatically calculates which virtual machines to move, and how, using the fewest operations. In most of these moves, a VM is either simply migrated; migrated with its secondary storage replaced; or migrated, secondary storage replaced, and migrated again. In our environment we needed to move 30 VMs out of the 73 hosted on the cluster.

Now let's see what commands we would actually need to run:

hbal -C -m ganeti.example.org
Commands to run to reach the above solution:

 echo jobset 1, 1 jobs
 echo job 1/1
 gnt-instance replace-disks -n g5.osuosl.bak openmrs.osuosl.org
 gnt-instance migrate -f openmrs.osuosl.org

 echo jobset 2, 1 jobs
 echo job 2/1
 gnt-instance replace-disks -n g5.osuosl.bak stagingvm.drupal.org
 gnt-instance migrate -f stagingvm.drupal.org

 echo jobset 3, 1 jobs
 echo job 3/1
 gnt-instance replace-disks -n g5.osuosl.bak scratchvm.drupal.org
 gnt-instance migrate -f scratchvm.drupal.org

<snip>

 echo jobset 28, 1 jobs
 echo job 28/1
 gnt-instance replace-disks -n g5.osuosl.bak crisiscommons1.osuosl.org

 echo jobset 29, 1 jobs
 echo job 29/1
 gnt-instance migrate -f crisiscommons-web.osuosl.org
 gnt-instance replace-disks -n g5.osuosl.bak crisiscommons-web.osuosl.org

 echo jobset 30, 1 jobs
 echo job 30/1
 gnt-instance replace-disks -n g5.osuosl.bak aqsis2.osuosl.org

Here you can see the commands it wants you to execute. You can either put these all in a script and run them, split them up, or run them one by one. In our case I ran them one by one just to be sure we didn't run into any issues. I had a couple of VMs that didn't migrate properly, but those were easily fixed. I split this up into a three-day migration, running ten jobs a day.
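
If you do want to script it, one approach is to capture just the command lines from the hbal output and run them with a shell that stops on the first error (a sketch; adjust the pattern and file name to taste):

# Keep only the echo/gnt-instance lines from the plan above
hbal -C -m ganeti.example.org | grep -E '^ *(echo|gnt-instance)' > rebalance.sh
# Execute them, aborting if any command fails
sh -e rebalance.sh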

The length of time that it takes to move each VM depends on the following factors:

  1. How fast your secondary network is
  2. How busy the nodes are
  3. How fast your disks are

Most of our VMs ranged from 10G to 40G in size, and on average each move took around 10-15 minutes to complete. Additionally, make sure you read the hbal man page to see all the various features and options you can tweak. For example, you can tell hbal to run all of the commands for you, which might be handy for automated rebalancing.
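
From memory, that auto-execution mode looks something like the following when run on the cluster master (an assumption on my part; double-check the flags against the hbal man page for your version):

# Use the local Luxi backend and submit the moves as jobs directly
hbal -L -X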

Conclusion

Overall the rebalancing of our cluster went without a hitch outside of a few minor issues. Ganeti made it really easy to expand our cluster with minimal to zero downtime for our hosted projects.
