Decommissioning my Drupal blog

If you are looking at this blog post right now... my live Drupal site has finally been decommissioned. Or not, quite: these pages are served statically, but the content is still generated by an aging Drupal 6 instance, hiding in a container that I only start when I need it.

Given my current low blog volume, and the lack of time to actually migrate all the content to something like Jekyll or Webby, I took the middle road and pulled the internet-facing Drupal offline. My main concern was to keep a number of articles that people frequently point to at the exact same location as before. That was my main requirement; with no more public-facing Drupal, there is no more worrying about the fact that it really needed updating, no more worrying about potential issues on Wednesday evenings, etc.

My first couple of experiments were with wget / curl, but then I bumped into "Sending a Drupal site into retirement", which pointed me to httrack, a tool that was new to me.

As documented there
httrack http://127.0.0.1:8080/blog -O . -N "%h%p/%n/index%[page].%t" -WqQ%v --robots=0
creates a usable tree, but the root page ends up in blog/blog, which is not really handy.
So the quick hack for that is to go into the blog/blog subdir and regexp the hell out of all the files generated there, writing the results one directory up :)
for file in *; do sed -e "s/\.\.\//\/blog\//g" "$file" > "../$file"; done
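
To make it concrete, here is what that rewrite does to one of the relative links httrack generates one level too deep (the article path below is just an example, not a real page):
# a link that was correct while the page still lived in blog/blog/ ...
echo '<a href="../some-article/index.html">' | sed -e "s/\.\.\//\/blog\//g"
# ... comes out as an absolute link that still resolves once the page sits in blog/:
# <a href="/blog/some-article/index.html">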

httrack however has one annoying default: it puts metadata in the footer of every page it mirrors, noting where it came from and when it was generated. That's very useful for some use cases, but not for mine, as it means that every time I regenerate the site it produces slightly different content rather than identical pages. Luckily I found the -%F "" parameter to keep that footer string empty.
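
Putting the two together, the regeneration step I run against the container looks more or less like this. Treat it as a sketch of my setup: the local URL comes from my container, and where exactly blog/blog lands under the mirror root depends on the -N pattern, so adjust the cd accordingly.
# mirror the Drupal 6 container into the current directory, with an empty
# footer string so repeated runs produce identical pages
httrack http://127.0.0.1:8080/blog -O . -N "%h%p/%n/index%[page].%t" -WqQ%v --robots=0 -%F ""
# then flatten the root page out of blog/blog as described above
cd blog/blog && for file in *; do sed -e "s/\.\.\//\/blog\//g" "$file" > "../$file"; done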

And that is what you are looking at right now ...

There are still a bunch of articles I have in draft .. so maybe, now that I don't have to worry about the Drupal part of things, I might blog more frequently again. Or not..

How GitHub Uses GitHub to Build GitHub

I wrote a post a while back linking to an interesting video about the culture at GitHub, entitled: Optimizing for Happiness – why you want to go work at Github!.

Since then, I’ve watched a few other interesting talks about the culture and how they work at GitHub, and two in particular are worth noting here.

Firstly, Zach Holman, one of the early “Githubbers”, recently gave a talk about “How GitHub Uses GitHub to Build GitHub”:

Build features fast. Ship them. That’s what we try to do at GitHub. Our process is the anti-process: what’s the minimum overhead we can put up with to keep our code quality high, all while building features as quickly as possible? It’s not just features, either: faster development means happier developers. This talk will dive into how GitHub uses GitHub: we’ll look at some of our actual Pull Requests, the internal apps we build on our own API, how we plan new features, our Git branching strategies, and lots of tricks we use to get everyone – developers, designers, and everyone else involved with new code. We think it’s a great way to work, and we think it’ll work in your company, too.

You can watch the video here and also check out a series of blog posts he wrote on the same subject.

The second talk I’d recommend is one I had the pleasure of seeing live at a local conference I attended (DIBI Conference). It’s by Corey Donohoe (@atmos):

The talk will cover the metrics driven approach GitHub uses to analyze performance and growth of our product. It will cover deployment strategies for rapid customer feedback as well as configuration management to ensure reproducibility.

You can watch the video here.

Both are great talks and well worth a watch.

Optimizing for Happiness – why you want to go work at Github!

If you are a manager or high up in any company then I highly recommend you watch this video of a recent talk by Tom Preston-Werner, co-founder of Github. It’s around an hour in length but I urge you to take the time to watch it – it’s packed full of great advice all the way through.

The way traditional businesses approach the management and organization of creative, intellectual workers is wrong. By throwing away everything that blocks productivity (meetings, deadlines, managers, titles, strict vacation policies, etc) and treating your employees as the responsible adults that they are, huge amounts of potential can be unlocked and employee happiness and retention can be at unprecedented highs. At GitHub we’ve embraced a philosophy that gets things done and strips away policy and procedure in favor of smart decision making and personal responsibility. Come see how we make it work and how you can reap the same benefits in your own company.

The video goes into both how they recruit and how they run a profitable and productive company.

At GitHub we don’t have meetings. We don’t have set work hours or even work days. We don’t keep track of vacation or sick days. We don’t have managers or an org chart. We don’t have a dress code. We don’t have expense account audits or an HR department.

We pay our employees well and give them the tools they need to do their jobs as efficiently as possible. We let them decide what they want to work on and what features are best for the customers. We pay for them to attend any conference at which they’ve gotten a speaking slot. If it’s in a foreign country, we pay for another employee to accompany them because traveling alone sucks. We show them the profit and loss statements every month. We expect them to be responsible.

We make decisions based on the merits of the arguments, not on who is making them. We strive every day to be better than we were the day before.

We hold our board meetings in bars.

We do all this because we’re optimizing for happiness, and because there’s nobody to tell us that we can’t.

You can watch the video here.

Tell me now that you don’t want to work at Github?