Sorting terraform variable blocks

When developing terraform code, it is easy to end up with a bunch of variable definitions that are listed in no particular order.

Here's a bit of python code that will sort terraform variable definitions. Use it as a filter from inside vim, or as a standalone tool if you have all your variable definitions in one file.

e.g.:

tf_sort < variables.tf > variables.tf.sorted
mv variables.tf.sorted variables.tf
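
Or, from inside vim, filter the whole buffer through the script (assuming it's on your PATH as tf_sort):

:%!tf_sort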

Here's the code:

#!/usr/bin/env python
# sort terraform variables

import sys
import re

# this regex matches terraform variable definitions (flat blocks only,
# with no nested braces in the body); we capture the variable name so
# we can sort on it
pattern = r'(variable ")([^"]+)(" {[^}]*})'


def process(content):
    # sort the content (a list of tuples) on the second item of the tuple
    # (which is the variable name)
    matches = sorted(re.findall(pattern, content), key=lambda x: x[1])

    # iterate over the sorted list and output them
    for match in matches:
        print ''.join(map(str, match))

        # print a blank line between blocks, but not after the last one
        if match != matches[-1]:
            print


# check if we're reading from stdin
if not sys.stdin.isatty():
    stdin = sys.stdin.read()
    if stdin:
        process(stdin)

# process any filenames on the command line
for filename in sys.argv[1:]:
    with open(filename) as f:
        process(f.read())
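
To see what the regex actually captures, here's a quick interactive check against two made-up variable blocks - each match is a 3-tuple, and we sort on the middle element, the variable name:

>>> import re
>>> pattern = r'(variable ")([^"]+)(" {[^}]*})'
>>> content = '''variable "region" {
...   default = "eu-west-1"
... }
... variable "ami" {
...   default = "ami-123456"
... }
... '''
>>> [m[1] for m in sorted(re.findall(pattern, content), key=lambda x: x[1])]
['ami', 'region']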

My Oracle Support: Authenticate with CLI

I had been wondering for some time how I could authenticate on MOS from the CLI. Now I have my answer: by reversing the whole SSO auth process in a python script that generates a cookies.txt file, usable with wget.

I know that a simpler method exists using wget directly, but as it doesn't work with every MOS page, I prefer a more generic way of doing things.

Here is a quick how-to:

  1. Edit MOSLogin.py and set up the variables inside.
  2. Install python dependencies (linux/debian):
apt-get install python-pip
pip install BeautifulSoup
pip install requests
  3. Run the script:
$ ./login.py 
[-] Initialization...done
[-] Gathering JSESSIONID..done
[-] Trying loginSuccess.jsp...done
[-] Signing in...done
[-] Trying to submit credentials...done
[-] Checking that credentials are valid...done
[-] Logged-in.
  4. Use the cookies.txt with wget:
wget --load-cookies=/tmp/cookies.txt --save-cookies=/tmp/cookies.txt --no-check-certificate http://MOSURL

With a little bit of time and fun, you can imagine all sorts of tools built on top of this to ease your sysadmin life. You can even fetch your SR summary/updates using this cookie...
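
If you want to roll your own variant, the core trick is to drive the whole SSO dance from a single requests session whose cookie jar is a MozillaCookieJar, so it can be saved in the Netscape format that wget understands. A minimal sketch (the actual login requests are left out - see MOSLogin.py for the real sequence):

import cookielib            # http.cookiejar on Python 3
import requests

jar = cookielib.MozillaCookieJar('/tmp/cookies.txt')
session = requests.Session()
session.cookies = jar       # requests reads and writes cookies through this jar

# ... do the SSO steps here with session.get()/session.post():
# fetch the login page, scrape the hidden form fields (BeautifulSoup),
# submit the credentials and follow the redirects ...

jar.save(ignore_discard=True, ignore_expires=True)
# /tmp/cookies.txt is now usable with wget --load-cookies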

Get the MOSLogin.py file...

OSUOSL is hiring: Full-Time Developer in Corvallis

Want to work at the coolest place for open source and support the missions of some of the most important open source projects?

Oregon State University’s Open Source Lab is recruiting a full-time software developer who will analyze, design, and test software code for Ganeti Web Manager, the Protein Geometry Database and several other homegrown Open Source Lab projects. Development at the OSUOSL includes collaborations with academic and research faculty internal and external to OSU.

Reporting to the Operations Manager of the Open Source Lab, the Analyst Programmer will contribute in-depth knowledge of open source software development using languages such as Python, Ruby and Java. The person in this position is responsible for developing and modifying complex software applications, documenting code and development processes, and overseeing student software developers. This position will allow the candidate to interact with many of the open source projects hosted by the OSL. We seek candidates with a high level of initiative, motivation, and a high degree of success in previous endeavors.

To review a more detailed job description and apply, check out the Analyst Programmer role on Oregon State University’s Jobs page.

On BeautifulSoup

I’m doing some fairly hardcore screenscraping using Python, so I decided to use BeautifulSoup. After all:

Beautiful Soup won’t choke if you give it bad markup

Oh yes it will:

<html>
 <body>
  <a href="/""></a>
 </body>
</html>

  File "/usr/lib/python2.6/HTMLParser.py", line 115, in error
    raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: malformed start tag, at line 3, column 14
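
For reference, that traceback comes from nothing more exotic than handing the snippet straight to the parser, something along these lines (BeautifulSoup 3 style import):

from BeautifulSoup import BeautifulSoup

markup = '''<html>
 <body>
  <a href="/""></a>
 </body>
</html>'''

BeautifulSoup(markup)   # blows up with the HTMLParseError shown above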

lxml parses this fine.

The other issue I’m seeing is the old document.write('<scr' + 'ipt>') trick. Even if it’s enclosed in a CDATA block, BeautifulSoup chokes on it.

lxml, again, parses it fine. And it has built-in CSS selector and XPath support.
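
Here's roughly what the lxml side looks like (a small sketch on the same broken markup; on newer lxml the cssselect method needs the external cssselect package):

import lxml.html

broken = '''<html>
 <body>
  <a href="/""></a>
 </body>
</html>'''

doc = lxml.html.fromstring(broken)   # libxml2 recovers instead of raising
print(doc.xpath('//a/@href'))        # XPath support is built in
print(doc.cssselect('a'))            # CSS selectors too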

DNS Zone File Fun with Python and Emacs

Sometimes we’re faced with a boring, manual, laborious job which really needs to be done, will take a fairly long time, and be pretty unpleasant.

Whenever I’m faced with something like this, especially if it involves text, I try to make it interesting by setting myself the challenge of writing a script and/or using my editor to do the job faster than had I done it manually.

In this case, I had two problems. Firstly, I had just made about 50 entries into a reverse DNS zone.

These look like this:

1 IN PTR router.nso

Having done this manually, and rather slowly, I then had to make corresponding forward entries, of the form:

router in A 192.168.1.1

My first thought was to do it in emacs with re-search-forward and replace-match, but this wasn’t easy to do, so my next plan was to write a python script that does it.

for entry in open("reverselist","r").readlines():
  tokens=entry.strip().split()
  ip=tokens[0]
  name=tokens[3]
  hostname=name.split(".")[0]
  print "%s IN A 192.168.1.%s" % (hostname, ip)

This takes an input file of a list of reverse zone entries, and spits out forward entries. It’s very simple - first we tokenise the input file:

>>> '1  IN PTR router.nso'.strip().split()
['1', 'IN', 'PTR', 'router.nso']

We then take the first and fourth tokens (python counts from 0) - the last octet of the IP and the full name - then strip the name down to just the short hostname:

>>> ['1', 'IN', 'PTR', 'router.nso'][0]
'1'
>>> ['1', 'IN', 'PTR', 'router.nso'][3]
'router.nso'
>>> ['1', 'IN', 'PTR', 'router.nso'][3].split('.')[0]
'router'

Then we just print them out again in the order we want.

I was then faced with my second problem. The (large) forward zone file is horribly and inconsistently formatted. Adding entries is always a pain, because if you just hit tab a few times you’ll never line up with the rest of the entries. Tidying it up has been on my todo list for a while, so I decided now really was the time to use the power of emacs!

So my task was how to convert various lines such as:

nst-xp-ie8	IN	A	192.168.20.236
svn		IN	CNAME	cap-svn

with their various spacing issues into a consistently spaced file.

In order to do this, we make use of the emacs regexp-replace function. This enables us to describe a pattern to match, and then define what to replace it with.

Emacs regular expressions are a lot like sed’s, so I got there pretty quickly.

The first pattern to match is:

\(\w+\)\s-*\(IN\)\s-*\(A\)\s-*\(\S-+\)

  • \w+ means: match one or more word-constituent characters
  • \s- means: match any white space
  • \S- means: match anything that is NOT whitespace
  • \(…\) is a construct which stores the match inside the brackets, so we can use it again later.

So this says:

Match the hostname, IN, A and the IP, and all the space in between; store the hostname, IN, A and the IP in variables.

The replacement is much simpler:

\1^I^I\2^I\3^I\4

\1, \2 etc are the variables, holding the matched hostname etc, and ^I is the tab character.
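
If you would rather stay in Python for this bit too, a rough equivalent of the search and replace looks like this (note that Python's \w doesn't cover the hyphens in these hostnames, so the character class is widened here):

import re

# rough Python equivalent of the emacs pattern and replacement above
pattern = re.compile(r'([\w-]+)\s*(IN)\s*(A)\s*(\S+)')
line = "nst-xp-ie8   IN  A   192.168.20.236"
print(pattern.sub(r'\1\t\t\2\t\3\t\4', line))
# -> nst-xp-ie8<TAB><TAB>IN<TAB>A<TAB>192.168.20.236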

Simple really, but more fun, and done in less time than doing it manually.
