Find all variables used in a terraform module

Want to make sure all the variables declared in a terraform module are actually used in the code?

This code lists all variables used in each of the sub-directories containing terraform code.

It started off as a one-liner but, as usual, the code to make it look pretty is bigger than the main functional code!

#!/usr/bin/env bash

set -euo pipefail

default_ul_char=-

main() {
  process
}

print_underlined () {
  local text="$1" ; shift
  local ul_char
  if [[ -n ${1:-} ]] ; then
    ul_char="$1" ; shift
  else
    ul_char=$default_ul_char
  fi
  printf '%s\n%s\n' "$text" "${text//?/$ul_char}"
}

process() {
  # loop over all directories
  while read -r dir ; do
    pushd "$dir" >/dev/null
    echo
    print_underlined "$dir" 
    # get a unique list of variables used in all .tf files in this directory
    sort -u < <(
      perl -ne 'print "$1\n" while /var\.([\w-]+)/g' ./*.tf
    )
    popd > /dev/null
  done < <(
    # get a unique list of directories containing terraform files
    # starting in the present working directory
    sort -u < <(
      find . -name '*.tf' -exec dirname {} \;
    )
  )
}

main "$@"

Generate a random password for an RDS MySQL instance

I needed to generate random master passwords for several Amazon RDS MySQL instances.

The specification is as follows:

The password for the master database user can be any printable ASCII character except "/", """ (i.e. a double quote), or "@". Master password constraints differ for each database engine.

MySQL, Amazon Aurora, and MariaDB

  • Must contain 8 to 41 characters.

I came up with this:

head -n 1 < <(fold -w 41 < <(tr -d '/"@' < <(LC_ALL=C tr -dc '[:graph:]' < /dev/urandom)))

If you prefer to use pipes (rather than process substitution) the command would look like this:

cat /dev/urandom | LC_ALL=C tr -dc '[:graph:]' | tr -d '/"@' | fold -w 41 | head -n 1

Notes:

  • take a stream of random bytes
  • remove all chars not in the set specified by [:graph:], i.e. get rid of everything that is not a printable ASCII character
  • remove the chars that are explicitly not permitted by the RDS password specification
  • split the stream into lines 41 characters long, i.e. the maximum password length
  • stop after the first line
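If you generate these passwords regularly, the pipe version is easy to wrap in a small function (a sketch; the function name and the default length are my own choices):

rds_password() {
  # length defaults to the RDS maximum of 41; the minimum is 8
  local length="${1:-41}"
  if (( length < 8 || length > 41 )); then
    echo "length must be between 8 and 41" >&2
    return 1
  fi
  LC_ALL=C tr -dc '[:graph:]' < /dev/urandom | tr -d '/"@' | fold -w "$length" | head -n 1
}

For example, rds_password 16 produces a 16-character password.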

How (and Why) to Log Your Entire Bash History

For the last three and a half years, every single command I’ve run from the command line on my MacBook Pro has been logged to a set of log files.

Uncompressed, these files take up 16 MB of disk space on my laptop. But the return I’ve gotten on that small investment is immense. Being able to go back and find any command you’ve run in the past is so valuable, and it’s so easy to configure, you should definitely set it up today. I’m going to share how to do this so you can take advantage of it as well.

Bash Configuration File

You’ll need to configure an environment variable so that it’s loaded in every command line session. On my MacBook Pro, I use the .bash_profile file. On other operating systems, the .bashrc file is an option. See this blog post on .bash_profile vs .bashrc for more on the differences.

PROMPT_COMMAND

The Bash Prompt HOWTO describes the PROMPT_COMMAND environment variable as follows:

Bash provides an environment variable called PROMPT_COMMAND. The contents of this variable are executed as a regular Bash command just before Bash displays a prompt.

We’re going to set the PROMPT_COMMAND variable to be something that logs the most recent line of history to a file. To do this, add the following to your chosen Bash configuration file (.bash_profile for me):


export PROMPT_COMMAND='if [ "$(id -u)" -ne 0 ]; then echo "$(date "+%Y-%m-%d.%H:%M:%S") $(pwd) $(history 1)" >> ~/.logs/bash-history-$(date "+%Y-%m-%d").log; fi'

First, this checks to make sure we’re not root.

If that checks out, it appends a line that includes the current timestamp, the current working directory, and the last command executed to a log file that includes the current date in the filename.
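One small caveat: the redirection won't create the ~/.logs directory for you, so create it once before opening a new session:

mkdir -p ~/.logs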

Having the commands stored in separate files like this really helps when you’re trying to find a command you ran sometime last month, for example.


> grep -h logcat ~/.logs/bash-history-2016-04*
2016-04-01.10:18:03 /Users/me 66555  adb logcat
2016-04-01.10:19:56 /Users/me 66555  adb logcat
2016-04-01.11:01:36 /Users/me 66555  adb logcat
2016-04-05.09:50:25 /Users/me/git/android-project 66368  adb logcat
2016-04-05.13:42:54 /Users/me/git/android-project 66349  adb -s emulator-5554 logcat
2016-04-06.10:40:08 /Users/me/git/android-project 66390  adb logcat
2016-04-06.10:48:54 /Users/me/git/android-project 66342  adb logcat

Conclusion

It will only take a few seconds to update your PROMPT_COMMAND so that it logs every command to a file.

And the next time you’re trying to remember the command line options you used with find that one time (but can’t find in your current session’s history), you’ll be able to look it up in the log files.

Oh, and if you want to know how many times you’ve done a git push in the last three and a half years, you can look that up, too (5,585 git pushes for me)!


Bash Completion, Part 2: Programmable Completion

Don’t miss the previous post in this series: Bash Tab Completion


With Bash’s programmable completion functionality, we can create scripts that allow us to tab-complete arguments for specific commands. We can even include logic to handle deeply nested arguments for subcommands.

Programmable completion is a feature I’ve been aware of for some time, but I only recently took the time to figure out how it works. I’ll provide some links to more in-depth treatments at the end of this post, but for now, I want to share what I learned about using these other resources.

Completion Specifications

First, let’s take a look at what “completion specifications” (or “compspecs”) we have in our shell already. This list of compspecs essentially acts as a registry of handlers that offer completion options for different starting words. We can print a list of compspecs for our current shell using complete -p. The complete built-in is also used to register new compspecs, but let’s not get ahead of ourselves.

Here’s a sampling of compspecs from my shell:

$ complete -p
complete -o nospace -F _python_argcomplete gsutil
complete -o filenames -o nospace -F _pass pass
complete -o default -o nospace -F _python_argcomplete gcloud
complete -F _opam opam
complete -o default -F _bq_completer bq
complete -F _rbenv rbenv
complete -C aws_completer aws

Here, we have some rules for completing the arguments to the following commands:

  • gsutil
  • pass
  • gcloud
  • opam
  • bq
  • rbenv
  • aws

If I type any one of those commands into my shell followed by <TAB><TAB>, these rules will be used to determine the options Bash offers for completion.

OK, so, what are we looking at? Each of the compspecs in our list starts with complete and ends with the name of the command where it will provide programmable completion. Some of the compspecs here include some -o options, and we’ll get to those later. Each of these compspecs includes either -C or -F.

Completion Commands

The compspec for aws uses -C to specify a “completion command,” which is a command somewhere in our $PATH that will output completion options.

As input, the command will receive from Bash two environment variables: COMP_LINE and COMP_POINT. These represent the current line being completed, and the point at which completion is taking place.

As output, the completion command is expected to produce a list of completion options (one per line). I won’t go into the details of this approach, but if you’re curious, you can read the source for the aws_completer command provided by Amazon’s aws-cli project.
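To get a feel for the -C mechanism, here's a minimal, hypothetical completion command (the mytool command and its subcommands are invented). Along with COMP_LINE and COMP_POINT, Bash also passes the word currently being completed as the second positional argument, which is all a simple prefix filter needs:

#!/usr/bin/env bash
# mytool-completer: a made-up "-C" completion command for a hypothetical mytool.
# $2 is the word being completed; print one matching option per line.
word="${2:-}"
for option in start stop status restart ; do
  [[ $option == "$word"* ]] && echo "$option"
done
exit 0

It would be registered with complete -C mytool-completer mytool (again, purely illustrative).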

Completion Functions

A more common approach to completion is the use of custom completion functions. Each of the compspecs containing -F registers a completion function. These are simply Bash functions that make use of environment variables to provide completion options. By convention, completion functions begin with an underscore character (_), but there’s nothing magical about the function names.

Like the completion commands, completion functions receive the COMP_LINE and COMP_POINT environment variables. However, rather than providing line-based text output, completion functions are expected to set the COMPREPLY environment variable to an array of completion options. In addition to COMP_LINE and COMP_POINT, completion functions also receive the COMP_WORDS and COMP_CWORD environment variables.

Let’s look at some of these completion functions to see how they work. We can use the Bash built-in type command to print out these function definitions (even before we know where they came from).

$ type _rbenv
_rbenv is a function
_rbenv ()
{
    COMPREPLY=();
    local word="${COMP_WORDS[COMP_CWORD]}";
    if [ "$COMP_CWORD" -eq 1 ]; then
        COMPREPLY=($(compgen -W "$(rbenv commands)" -- "$word"));
    else
        local words=("${COMP_WORDS[@]}");
        unset words[0];
        unset words[$COMP_CWORD];
        local completions=$(rbenv completions "${words[@]}");
        COMPREPLY=($(compgen -W "$completions" -- "$word"));
    fi
}

This example demonstrates a few common patterns. We see that COMP_CWORD can be used to index into COMP_WORDS to get the current word being completed. We also see that COMPREPLY can be set in one of two ways, both using some external helpers and a built-in command we haven’t seen yet: compgen. Let’s run through some possible input to see how this might work.

If we type:

$ rbenv h<TAB><TAB>

We’ll see:

$ rbenv h
help hooks

In this case, COMPREPLY comes from the first branch of the if statement (COMP_CWORD is 1). The local variable word is set to h, and this is passed to compgen along with a list of possible commands generated by rbenv commands. The compgen built-in returns only those options from a given wordlist (-W) that start with the current word of the user’s input, $word. We can perform similar filtering with grep:

$ rbenv commands | grep '^h'
help
hooks

The second branch provides completion options for subcommands. Let’s walk through another example:

$ rbenv hooks <TAB><TAB>

Will give us:

$ rbenv hooks
exec    rehash  which

Each of these options simply comes from rbenv completions:

$ rbenv completions hooks
exec
rehash
which

And since we haven’t provided another word yet, compgen is filtering with an empty string, analogous to:

$ rbenv completions hooks | grep '^'
exec
rehash
which

If we instead provide the start of a word, we’ll have it completed for us:

$ rbenv hooks e<TAB>

Will give us:

$ rbenv hooks exec

In this case, our compgen invocation might be something like:

$ compgen -W "$(rbenv completions hooks)" -- "e"
exec

Or we can imagine with grep:

$ rbenv completions hooks | grep '^e'
exec

With just a single result in COMPREPLY, readline is happy to complete the rest of the word exec for us.

Registering Custom Completion Functions

Now that we know what it’s doing, let’s use Bash’s extended debugging option to find out where this _rbenv function came from:

$ shopt -s extdebug && declare -F _rbenv && shopt -u extdebug
_rbenv 1 /usr/local/Cellar/rbenv/0.4.0/libexec/../completions/rbenv.bash

If we look in this rbenv.bash file, we’ll see:

$ cat /usr/local/Cellar/rbenv/0.4.0/libexec/../completions/rbenv.bash
_rbenv() {
  COMPREPLY=()
  local word="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    COMPREPLY=( $(compgen -W "$(rbenv commands)" -- "$word") )
  else
    local words=("${COMP_WORDS[@]}")
    unset words[0]
    unset words[$COMP_CWORD]
    local completions=$(rbenv completions "${words[@]}")
    COMPREPLY=( $(compgen -W "$completions" -- "$word") )
  fi
}
complete -F _rbenv rbenv

We’ve already seen all of this! This file simply declares a new function and then registers a corresponding completion specification using complete. For this completion to be available, this file only needs to be sourced at some point. I haven’t dug into how rbenv does it, but I suspect that something in the eval "$(rbenv init -)" line included in our Bash profile ends up sourcing that completion script.
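To try this end to end, you can register a completion of your own. Here's a minimal sketch for a hypothetical deploy command that takes an environment name as its first argument (the command and environment names are invented):

_deploy() {
  local word="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    # offer environment names for the first argument
    COMPREPLY=( $(compgen -W "staging production demo" -- "$word") )
  else
    # fall back to filename completion for everything else
    COMPREPLY=( $(compgen -f -- "$word") )
  fi
}
complete -F _deploy deploy

Source that from a shell (or your Bash profile) and deploy <TAB><TAB> will offer the three environment names.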

Parting Thoughts

Readline

The unsung hero of Bash’s programmable completion is really the readline library. This library is responsible for turning your <TAB> key-presses into calls to compspecs, as well as displaying or completing the resulting options those compspecs provide.

Some functionality of the readline library is configurable. One interesting option that can be set tells readline to immediately display ambiguous options after just one <TAB> key-press instead of two. With this option set, our above examples would look a little different. For example:

$ rbenv h<TAB><TAB>
help hooks

would only need to be:

$ rbenv h<TAB>
help hooks

If this sounds appealing, put the following in your ~/.inputrc:

set show-all-if-ambiguous on

To find out about other readline variables we could set in our ~/.inputrc (and to see their current values), we can use the Bash built-in command bind, with a -v flag.

$ bind -v
set bind-tty-special-chars on
set blink-matching-paren on
set byte-oriented off
set completion-ignore-case off
set convert-meta off
set disable-completion off
set enable-keypad off
set expand-tilde off
set history-preserve-point off
set horizontal-scroll-mode off
set input-meta on
set mark-directories on
set mark-modified-lines off
set mark-symlinked-directories off
set match-hidden-files on
set meta-flag on
set output-meta on
set page-completions on
set prefer-visible-bell on
set print-completions-horizontally off
set show-all-if-ambiguous off
set show-all-if-unmodified off
set visible-stats off
set bell-style audible
set comment-begin #
set completion-query-items 100
set editing-mode emacs
set keymap emacs

For more information, consult the relevant Bash info page node:

$ info -n '(bash)Readline Init File Syntax'

More on Completion

Larger completion scripts often contain multiple compspecs and several helpers. One convention I’ve seen several times is to name the helper functions with two leading underscores. If you find you need to write a large amount of completion logic in Bash, these conventions may be helpful to follow. As we’ve already seen, it’s also possible to handle some, most, or even all of the completion logic in other languages using external commands.

There is a package available from Homebrew called bash-completion that contains a great number of completion scripts for common commands. After installation, it also prompts the user to configure their Bash profile to source all of these scripts. They all live in a bash_completion.d directory under $(brew --prefix)/etc and can be good reading. A similar package should also be available for Linux (and probably originated there).
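For reference, the line the formula suggested adding to your Bash profile at the time looked something like this (check the caveats printed by brew info bash-completion for the current wording):

if [ -f "$(brew --prefix)/etc/bash_completion" ]; then
  . "$(brew --prefix)/etc/bash_completion"
fi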

Speaking of similar features for different platforms, I should also mention that while this post focuses specifically on the programmable completion feature of the Bash shell, other shells have similar functionality. If you’re interested in learning about completion for zsh or fish, please see the links at the end of this post.

Further Reading

This is only the tip of the iceberg of what’s possible with Bash programmable completion. I hope that walking through a couple of examples has helped demystify what happens when tab completion magically provides custom options to commands. For further reading, see the links below.


Bash Completion, Part 1: Using Tab Completion

One of the most useful features I learned when I first started working with Linux was the “tab completion” feature of Bash. This feature automatically completes unambiguous commands and paths when a user presses the <TAB> key. I’ll provide some examples to illustrate the utility of this feature.

Using Tab Completion

Completing Paths

I can open Terminal.app, and at the prompt ($), I can type:

$ open ~/Des<TAB>

This will automatically be completed to:

$ open ~/Desktop/

At this point, I can also use tab completion to get a list of ambiguous completion options, given what I’ve already entered. Here I have to press <TAB> twice.

$ open ~/Desktop/<TAB><TAB>

Will show me:

$ open ~/Desktop/
.DS_Store   .localized  hacker.jpg  rug/        wallpapers/
$ open ~/Desktop/

(I keep my desktop clean by periodically sweeping everything under the rug/ directory.)

Completing Commands

This completion feature can also be used to complete commands.
For example, if I type:

$ op<TAB><TAB>

I’ll see:

$ op
opam              opam-switch-eval  opensnoop
opam-admin        open              openssl
opam-installer    opendiff          opl2ofm
$ op

Or if I type:

$ ope<TAB>

I’ll see:

$ open

Learning Shell Commands with Tab Completion

This is useful for learning one’s way around a shell because it includes all the commands in the $PATH. When I first learned to use Bash and Linux, I used to tab-complete all the available commands starting with different letters of the alphabet. Then I’d pick those that sounded interesting, use which to find out where they were located, and use man to read about them.

For example, I might ask myself, what is opensnoop?

$ which opensnoop
/usr/bin/opensnoop

Well, it’s located in /usr/bin, so it probably shipped with OS X rather than being something I installed with Homebrew, since those commands end up in /usr/local/bin. I wonder what it does?

$ man opensnoop

This brings up the manual page, which tells me, among other things, that opensnoop is a command to “snoop file opens as they occur.” I also learn that it “Uses DTrace.” (If reading these manual pages or “manpages” is new to you, you can use the arrow keys to scroll up and down and press ‘q’ to quit when you’re done.)

Sometimes when I tried to open the manual page for a command, I was brought to a manual page for Bash’s own shell built-ins. This manpage was somewhat informative, but it didn’t really tell me much about how to use the command. I later learned that Bash has a help command that gives a brief overview of each built-in command. There’s also much more information available in Bash’s info documentation.
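For example, type will tell you whether a name is a builtin, and help gives the short usage summary:

$ type cd
cd is a shell builtin
$ help cd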

You may find command line interfaces opaque at first, but there is often helpful documentation available (without resorting to Google) if you know how to access it. Tab completion was an important first step for me when learning how to access traditional UNIX documentation.

Come back tomorrow, when I’ll explain programmable completion in Bash.


sudo, pipelines, and complex commands with quotes

We've all run into problems like this:

$ echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied

The command fails because the target file is only writeable by root. The fix seems obvious and easy:

$ sudo echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
-bash: /proc/sys/vm/dirty_writeback_centisecs: Permission denied

Huh? It still fails. What gives? The reason it fails is that the redirection is set up by your own (non-root) shell before sudo ever runs the command. The solution is to perform the redirection inside a shell that is itself running under sudo. There are several ways to do this:

echo 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs' | sudo sh
sudo sh -c 'echo 12000 > /proc/sys/vm/dirty_writeback_centisecs'

This is fine for simple commands, but what if you have a complex command that already includes quotes and shell meta-characters?

Here's what I use for that:

sudo su <<\EOF
echo 12000 > /proc/sys/vm/dirty_writeback_centisecs
EOF

Note that the backslash before EOF is important to ensure meta-characters are not expanded.

Finally, here's an example of a command for which I needed to use this technique:

sudo sh << \EOF
perl -n -e '
use strict;
use warnings;
if (/^([^=]*=)([^\$]*)(.*)/) {
  my $pre = $1;
  my $path = $2;
  my $post = $3;
  (my $newpath = $path) =~ s/usr/usr\/local/;
  $newpath =~ s/://g;
  print "$pre$newpath:$path$postn"
}
else {
  print
}
' < /opt/rh/ruby193/enable > /opt/rh/ruby193/enable.new
EOF

Conditionally running cron tasks based on arbitrary conditions

Volcane recently asked in ##infra-talk on Freenode if anyone knew of "some little tool that can be used in a cronjob for example to noop the real task if say load avg is high or similar?"

I came up with the idea of using Nagios plugins. So, for example, to check the load average before running a task:

/usr/lib64/nagios/plugins/check_load -w 0.7,0.6,0.5 -c 0.9,0.8,0.7 >/dev/null && echo "Run the task here"

Substitute the values used for the -w and -c args as appropriate, or use a different plugin for different conditions.
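Putting it together, a crontab entry along these lines would run the task every ten minutes only when the load is acceptable (the schedule, plugin path, and task are placeholders):

*/10 * * * * /usr/lib64/nagios/plugins/check_load -w 0.7,0.6,0.5 -c 0.9,0.8,0.7 >/dev/null && /usr/local/bin/some-heavy-task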

Shellshock – CVEs, Patches, Updates, & Other Resources

First announced almost a month ago, Shellshock continues to endanger un-patched web servers and Linux devices. So what is it? How can you tell if you’re vulnerable? And how can it be addressed?

What Is Shellshock?

Shellshock is a vulnerability in the bash software program. Bash is a shell, installed to Linux and other operating systems in the Unix family. A shell is a software component that is deeply integrated into the operating system, which is what makes this vulnerability so insidious.

The Shellshock vulnerability is a bug in the parser. It was first introduced more than 20 years ago when a feature to allow exporting functions was added. The danger is that an attacker who could control the content of an environment variable could potentially execute arbitrary code on a vulnerable system. Remote code execution (RCE) vulnerabilities (also called “arbitrary code execution” vulnerabilities) are among the most dangerous. Paired with privilege escalation vulnerabilities or poor security practices (e.g. allowing web servers to run as privileged users), unaddressed arbitrary code execution vulnerabilities can lead to the complete takeover of vulnerable systems.

An unfortunately large number of arbitrary code execution vulnerabilities exist in modern software, most of them caused by bugs in code dealing with memory management. (As an aside, this is one reason many systems programmers are excited by new languages like Rust that provide more safety for memory management.) Bugs with memory management (like stack overflows, underflows, or poor bounds-checking) can be exploited by skilled attackers, but they aren’t usually exploitable with code as trivial as this:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

That’s the proof of concept for CVE-2014-6271, the initial Shellshock vulnerability. It fits in a blog post, and with a bit of experience writing shell scripts, it’s easy to see how it operates and how echo vulnerable could be replaced by a malicious payload. In other words, it’s easily exploited by a large number of potential attackers, even those with few resources at their disposal. The ease with which it can be exploited, combined with the fact that it allows for arbitrary code execution and the pervasiveness of bash on modern Unix systems, means that Shellshock is a Very Big Deal.


Checking for Shellshock

Use the bashcheck test script on GitHub.
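For a quick manual check of just the original CVE-2014-6271 bug (bashcheck covers the later CVEs as well), the proof of concept shown earlier doubles as a test. On a patched bash it prints only the test string; a vulnerable bash prints "vulnerable" first:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
this is a test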

Known Shellshock Vectors

Shellshocker – a Repository of “Shellshock” Proof of Concept Code

Shell Shock Exploitation Vectors by Daniel Fox Franke

Bugs (CVEs)

There are currently four published CVEs for recently discovered bash vulnerabilities, along with two that are still embargoed.

  • CVE-2014-6271 – Original bug reported by Stéphane Chazelas.
  • CVE-2014-7169 – "Incomplete fix for CVE-2014-6271." PoC by Tavis Ormandy (@taviso); posted to Twitter.
  • CVE-2014-7186 – From RedHat: "It was discovered that the fixed-sized redir_stack could be forced to overflow in the Bash parser, resulting in memory corruption, and possibly leading to arbitrary code execution when evaluating untrusted input that would not otherwise be run as code."
  • CVE-2014-7187 – “An off-by-one error was discovered in the way Bash was handling deeply nested flow control constructs. Depending on the layout of the .bss segment, this could allow arbitrary execution of code that would not otherwise be executed by Bash.”
  • CVE-2014-6277 – Reported by Michal Zalewski (lcamtuf) of Google. The prefix-suffix patch does not fix this underlying issue, but reportedly makes it inaccessible to a remote attacker.
  • CVE-2014-6278 – “Sixth bug” reportedly very easy to exploit if only the first CVE-2014-6271 patch is applied. Reported by Michal Zalewski (lcamtuf) of Google. The prefix-suffix patch does not fix this underlying issue, but reportedly makes it inaccessible to a remote attacker.

Shellshock Source Patches

Official – “Upstream” / Chet Ramey

Available from the GNU Project Archive for 2.05b through 4.3 (3.2 and 4.3 patches called out below).

Other – Vendors, 3rd-party (an incomplete list)

  • “Florian’s prefix-suffix patch”
    • Accepted upstream as bash32-054, bash43-027, etc.
    • The upstream version may have compatibility issues that still need to be resolved.
  • “Christos’ patch”
    • Disables the feature except when a flag is given; breaks backwards compatibility.
    • Adopted by both FreeBSD and NetBSD.
  • RedHat patch for CVE-2014-7186 & CVE-2014-7187

Vendor Updates

Ubuntu

CentOS (RedHat)

Debian

Apple product security was notified of the issue by Chet Ramey days in advance of the CVE-2014-6271 public disclosure. Apple issued a “safe by default” statement:

With OS X, systems are safe by default and not exposed to remote exploits of bash unless users configure advanced UNIX services. We are working to quickly provide a software update for our advanced UNIX users.

Manually applying patches to Bash for OS X – Apple posts source code for open source software they distribute. You can apply official upstream patches to this source. Here is one guide for this approach.

Other Vendor Identifiers

Other Shellshock Mitigations

Firewall Signature Block

One approach to mitigating the issue is to block/drop all traffic that contains the exploit signature '() {', e.g.

iptables -A INPUT -m string --algo bm --hex-string '|28 29 20 7B|' -j DROP

But RedHat notes that this “is a weak workaround, as an attacker could easily send one or two characters per packet, which would avoid matching this signature check. It may, in conjunction with logging, provide an overview of automated attempts at exploiting this vulnerability.”

There are several other OS-specific mitigation techniques listed on that page, too.

Binary Patching(?!)

As a really hacky way to patch for CVE-2014-6271, it may be possible to edit the /bin/bash binary directly to break function importing and prevent the feature from being exploited. (For example: schneier.com/blog/archives/2014/09/nasty_vulnerabi.html#c6679473.) An interesting approach, at least.

Prevention

The best way to prevent a vulnerability like this from being exploited on your systems is to enable automatic security updates.

Updated: The first version of this post indicated that

 


Simple Remote Pairing with wemux

Background

Atomic Object is opening an office in Detroit. As part of the preparation for this new venture, I have been looking at ways to simplify remote pairing. I was happy to find out about a new project called wemux.

Wemux is a script that simplifies the management of shared tmux sessions.

wemux

Wemux seeks to facilitate two main usage patterns for shared tmux sessions:

  • Direct user to user connections
  • Multiple users connecting to a shared central server*

In each case, wemux has a concept of one user being the "host" and others being "clients".

Each wemux "server" is created on its own socket.

Common usage patterns are captured by wemux’s three main modes:

  • Mirror mode (client is connected in read-only, non-interactive mode)
    • this may be good for live demos, or some forms of remote pairing
    • NB: You should not rely on this being ‘secure’ — you are still giving another user access to your session.
  • Pair mode (client is connected interactively, but shares a cursor/window focus with the host.)
    • this is probably best for most forms of pairing where users may take turns driving, but always view the same screen
  • Rogue mode.
    • This mode allows clients to connect to a shared session, but interact with different windows than the host

*This scenario is likely to be less common, but wemux does include advanced configuration options to enable multi-host capabilities

Installation

Brew

  • It’s not in brew yet, but there is an open pull request.
  • In the meantime, you can use the formula like this:
    • brew install https://github.com/downloads/zolrath/wemux/wemux.rb

Manual installation

For those who are not using Homebrew, there are instructions for manual installation.

Configuration

  • The homebrew formula automatically adds the current user to the list of users allowed to "host" sessions.
  • If you’d like to have additional users as "hosts" (or make other configuration changes), running wemux conf will open the configuration file with your $EDITOR
  • More information on configuration can be found in the wiki
  • Adding #(wemux status_users) to your status-right or status-left in ~/.tmux.conf will let you see who is currently connected to the wemux server. This is one of my favorite features. Names with an [m] are connected in mirror (read-only) mode.

Use

Here are some of the basic commands to get started.

Host

  • Host a session
    • wemux start
  • See who’s connected
    • wemux users

Client

  • Attach in ‘mirror’ mode
    • wemux mirror
  • Attach in ‘pair’ mode
    • wemux pair
  • Attach in ‘rogue’ mode
    • wemux rogue

Example

If you tend to pair with the same person on a regular basis, you can streamline the process by creating a user for them on your local machine, adding their public SSH key to authorized_keys, and putting wemux pair; exit in their ~/.bash_profile. Then, whenever they connect to your machine, they automatically join your wemux server in pair mode.
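A rough sketch of that setup (the username, key file, and paths are placeholders):

# on your machine, as the dedicated pairing user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat /tmp/partner_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
echo 'wemux pair; exit' >> ~/.bash_profile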
