Home

linux

Disable All The Caps

If you're like me and absolutely abhor the Caps Lock key, you've probably figured out some way to replace it with a more suitable function. I myself have settled on the following command to make it a duplicate Ctrl key:

$ setxkbmap -option ctrl:nocaps

This works great when placed in ~/.xinitrc and run as X starts, but what about USB keyboards which are plugged in later? Perhaps you're pair-programming, or moving your laptop between work and home. This happens frequently enough that I thought it'd be nice to use a udev rule to trigger the command auto-magically whenever a keyboard is plugged in.

The setup is fairly simple in the end, but I found enough minor traps that I thought it was appropriate to document things once I got it working.

It has come to my attention that configuring this via xorg.conf.d actually does affect hot-plugged keyboards.

/etc/X11/xorg.conf.d/10-keyboard.conf

Section "InputClass"
  Identifier "Keyboard Defaults"
  MatchIsKeyboard "yes"
  Option "XkbOptions" "ctrl:nocaps"
EndSection

While this renders the rest of this post fairly pointless, it is a much cleaner approach.

Script

You can't just place the setxkbmap command directly in a udev rule (that'd be too easy!) since you'll need enough decoration that a one-liner gets a bit cumbersome. Instead, create a simple script to add this decoration; then we can call it from the udev rule.

Create the file wherever you like, just note the full path since it will be needed later:

~/.bin/fix-caps

#!/bin/bash
(
  sleep 1
  DISPLAY=:0.0 setxkbmap -option ctrl:nocaps
) &

And make it executable:

$ chmod +x ~/.bin/fix-caps

Important things to note:

  1. We sleep 1 in order to give time for udev to finish initializing the keyboard before we attempt to tweak things.
  2. We set the DISPLAY environment variable since the context in which the udev rule will trigger has no knowledge of X (also, the :0.0 value is an assumption; you may need to tweak it).
  3. We background the whole command with & so that the script returns control back to udev immediately while we (wait a second and) do our thing in the background.

Rule

Now that we have a single callable script, we just need to run it (as our normal user) when a particular event occurs.

/etc/udev/rules.d/99-usb-keyboards.rules

SUBSYSTEM=="input", ACTION=="add", RUN+="/bin/su patrick -c /home/patrick/.bin/fix-caps"

Be sure to change the places I'm using my username (patrick) to yours. I had considered putting the su in the script itself, but eventually decided I might use it outside of udev when I'm already a normal user. The additional line-noise in the udev rule is the better trade-off to me.

And again, a few things to note:

  1. I don't get any more specific than the subsystem and action. I don't care that this runs more often than actually needed.
  2. We need to use the full path to su, since udev has no $PATH.

Testing

There's no need to reload anything (that happens automatically). To execute a dry run via udevadm test, you'll need the path to an input device. This can be copied out of dmesg from when one was connected or you could take an educated guess.
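
If you need help finding one, the kernel log and udevadm itself can point you at a path (the event number below is just an example; substitute your own device):

$ dmesg | grep -i input
$ udevadm info -q path -n /dev/input/event3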

Once that's known, execute:

# udevadm test --action=add /dev/path/to/whatever/input0
...
...
run: '/bin/su patrick -c /home/patrick/.bin/fix-caps'
...

As long as you see the run bit towards the bottom, you should be all set. At this point, you could unplug and re-plug your keyboard, or tell udev to re-process events for currently plugged in devices:

# udevadm trigger

This command doesn't need a device path (though I think you can give it one); without it, it triggers events for all devices.
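
If re-triggering everything feels heavy-handed, udevadm trigger also accepts match options to narrow the scope to something like this (check udevadm(8) for the exact flags):

# udevadm trigger --subsystem-match=input --action=add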

published on 06 May 2013, tagged with arch linux udev

Hard Mode

Recently, while watching Corey Haines and Aaron Patterson pair-program, I heard Mr. Haines mention vim's "hard mode". Apparently, this is when you disable the motion commands h, j, k, and l.

It's absurd how great this exercise is for increasing your knowledge of vim. There are so many better ways to do everything. Just like complete novices might map the arrow keys to Nop to force learning hjkl, mapping the hjkl keys to Nop forces you to learn all these other ways to move around and edit parts of the file.

The real philosophical shift is thinking in Text Objects rather than Lines and Characters. Words are things, sentences are things, method definitions are things, and these can all be manipulated or navigated through as such.

While you probably can't fully internalize this concept without going through the exercise yourself, I would like to share a few of the very first "better ways" I've been finding while restricted in this way.

Search

Imagine my cursor is a ways down the document, and I need to change the above header in some way. I'm staring at "Search"; I know I want my cursor there. I used to just tap k or maybe a few 10ks with a j or two. What was I thinking?

?Se

And I'm there. In this case, the capital "S" made this word rare enough that I didn't have to type very much of it. Recognizing the relative frequency of words or characters can be a useful skill for quicker navigation. Drew Neil, author of Practical Vim, calls this "thinking like a Scrabble player".

Use the Ex, Luke

Another thing I didn't realize I do a lot is move to some far away line to copy it, only to come right back to paste it. Really? I'm going to type a bunch of js only to then type the exact same number of ks?

You could use search to get to the far away line then double-backtick to jump back, or you could do this:

:2,7co .

This takes lines 2 to 7 and copies them to here. Not only is this fewer keystrokes (a number which grows in proportion to the distance between here and there), but I'd argue it also keeps your focus better.

You can actually cut out a lot of unnecessary motion using commands like this:

:20   " go to line 20
:20d  " delete line 20
:2,7d " delete lines 2 through 7

In any of these commands . can be used to mean the current line. If you really get frustrated, you could use :.+1 and :.-1 to move like j and k -- but I wouldn't recommend it.

Finding Character

It's times like these that I try to find a good first concept. Something that's going to be useful enough to get me further along the habit-building path, but simple enough that I don't have to remember too much.

First, know that 0 puts you at the start of the line. This gives you a common reference to move from so you only have to think in one direction (for now). Second, know that f and t go to a letter (so fa to go to the next "a" in the line). The difference is that t goes till the character, stopping with the cursor just before it, while f puts the cursor right on top. You can then use ; to repeat the last search, moving a-by-a along the line.

Once you've gotten the hang of this, the capital versions, F and T do the same thing but backwards. , is the key to repeat the last backwards search, but so many people (including me) map that to Leader or LocalLeader that it's difficult to rely on. I haven't found a good solution to this, since the only other convention I know of is the default \ which I can rarely type consistently.

There's a bit of strategy here. It's true of most motions, but it's most recognizable with f. You have two choices in approach: pick the letter that you want to be at (no matter what letter it is) and use ; to repeat the last f or t until it gets you there (regardless of how many keystrokes that takes), or choose a letter that appears first in the line (knowing that it will only take one stroke to get there) but which only gets you near your goal. These are the two extremes; finding the best middle ground (lowest overall keystrokes) for any given scenario is something worth mastering.

Word-wise

In addition to finding by character we can start to think in words. Again, we're making it easy by always starting from 0. Given that, just use w to move word by word with the cursor on the front of each word or e to move word by word but with the cursor on the end of each word. Eventually, I'll attempt to internalize the same commands in the other direction: b and ge.

All of these have capital versions (W, B, E, gE) which have the same behavior but work on WORDS not words.

The exact rules about words vs WORDs aren't worth memorizing. WORDs are basically just a higher level of abstraction. For example, <foo-bar> is 5 words but it's only one WORD.

Conclusion

So far, I've gotten myself to consistently use a number of new vim tricks:

  1. Use search to get where you want
  2. Use Ex commands to manipulate text not near the cursor
  3. Move by word, not by character

There's still plenty to learn, but I've found that just these few simple ideas make me effective enough that I'm sticking with it and not just giving up in frustration.

published on 16 Mar 2013, tagged with vim linux

Dzen

Here's for a small change of pace...

I'd like to talk about a tool I've all but forgotten I'm even using (and that's a compliment to its stability and unobtrusiveness).

dzen is a great little application from the folks at suckless. It's one of those do one thing and do it well types of tools. It's probably not useful at all for anyone with a bloated --ahem, excuse me-- featureful desktop environment or window manager (or both).

In my case, I'm using just XMonad with its beautiful simplicity. This means, of course, that there's no out-of-the box... anything.

I've already covered some of this from an XMonad perspective, so this post is more about dzen's general usefulness.

Volume

First up, a small visual notification when I adjust my volume:

ossvol screenshot 

It fades in (implicitly thanks to xcompmgr) for just a second when I adjust my volume and gives me that nice, unobtrusive indication of the volume level.

The actual volume adjustment can be done in many alsa or oss specific ways; for my implementation, just see the script as it is live. Completely separate of that, however, we can just use dzen to show the notification:

level=$(get_it_from_alsa_or_oss)

# we use a fifo to buffer the repeated commands that are common with 
# volume adjustment
pipe='/tmp/volpipe'

# define some arguments passed to dzen to determine size and color.
dzen_args=( -tw 200 -h 25 -x 50 -y 50 -bg '#101010' )

# similarly for gdbar
gdbar_args=( -w 180 -h 7 -fg '#606060' -bg '#404040' )

# spawn dzen reading from the pipe (unless it's in mid-action already).
if [[ ! -e "$pipe" ]]; then
  mkfifo "$pipe"
  (dzen2 "${dzen_args[@]}" < "$pipe"; rm -f "$pipe") &
fi

# send the text to the fifo (and eventually to dzen). oss reports 
# something like "15.5" on a scale from 0 to 25 so we strip the decimals 
# and send gdbar an optional "upper limit" argument
(echo ${level/.*/} 25 | gdbar "${gdbar_args[@]}"; sleep 1) >> "$pipe"

Pretty easy, and about as light-weight as you can get.
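
For the curious, here's one purely illustrative way the get_it_from_alsa_or_oss placeholder could be filled in using alsa's amixer -- note this reports a 0-100 percentage, so you'd hand gdbar 100 rather than 25 as the upper limit:

# grab the Master playback volume as a bare 0-100 number
level=$(amixer get Master | grep -o -m 1 '[0-9]\+%' | tr -d '%')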

Status bar

Little known fact: you can use the ubiquitous conky to feed a simple statusbar via dzen. This means you can also use dzen escapes in your TEXT block to do cool things:

dzen screenshot 

My statusbar packs in a handful of small "features", the most interesting of which (clickable areas) is described below.

And here's the conkyrc to achieve it:

background no
out_to_console yes
out_to_x no
override_utf8_locale yes
update_interval 1
total_run_times 0
mpd_host 192.168.0.5
mpd_port 6600

TEXT
[ ^ca(1, mpc toggle)${mpd_status}^ca()

  ${if_mpd_playing}- ${mpd_elapsed}/${mpd_length}$endif ]

  ^fg(\#909090)^ca(1, mpc next)${mpd_title}^ca()^fg() by

  ^fg(\#909090)${mpd_artist}^fg() from

  ^fg(\#909090)${mpd_album}^fg()

  Cpu: ^fg(\#909090)${cpu}%^fg()

  Mem: ^fg(\#909090)${memperc}%^fg()

  Net: ^fg(\#909090)${downspeedf eth0} / ${upspeedf eth0}^fg()

  ${time %a %b %d %H:%M}

Line breaks added for clarity.

The most interesting part is the clickable areas: ^ca( ... )some text^ca() defines an area of "some text" that can be clicked. The two arguments inside the first parens are "which mouse button" and "what command to run". Pretty simple and damn convenient.

Then all you've got to do is call this from your startup script:

$ conky -c ~/path/to/that | dzen2 -p -other -args

The -p option just means "persist" so the dzen will never close.

Wrap-up

These were just two examples of uses for a simple "pipe some text in and see it" GUI toolkit -- there are plenty of others.

This echoes one of the great things about open-source: something like this is so small, so simple, it could never have survived marketing meetings, planning sessions or cost-benefit analyses -- but here it is, and I find it oh-so-very-useful.

published on 29 Apr 2012, tagged with arch bash linux

Git Submodule Config

Git submodules are pretty great. They allow you to have nested git repositories so that modular parts of your app can exist as separate repos but be worked with as one file tree. Another benefit is that when submodules are pushed to github they appear simply as links to the repos they represent.

If you're not familiar, go ahead and google then come back -- how submodules work overall is not the point of this post.

One of the ways I use submodules is to take modular pieces of my dotfiles repo and separate them out into single purpose, independently clonable repos for oh-my-zsh, vim and screen. A level down, inside the vim submodule itself, I use additional submodules in accordance with tpope's awesome pathogen plugin to bundle the various vim plugins I use. At both of these levels there exist submodules of which I am the author and an active developer.

When I work on these submodules, I like to do so from within the parent repo (vs independently in some other directory). This is especially important in vim so that I can test out my changes immediately. If I didn't do this, I would have to hack on the submodule, commit, push, go into the vim repo's copy and pull -- all before seeing the effects (Bret Victor would not be very happy with that workflow).

What this means is the submodule must be added with a pushable remote. And since I like to push using ssh keys and not enter my github credentials each time, I use the git@github url for that. Problem is, when someone wants to clone my repo (that's what it's there for), they're unable to recursively clone those submodules because they don't have access to them using that url. For that to work, I would've had to have added the submodules using the https protocol which allows anonymous checkouts.

As it turns out, due to the unexpected (but perfectly reasonable) behavior of a git submodule add command, I can actually have my cake and eat it too.

You see, when you do a git submodule add <url> <directory>, it writes that url to .gitmodules. This is the file and url that's used when you clone this repo anywhere else and init the submodules within. But this is not the url that's used when you actually try to push or pull from within the submodules!

In addition to .gitmodules, the url of the remote also gets written into the .git/config of the submodule as the origin remote (this is just normal clone behavior). This is the url that's used for push/pull from within the submodule. If you think about it, it makes perfect sense: you're in a valid git repo; when executing a push, you wouldn't expect it to use anything but the remote that was defined and stored in your .git/config.

In some versions of git, I find that a submodule's .git is actually a file pointing to a .git/modules/name/ directory in the parent repo.

Finally, the url/directory mapping for the submodule also gets written into the parent repo's .git/config. What purpose does that serve? If you figure it out, let me know...
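
For what it's worth, that copy in the parent's .git/config seems to be what git submodule init and update consult, meaning you can override a submodule's url locally without touching .gitmodules. It's inspectable like any other config entry (the key below assumes the submodule's name matches its path, as in the example that follows):

$ git config --get submodule.some/dir.url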

So (however unlikely this is) if you find yourself in the same situation as I, this is how you do that:

$ git submodule add https://github.com/you/repo some/dir
$ git commit -m 'added submodule repo'
$ cd some/dir
$ git remote set-url origin git@github.com:you/repo

Now anyone who clones (recursively) will get the anonymous checkout version (as defined in .gitmodules), but the origin remote in the local submodule has been changed to the git@github version and is pushable using ssh keys.

I recently discovered that this can be solved much more elegantly by adding the following to ~/.gitconfig:

[url "git@github.com:pbrisbin/"]
  pushInsteadOf = "https://github.com/pbrisbin/"

Now whenever git encounters the anonymous http remote, it'll silently use the ssh-based url. Aces.

published on 27 Apr 2012, tagged with git linux

Dont Do That

I use Arch Linux for a number of reasons. Mainly, it's transparent and doesn't hold your hand. You're given simple, powerful tools and along with that comes the ability to shoot yourself in the foot. This extends to the community where we can and should help those newer than ourselves to manage this responsibility intelligently, but without holding their hand or taking any of that power away through obfuscation.

The Problem

There's always been the potential for a particular command to break your system:

$ pacman -Sy foo

What this command literally means is, "update the local index of available packages and install the package foo". Misguided users assume this is the correct way to ensure you receive the latest version of foo available. While it's true that it is one way, it's not the correct way. Moreover, using this command can easily break your system.

Let's walk through an example to illustrate the problem. Say our user has gimp installed, which depends on libpng>=1.0, and the libpng currently on the system is version 1.0. They run pacman -Sy firefox; the freshly-synced index contains a firefox that needs libpng>=1.2, so pacman happily upgrades libpng to 1.2 along with it.

There's nothing here to tell pacman to update gimp, since libpng 1.2 is >= 1.0, which still meets gimp's dependency constraints.

However, our user's gimp binary is actually linked directly to /usr/lib/libpng.so.1.0 and is now broken. Sadface.

In this example, the outcome is a broken gimp. However, if the shared dependency were instead something like readline and the broken package something like bash, you might be left with an unusable system requiring a rescue disk or reinstall. This, of course, led to a lot of unhappy users.

The Solution

There are a few options to avoid this, the two most viable being:

  1. Instruct users to not execute -Sy foo unless they know how foo and its dependencies will affect their system.
  2. Instruct Arch maintainers to use a hard constraint in these cases, so firefox and gimp should depend on libpng==1.0

If we went with option two, the user, upon running pacman -Sy firefox would've gotten an error for unresolvable dependencies stating that gimp requires libpng==1.0.

Going this route might seem attractive (especially to users) but it causes a number of repository management headaches dealing with exact version constraints on so many heavily depended-upon packages. The potential headache to the maintainers far outweighed the level of effort required to educate users on the pitfalls of -Sy.

So, option one it is.

The Wrong Advice

It was decided (using the term loosely) to tell anyone and everyone to always, no matter what, when they want to install foo, execute:

$ pacman -Syu foo

I argue that this advice is so opposite to The Arch Way, that it's downright evil.

What this command really says is, "update your system and install foo". Sure, that's no big deal: it's not harmful, it may or may not be quick, and it ensures you don't run into the trouble we've just described.

Coincidentally, this is also the correct way to ensure you get the absolute latest version of foo -- if and only if foo had a new version released since your last system update.

My issue is not that it doesn't work. My issue is not that it's incorrect advice to those with that specific intention. My issue is that, nine times out of ten, that's not the user's intention. They simply want to install foo.

You're now telling someone to run a command that does more than what they intended. It does more than is required. It's often given out as advice with no explanation and no caveats. "Oh, you want to install something? -Syu foo is how you do that..." No, it really isn't.

You've now wasted network resources, computational resources, the user's time and you've taught them that the command to install foo is -Syu foo. Simplicity and transparency aside, that's just lying.

If you've been given this advice, I'm sorry. You've been done a disservice.

The Correct Advice

To update your system:

$ pacman -Syu

To install foo:

$ pacman -S foo

To update your system and install foo:

$ pacman -Syu foo

Simple, transparent, no breakage. That's the advice you give out.

Sure, by all means, if your true intention is to upgrade the system and install foo, you should absolutely -Syu foo but then, and only then, does that command make any sense.

</rant>

published on 24 Mar 2012, tagged with arch linux pacman

Test Driven Development

With my recent job shift, I've found myself in a much more sophisticated environment than I'm used to with respect to Software Engineering.

At my last position, there wasn't much existing work in the X++ realm; we were breaking new ground, and no one cared about elegance -- if you got the thing working, more power to you.

Here, it's slightly different.

People here are working in a sane, documented, open-source world; and they're good. Everyone is acutely aware of what's good design and what's not. There's a focus on elegant code, industry standards, solid OOP principles, and most importantly, we practice Test Driven Development.

I'm completely new to this method for development, and I gotta say, it's quite nice.

Now, I'm not going to say that this is the be-all-end-all of development styles (I'm a functional, strictly-typed, compiler-checked code guy at heart), but I do find it quite interesting -- and effective.

So why not do a write-up on it?

Test Framework

The prerequisite for doing anything in TDD is a good test framework. Luckily, ruby is pretty strong in this area. The way it works is the following:

You subclass Test::Unit::TestCase and define methods that start with test_, in which you execute system logic and make assertions about the results; then you run that class.

Ruby looks for those methods named as test_whatever and runs them "as tests". Running a method as a test means that errors and failures (any of your assert methods returning false) will be logged and displayed at the end as part of the "test report".

All of these test classes can be run automatically by a build-bot and (depending on your test coverage) you get good visibility into what's working and what's not.

This is super convenient and empowering in its own right. In a dynamic language like ruby, tests are the only way you have any level of confidence that your most recent code change doesn't blow up in production.

So now that you've got this ability to write and run tests against your code base, here's a wacky idea, write the tests first.

Test Driven

It's amazing what this approach does to the design process.

I've always been the type that just starts coding. I'm completely comfortable throwing out 6 hours worth of code and starting over. I know my "first draft" isn't going to be right (though it will be useful). I whole-heartedly believe in refactorings, etc. But most importantly, I need to code to sketch things out. It's how I've always worked.

TDD is sort of the same thing. You do a "rough sketch" of the functionality you'll add simply by writing tests that enforce that functionality.

You think of this opaque object -- a black box. You don't know how it does what it does, but you're trying to test it doing it.

This automatically gives you an end-user perspective. You now focus solely on the interface, the input and the output.

This is a wise position to design from.

You also tend to design small self-contained pieces of functionality. Methods that don't care about state, return the same output for a given input, and generally do one simple thing. Of course, you do this because these are the easiest kind of methods to test.

So, out of sheer laziness, you design a cohesive, easy to use, and completely simple interface, an API.

Now you just have to "plumb it up". Hack until the tests pass, and you're done. That might be an over-simplification, but it's not off by much...

Come to think of it, this is exactly the type of design Haskell favors. With gratuitous use of undefined, the super-high-level logic of a Haskell program can be written out with named functions to "do the heavy lifting". If you make these functions simple enough and give them descriptive enough names, they practically write themselves.

So that's TDD (at least my take on it). So far, I like it.

published on 02 Oct 2011, tagged with linux ruby work tdd

Mairix

Mairix is a nice little utility for indexing and searching your emails. Its smooth integration with mutt is also a plus.

I used to use native mutt search, but it's pretty slow. So far, mairix is giving me a good approximation of the google-powered search available in the web interface and it's damn fast.

As I go through this setup, keep in mind the example config files are designed to work with my overall mutt setup, one which is described in two other posts here and here.

If you need a little context, check out my mutt-config repo which has a fully functioning ~/.mutt, example files for the other apps involved (offlineimap, msmtprc, and now mairix), and any scripts the setup needs.

Mairix

First, of course, install mairix:

pacman -S mairix

Then, setup a ~/.mairixrc which defines where your mails are and their type as well as where to store the results and index. Here's an example:

# where you keep your mail
base=/home/<you>/Mail

# colon separated list of maildirs to index.
#
# I have two accounts each in their own subfolder. the '...' means there 
# are subdirectories to search as well; it's like saying GMail/* and 
# GMX/*
maildir=GMail...:GMX...

# I omit gmail's archive folder so as to prevent duplicate hits
omit=GMail/all_mail

# search results will be copied to base/<this folder> for viewing in 
# mutt
mfolder=mfolder

# and the path to the index itself
database=/home/<you>/Mail/.mairix_database

With that in place, run mairix once to build the initial index. This first run will be slower but in my tests, subsequent rebuilds were almost instant.

In situations like these, I'll usually add a verbose flag so I can be sure things are working as expected.

At this point, you could actually do some searching right from the commandline:

mairix some search term # search and populate mfolder
mutt -f mfolder         # open it in mutt

This wasn't the usage I was after, however; I'm typically already in mutt when I want to search my mails.

Mutt

My original script for this purpose was pretty simple. It prompted for the search term and ran it. The problem was you then needed a separate keybind to actually view the results.

Thankfully, Scott commented and provided a more advanced script which got around this issue. Many thanks to Scott and whoever wrote the script in the first place.

This version does some manual tty trickery to build its own prompt, read your input, execute the search and open the results. All from just one keybind.

I merged the two scripts together into what you see below. The main changes from Scott's version are the following:

  1. I kept my clear, purge, search method rather than relying on cron to keep the index up to date.
  2. I removed the append-search functionality; not my use-case.
  3. I removed the <return> from the ^G trap; it was getting executed by mutt and opening the first message in the inbox after a cancelled search.
  4. I fixed it so that backspace works properly in the prompt.

So, here it is:

#!/bin/bash

read_from_config() {
  local key="$1" config="$HOME/.mairixrc"

  sed '/^'"$key"'=\([^ ]*\) *.*$/!d; s//\1/g' "$config"
}

read -r base    < <(read_from_config 'base')
read -r mfolder < <(read_from_config 'mfolder')

# prevent rm / further down...
[[ -z "$base$mfolder" ]] && exit 1

searchdir="$base/$mfolder"

set -f                          # disable globbing.
exec < /dev/tty 3>&1 > /dev/tty # restore stdin/stdout to the terminal,
                                # fd 3 goes to mutt's backticks.
saved_tty_settings=$(stty -g)   # save tty settings before modifying
                                # them

# trap <Ctrl-G> to cancel search
trap '
  printf "\r"; tput ed; tput rc
  printf "/" >&3
  stty "$saved_tty_settings"
  exit
' INT TERM

# put the terminal in cooked mode. Set eof to <return> so that pressing
# <return> doesn't move the cursor to the next line. Disable <Ctrl-Z>
stty icanon echo -ctlecho crterase eof '^M' intr '^G' susp ''

set $(stty size) # retrieve the size of the screen
tput sc          # save cursor position
tput cup "$1" 0  # go to last line of the screen
tput ed          # clear and write prompt
tput sgr0
printf 'Mairix search for: '

# read from the terminal. We can't use "read" because, there won't be
# any NL in the input as <return> is eof.
search=$(dd count=1 2>/dev/null)

# clear the folder and execute a fresh search
( rm -rf "$searchdir"
  mairix -p
  mairix $search
) &>/dev/null

# fix the terminal
printf '\r'; tput ed; tput rc
stty "$saved_tty_settings"

# to be executed by mutt when we return
printf "<change-folder-readonly>=$mfolder<return>" >&3

A non-trivial macro provides the interface to the script. It sets a variable called my_cmd to the output of the script, which should be the actual change-folder command, then executes it.

macro generic ,s "<enter-command>set my_cmd = \`$HOME/.mutt/msearch\`<return><enter-command>push \$my_cmd<return>" "search messages"

I've gotten used to "comma-keybinds" from setting that as my localleader in vim. It's nice because it very rarely conflicts with anything existing and it's quite fast to type.

One downside which I've been unable to fix (and believe me, I've tried!) is that if you press ^G to cancel a search but you've typed a few letters into the prompt, mutt will read those letters as commands (via the push) and execute them.

The only thing I could do is prefix those characters with something. I've decided to use /. That makes mutt see it as a normal search, which you can execute or ^G again to cancel. Annoying, but better than mutt flailing around executing random commands...

I haven't had the time yet to learn all the tricks, but here are some of the more useful-looking searches from man mairix:

Useful searches

   t:word                             Match word in the To: header.

   c:word                             Match word in the Cc: header.

   f:word                             Match word in the From: header.

   s:word                             Match word in the Subject: header.

   m:word                             Match word in the Message-ID: 
                                      header.

   b:word                             Match word in the message body 
                                      (text or html!)

   d:[start-datespec]-[end-datespec]  Match messages with Date: headers 
                                      lying in the specific range.

Multiple body parts may be grouped together, if a match in any of them 
is sought.

   tc:word  Match word in either the To: or Cc: headers (or both).

   bs:word  Match word in either the Subject: header or the message body 
            (or both).

   The a: search pattern is an abbreviation for tcf:; i.e. match the 
   word in the To:, Cc: or From: headers.  ("a" stands for "address" in 
   this case.)

The "word" argument to the search strings can take various forms.

   ~word        Match messages not containing the word.  

   word1,word2  This matches if both the words are matched in the 
                specified message part.

   word1/word2  This matches if either of the words are matched in the 
                specified message part.

   substring=   Match any word containing substring as a substring

   substring=N  Match any word containing substring, allowing up to N 
                errors in the match.

   ^substring=  Match any word containing substring as a substring, with 
                the requirement that substring occurs at the beginning 
                of the matched word.

Happy searching!

published on 03 Jul 2011, tagged with arch bash linux mutt

Pacprune

A fairly long time ago, there was a thread on the Arch forums about clearing your pacman cache.

Pacman's normal -Sc will remove all versions of any packages that are no longer installed and -Scc will clear that plus old versions of packages that are still installed.

The poster wanted a way to run -Scc but also keep the last 1 or 2 versions back from installed. There was no support for this in pacman directly, so a bit of a bash-off ensued.

I wrote a pretty crappy script which I posted there; it lay around in my ~/.bin collecting dust for a while, but I recently rewrote it. I'm pretty proud of the result for its effectiveness and succinctness, so I think it deserves a little discussion.

The methodology of the two versions is the same, but this new version leans heavily on good ol' unix shell-scripting principles to provide the exact same functionality in way less code, memory, and time.

Approach

The first approach discussed on the thread was to parse filenames for package and version, then do a little sort-grepping to figure out which versions to keep and which versions to discard. This method is fast, but provably inaccurate if a package name contains numbers on the end.

I went a different way.

For each package, pull the .PKGINFO file out of the archive, parse the pkgname and pkgver variables out of it, then do the same sort-grepping to figure out what to discard.

My first implementation of this algorithm was really bad. I'd parse and write pkgname|pkgversion to a file in /tmp. Then I'd grep unique package names using -m to return at most the number of versions you want to keep (of each package) and store that in another file. I'd then walk those files and rm the packages.

Ick.

Needs moar unix

The aforementioned ugliness, plus some configuration and error checking weighed in at 162 lines of code, used two files, and was dirt slow. I decided to re-attack the problem with a unix mindset.

In a nutshell: write small units that do one thing and communicate via simple text streams.

The first unit this script needs is a parser. It should accept a list of packages (relative file paths) on stdin, parse and output two space-separated values on stdout: name and path. The path will be needed by the next unit down the line, so we need to pass it through.

parse() {
  local package opt

  while read -r package; do
    case "$package" in
      *gz) opt='-qxzf' ;;
      *xz) opt='-qxJf' ;;
    esac
    
    bsdtar -O $opt "$package" .PKGINFO |\
        awk -v package="$package" '/^pkgname/ { printf("%s %s\n", $3, package) }'
  done
}

11 lines and damn fast. Thank god for bsdtar's -q option. It tells the extraction to stop after finding the file I've requested. Since the .PKGINFO file is usually the first thing in the archive, we barely do any work to get the values.

It's also done completely in RAM by piping tar directly to awk.

Step two would be the actual pruning. Accept that same space-separated list on stdin and for any package versions beyond the ones we want to keep (the 3 most recent), echo the full path to the package file on stdout.

prune() {
  local name package last_seen='' num_seen=0

  while read -r name package; do
    [[ -n "$last_seen" ]] && [[ "$last_seen" != "$name" ]] && num_seen=0

    num_seen=$((num_seen+1))

    # print full path
    [[ $num_seen -gt $versions_to_keep ]] && readlink -f "$package"

    last_seen="$name"
  done
}

Just watch the list go by and count the number of packages for each name. I'm ensuring that the list is coming in reverse sorted already, so once we see the number of packages we want to keep, any same-named packages after that should be printed.

So simple.
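
To make that concrete, suppose versions_to_keep is 3 and parse hands prune the following (made-up names and versions, already reverse sorted):

foo ./foo-2.1-1-x86_64.pkg.tar.xz   # 1st foo seen: kept
foo ./foo-2.0-1-x86_64.pkg.tar.xz   # 2nd: kept
foo ./foo-1.9-1-x86_64.pkg.tar.xz   # 3rd: kept
foo ./foo-1.8-1-x86_64.pkg.tar.xz   # 4th: full path printed for removal
bar ./bar-1.0-2-x86_64.pkg.tar.xz   # new name, counter resets: kept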

This function can get away with being simple because it doesn't take into account what's actually installed on your system. It just keeps the most recent 3 versions of each unique package in the cache. Therefore, to do a full clean, run pacman -Sc first to remove all versions of uninstalled software. Then use this script to clear all but installed plus the two previous versions. This assumes the highest version in the cache is the installed version which may or may not be true in all cases.

All that's left is to make that reverse sorted list and pipe it through.

find ./ -maxdepth 1 -type f -name '*.pkg.tar.[gx]z' | LC_ALL='C' sort -r | parse | prune

So the whole script (new version) weighs in at ~30 lines (with whitespace) and I claim it is exactly as feature-rich as the first version.

I know what you're saying: there's no definition of the cache, no optional safe-list vs actual-removing behavior, there's no removing at all!

Well, you're just not thinking unix.

$ cd /some/cache/of/packages
$ pacprune                  # as a normal user, just print the list that 
                            # should be removed -- totally safe.
$ pacprune | sudo xargs rm  # then do the actual removal

You're free to get as fancy as you'd like too...

$ archiveit() { sudo mv "$@" ~/pkg_archive/; }
$ pacprune | xargs archiveit

And the only configuration is setting the versions_to_keep variable at the top of the script.

The script can be found in my scripts repo.

published on 11 Jun 2011, tagged with linux bash arch

Forks and Children

While writing a small learning exercise in C, I came across a nifty little concept. The task itself was a common one: I wanted to spawn a subprocess to the background while letting the main process continue to loop.

Many thanks go to falconindy who spoon fed me quite a bit as I was wrapping my head around all of this knowledge I'm now shamelessly presenting as my own.

In most languages you have some facility to group code into a logical unit (a haskell function or a bash subshell) then pass that unit to a command which forks it off into the background for you (haskell's forkProcess or bash's simple &).

Forking C

C takes a far different, but I'd say more elegant, approach. C provides a function, fork() which returns a pid_t.

The beauty of fork() is in its simplicity. All it does is create an exact copy of your program in its current state in memory. That's it.

int main() {
    pid_t pid;

    pid = fork();

    // ...
}

Guess what, now you've got two copies of your running program, both sitting at the exact spot where pid is being assigned the output of fork().

In the copy that was the original (the parent), pid will be the process id of the other copy (the child). And in that child copy, pid will be assigned 0. That's it; the full extent of fork().

So how do we use this?

Well, let's say you've got a program (as I did) which should sit and loop forever. When some event happens, we want to take some asynchronous action (in my case throw up a dzen notification).

This is the perfect time to use fork(). We'll let the main thread run continuously, and fork off a child to do its thing when the triggering event occurs.

Here's a simplified version:

#define _GNU_SOURCE

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    int ret;
    pid_t pid;

    while (1) {
       /* wait for the "event" */
       ret = some_blocking_process();

       if (ret) {
           /* fork it! */
           pid = fork();
           
           if (pid == 0) {
               /* we are the child, take action! */
               some_action(ret);
               exit(EXIT_SUCCESS);
           }
           
           /* and the parent loops forever... */
       }
    }
}

So as you can see, the main program waits until some_blocking_process returns an int. If that int is nonzero, we consider that "the event" so we fork to create a copy of ourselves. If pid is zero, we know we are the child process so we take some_action and then simply exit. The parent process will skip that if statement, loop again and wait for some_blocking_process to signal the next event.

Zombie kids

So I may have lied to you slightly about the simplicity of this approach. The above is all well and good -- it is simple -- but I ran into a small snag while working with my little learner's app...

Zombies.

Turns out, when a child process exits, it reports its exit status to its parent; like any good child should. The parent is notified of this via a SIGCHLD signal.

The parent then knows if all or some of its spawned children finished successfully or not. This is important if you've got some dependent logic or simply want to log that fact.

In my case, I couldn't care less. Succeed, fail, whatever. I'm done with you kid -- go away.

Double turns out, if the parent neglects to act on that signal and collect the child's status, the child can remain a zombie.

I think this is poor form. I mean, come on, a negligent parent is no reason to make a process wander around aimlessly as a zombie until the next reboot.

Ok, ok, enough with the metaphor. Bottom line -- all you need to do to prevent this is install a simple signal handler which will read (and ignore) the status of the child process in response to said signal.

Here's our same example, but this time with a simple handler added:

#define _GNU_SOURCE

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void sigchld_handler(int signum) {
    /* this just silences a compiler warning you might get since we 
     * discard the signum parameter that is passed in */
    (void) signum;

    /* the actual handling of the signal... */
    while (waitpid(-1, NULL, WNOHANG) > 0);
}

int main() {
    int ret;
    pid_t pid;

    struct sigaction sig_child;

    sig_child.sa_handler = &sigchld_handler;
    sigemptyset(&sig_child.sa_mask);
    sig_child.sa_flags = 0;
    sigaction(SIGCHLD, &sig_child, NULL);

    while (1) {
       ret = some_blocking_process();

       if (ret) {
           pid = fork();
           
           if (pid == 0) {
               some_action(ret);
               exit(EXIT_SUCCESS);
           }
       }
    }
}

That's it, no more zombies.

I noticed in the source for dzen2 that they use a double-fork approach which also prevents zombies -- with no need for signal handlers (yay KISS!):

if (fork() == 0) {
    if (fork() == 0) {
        //
        // child logic...
        //

        exit(EXIT_SUCCESS);
    }

    exit(EXIT_SUCCESS);
}

wait(0);

//
// continue parent logic...
//

I like this approach better. It works because the intermediate child exits immediately (which the parent wait()s for), while the grandchild, now orphaned, is re-parented to init, which reaps it whenever it finishes.

published on 02 Jun 2011, tagged with c linux

Notes

For me, any sort of general purpose note taking and/or keeping solution needs to meet only a few requirements:

  1. Noting something has to be quick and easy (in a terminal and scriptable)
  2. Notes should be available from anywhere... tothecloud!
  3. Notes should be searchable

Now, just to clarify -- I'm not talking about classroom notes, those things go in note-books. I'm talking about short little blurbs of information I would like to keep and reference at a later time.

Though, I suppose this could work for classroom notes too...

I'm also not talking about reminders, those are the stuff of calendars, not note-keeping apps.

So what's my solution? What else, Gmail!

Gmail

Setting up gmail as a note keeper/searcher is simple. A note is an email from me with the prefix "Note - " in the subject line. Therefore, it's easy to setup a label and a filter to funnel note-mails into a defined folder:

From:    me@whatever.com
Subject: ^Note - 

I also add "Skip inbox" and "Mark as read" as part of the rule.

I know the gmail filters support some level of regex and/or globbing, but I don't know where it ends. I'm hoping that the ^ anchor is supported but I'm not positive.

Requirements 2 and 3 done.

Mutt

So if taking a note is done by just sending an email of a particular consistent format, then it's easy for me to achieve requirement 1 since I use that awesome terminal mail client mutt.

A short bash function gives us uber-simple note taking abilities by handling the boilerplate of a note-mail:

noteit() {
  _have mutt || return 1 # see my dotfiles re: _have

  local subject="$*" message

  [[ -z "$subject" ]] && { echo 'no subject.'; return 1; }

  echo -e "^D to save, ^C to cancel\nNote:\n"

  message="$(cat)"

  [[ -z "$message" ]] && { echo 'no message.'; return 1; }

  # send message
  echo "$message" | mutt -s "Note - $subject" -- pbrisbin@gmail.com

  echo -e "\nnoted.\n"
}

You could probably also streamline note taking by leveraging mutt's -H option. I'll leave reading that man page snippet as an exercise to the reader.

And here's how that might work out in the day-to-day:

//blue/0/~/ noteit test note
^D to save, ^C to cancel
Note:

This is a test note.

< I pressed ^D here >
noted.

//blue/0/~/
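
Back to that mutt -H idea for a second: a minimal sketch of how a draft-based approach might look (the draft contents and address are purely illustrative):

$ cat > /tmp/note-draft <<EOF
To: you@example.com
Subject: Note - some subject

The body of the note goes here.
EOF
$ mutt -H /tmp/note-draft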

You could also use sendmail, mailx, msmtp or whatever other CLI mail solution you want for this.

And there it is, ready to be indexed by the almighty google:

Mutt notes shot 

With a few mutt macros, I think this could get pretty featureful without a lot of code.

Let me know in the comments if there are any other simple or out-of-the-box note-keeping solutions you know of.

Oh, and before anyone mentions it -- no, you can't take notes without internet when you're using this approach. I'm ok with that, I understand if you're not.

published on 26 Mar 2011, tagged with bash gmail linux mutt

Android Receiver

Android notifier is a great little app I just recently found on the marketplace. What it does is use your wifi network or a bluetooth connection to send out a broadcast when certain events happen on your phone.

The idea is to have a companion application running on your computer to listen for the event and pass along the message via some notification system: Growl on Windows/Mac and (I think) gnome-dbus on Linux.

This means your phone can be charging in the other room while you're at your computer and you'll get a nice notification on your desktop when someone's calling you or you get a text.

This is great and all but totally not worth the Gnome or bluetooth library dependencies to get going on Linux. After a brief look at the project's wiki, however, I knew I could do something simpler.

I was able to put together two scripts I had already in place to achieve a dead-simple android-receiver on my desktop. The first was a script called ncom which used netcat to send commands across a network and execute them on another machine. The second, bashnotify, was something I was playing around with to get pop-up notifications on track changes in mpd.

Netcat

From the project wiki, I found out that the application will send a broadcast packet on port 10600 in a specific format. After some playing around with test messages, I was able to put together the following, which successfully echoed back the message text in a terminal.

while read -d $'\0'; do
  echo $REPLY
done < <(netcat -z -u -l -p 10600 localhost)

The incoming message doesn't end with a newline but rather a null character. That's why using read -d $'\0' and netcat's -z option is required. I also found out that I wasn't getting anything from TCP even though the android app should be broadcasting with both protocols. Using UDP via the -u option seems stable so far.

Dzen

I took the dzen code present in bashnotify and tweaked it a little bit so that the notification temporarily covers my entire status bar and shows the message text:

handle_dzen() {
  local message="$*"

  # dzen settings
  local pipe='/tmp/android-receiver.fifo'
  local delay=4
  local x_offset=0
  local y_offset=0
  local height=17
  local font='Verdana-8'
  local foreground='#ffffba'
  local background='#303030'

  if [[ ! -e "$pipe" ]]; then
    mkfifo "$pipe"
    (dzen2 -ta l -h $height -x $x_offset -y $y_offset \
        -fn "$font" -bg $background -fg $foreground < "$pipe"; rm -f "$pipe") &
  fi

  # todo: make this prettier
  (echo "$message"; sleep $delay) >> "$pipe"
}

And there you go.
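
Glue the two pieces together and the receiver is more or less a loop like this, feeding each broadcast to the handler:

while read -d $'\0'; do
  handle_dzen "$REPLY"
done < <(netcat -z -u -l -p 10600 localhost)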

The end product is no longer moving my charger away from its normal spot because I'm expecting a call. Instead, I'll see this:

Android Receiver Screenshot 

The source for this script can be found in my github.

In my continued attempts to learn some C, I decided to combine the netcat and message parsing functions of the above into a small C app.

The end result is a nice little program that you can find here. It handles binding to the port, parsing and formatting the message, then handing it off as the first argument to a handler script which is in charge of actually displaying the notification to the user.

To match this functionality, I've culled the original script down to only the handle_dzen() function and renamed it to dzen-handler such that it can be used by any application that wants to toss up a brief notification. This script is also available in that android-receiver repo.

published on 11 Dec 2010, tagged with android bash c linux

Vim Registers

When you use an extremely powerful text editor such as vi, vim, or emacs, there are often times where you'll discover a feature or command that literally changes the way you write text. It's not a very large leap to say that, for a developer, that can be life-changing.

I've recently made one such discovery via vim's :help registers command. So I'd like to boil it down a bit and share it here.

Pasting in Vim

Often times when idling in #archlinux, someone will ask about pasting in vim.

Answers typically range from :set paste, to S-<insert>, etc, but one staple response is "*p and "+p.

These commands will take the contents of your X11 selection (currently highlighted text) and clipboard (text copied with C-c) respectively and dump it into your buffer.

I've heard these commands several times but I could never remember them. The reason is because I didn't really know what they did. I mean, obviously I knew that they pasted into vim from said locations, but I didn't know what those three command characters meant. Today, I decided to find out.

Registers in Vim

Vim has a number of what are called registers; they're just dumping grounds for text. Vim uses these to store different snippets of text for different reasons in very auto-magical ways. For instance, it's how vim remembers the text from your recent deletes and yanks.

If you understand how vim is storing this text and how to read and write from these registers yourself, it can really help your work flow.

Here's the list reproduced from :help registers:

  1. The unnamed register ""
  2. 10 numbered registers "0 to "9
  3. The small delete register "-
  4. 26 named registers "a to "z or "A to "Z
  5. four read-only registers ":, "., "% and "#
  6. the expression register "=
  7. The selection and drop registers "*,"+ and "~
  8. The black hole register "_
  9. Last search pattern register "/

Editing commands (think d, y, and p) can be prefixed with a register to tell vim where to read or write the text you're working with.

The unnamed register is the default and holds the most recently deleted or yanked text; it's what's called upon when you just type p without specifying a register.

Now, have you ever dded something, dded something else, but then realized you really want to p that first thing you deleted?

Up until now, I would u back two steps and re-order my deletes so the text I wanted to p was the one most recently dded.

I should've known that vim had a much more powerful way to deal with this. Registers 1 through 9 hold that list of deleted text (register 0 holds your most recent yank). In my case I could've simply done "2p to put not the most recently dded text (which is "1p, ""p, or just p), but the text one step before that.

The 26 named registers are meant to be used purposely by you to store snippets as you work. Calling them as a vs A simply means replace or append.

Ever wonder how the . command actually works in vim? Yeah, me either. Anyway, there's the read-only register ". which holds your most recently inserted text. When your last change was an insertion, typing . just tells vim to call it up and execute it.

And finally, the explanation for "*p and "+p, the selection and drop registers. They work just like any other and store the contents of the X11 selection and clipboard. That way, calling "*p simply dumps the register into your buffer.

What's more, you can use Ctrl-v to highlight a visual block, then type "+y to put that text into your clipboard to go paste it somewhere.

Another neat trick is the last search pattern. You can actually write to that register with what's known as a let-@ command. That way, if you're using hlsearch, you can tell vim to highlight words without actually searching for them (and possibly moving your cursor).

:let @/ = "the"

I'll let you :help yourself regarding the other registers.

published on 07 Nov 2010, tagged with linux vim

MapToggle

This snippet, when added to one's ~/.vimrc, allows the easy toggling of commonly used options (i.e. things like hls or wrap) with a single keypress.

First, you'll have to define the actual function:

function! MapToggle(key, opt)
  let cmd = ':set '.a:opt.'! \| set '.a:opt."?\<CR>"
  exec 'nnoremap '.a:key.' '.cmd
  exec 'inoremap '.a:key." \<C-O>".cmd
endfunction

command! -nargs=+ MapToggle call MapToggle(<f-args>)

Then, just map keys to that function:

MapToggle <F4> foldenable
MapToggle <F5> number
MapToggle <F6> spell
MapToggle <F7> paste
MapToggle <F8> hlsearch
MapToggle <F9> wrap

You'll even get a nice notification in your vim command prompt when you toggle the setting.

I believe I got this from rson's vimrc, but I'm not sure; If I did, thanks rson.

published on 09 May 2010, tagged with vim linux

HTPC

I've recently finished work on an HTPC. The goal was to run a media center WM on a box that looked appropriate in the cabinet by my TV, driven by a remote. That much I've done; all that's left is tweaking the remote functions and adding to the collection.

Hardware

The first thing I got was the case; I wanted one with a built in remote and a low enough profile to fit in my TV cabinet and not look out of place.

Enter Lian Li's PC-C39. Let me say, it's a great case. It's small, quiet, and looks great. One problem: the remote is garbage.

It doesn't work more than 2 feet away from the sensor. The remote is RF (another flaw IMO) and the sensor is actually over-shielded by the case itself. Solution? Slide open the top of the case (even just an inch), your range will increase tenfold. I did this for a while but wanted something better -- more on that later; anyone reading this should buy the PC-C37B which is the same case but sans the trash remote (and $50 bucks).

Next, I stopped in at MicroCenter to pick up the internal components. I knew I wanted to spend five to six hundred bucks and get a decently powered machine; one that could keep up with whatever HD content I wanted to run without getting too hot.

Here's what I ended up with:

After the usual mail-in rebates, it'll be just over $550. You could definitely achieve a great system for less, but I wanted something more high-end (and I had just gotten my tax return), so I probably spent a little more than I had to.

So now that I've got a fully functioning box, it's time to fix my remote situation.

Enter Logitech's Harmony 300. I originally bought this thinking it was primarily a PC Media Center remote and would come with its own USB IR receiver. It did not. I was pissed.

In the end, I'm really glad I made that mistake because the remote's awesome. You configure it by plugging it into a computer and using an in-browser control panel (luckily it's mac+firefox compatible), just add devices by Manufacturer number, and that's it.

To get it working with the computer was a bit more involved, but not much.

First, I had to get my own USB IR Receiver. Luckily, Amazon had a Dell RC6 receiver for like $18 bucks -- sold. Then it was just a matter of adding its MFR# to the Harmony setup and starting lirc.

If you're on Arch, it's like this:

pacman -S lirc
cp /usr/share/lirc/remotes/mceusb/lircd.conf.mceusb /etc/lirc/lircd.conf
/etc/rc.d/lircd start

You can test it by typing irw and pressing some buttons.

You'll want to add lirc_mceusb2 to MODULES and lircd to DAEMONS in /etc/rc.conf.
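
In rc.conf terms that looks something like this (with your existing entries in place of the ellipses):

MODULES=(... lirc_mceusb2)
DAEMONS=(... lircd)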

If you find on reboot that your remote's not working, check whether /dev/lirc0 exists (it needs to); if it doesn't, try a different USB port -- that solved it for me.

Now I've got just one remote that runs my whole living room. The girlfriend was pleased. There was much rejoicing.

Software

I went with XBMC. Once installed, I set up an autologin by editing /etc/inittab (assuming xbmc is your default username):

## Only one of the following two lines can be uncommented!
# Boot to console
#id:3:initdefault:
# Boot to X11
id:5:initdefault:

# snip...

x:5:respawn:/bin/su xbmc -l -c "/bin/bash --login -c startx >/dev/null 2>&1"

And then adding the following to that user's ~/.xinitrc:

exec /usr/bin/ck-launch-session /usr/bin/dbus-launch --exit-with-session /usr/bin/xbmc --standalone -fs

Recently, the above method prevented shutdown/suspend from working because ck-launch-session was confused. Switching to a mingetty approach solved it -- I'll update the details here as soon as I have some time.

I share my media from the main desktop PC using samba, so I just added the shares in XBMC.

Once added, XBMC scans your sources using some filename regexps that caught pretty much everything I threw at it. It downloaded plot summaries and fanart for all my movies and TV shows, and it of course uses your music collection's tags (which I'm a bit OCD about anyway).

The result is an instantly full and beautiful library. Here are some screenshots:

HTPC Shot  HTPC Shot  HTPC Shot  HTPC Shot 

Remote configuration

XBMC found and used a hotplugged keyboard, the case's built-in RF remote, and my lirc controlled mceusb remote all without issue right out of the box using default button mappings. I was impressed.

If you'd like to customize your remote behavior, there are two files involved: ~/.xbmc/userdata/Lircmap.xml and ~/.xbmc/userdata/keymaps/remote.xml. Defaults can be found in /opt/xbmc/system on an Arch install; just copy them and start editing.

Lircmap.xml translates the device/button combination (as reported by irw) into an XBMC button string. Through this file, you can make it so that, for example, the OK button on the mceusb remote registers as "select". Then, in remote.xml, you can actually map select to an XBMC action, like "Select".
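To make that concrete, here's a heavily trimmed sketch of what the two files might contain; treat it as illustrative rather than a drop-in config, and note that the device name should match whatever irw reports for your remote:

<!-- ~/.xbmc/userdata/Lircmap.xml -->
<lircmap>
  <remote device="mceusb">
    <select>OK</select>
  </remote>
</lircmap>

<!-- ~/.xbmc/userdata/keymaps/remote.xml -->
<keymap>
  <global>
    <remote>
      <select>Select</select>
    </remote>
  </global>
</keymap>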

It's all explained here and here.

The last little issue I noticed was that after playing a DVD, I couldn't eject. This was fixed by adding the following line to /etc/sysctl.conf:

dev.cdrom.lock = 0

A reboot will pick up the change (or apply it immediately with sysctl -p).

With the update to the 2.6.34 kernel, alsa now has support for audio over hdmi with my chipset (Asus/Nvidia GF210).

It wasn't exactly trivial to get it working though. Basically it took some trial and error to figure out that the audio out I needed was card 1 device 7, so plughw:1,7.
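If you want to shortcut some of that trial and error on your own hardware, aplay will list the playback devices; the card and device numbers it prints are exactly what go into plughw:<card>,<device>:

# list all playback devices known to alsa
aplay -l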

Sadly, specifying this plughw as a custom output device in XBMC's audio setup meant no dmix, which meant no crossfading (two sounds at once).

Thanks to Themaister on the arch forums though, I actually got around this quite quickly.

Save the following as /etc/asound.conf:

pcm.dmixer {
  type dmix
  ipc_key 2048
  slave {
    pcm "hw:1,7"
    period_size 512
    buffer_size 4096
    rate 48000
    format S16_LE
  }
  bindings {
    0 0
    1 1
  }
}

pcm.!default {
  type plug
  slave.pcm dmixer
}

pcm.iec958 {
  type plug
  slave.pcm dmixer
}

Reboot.

In the XBMC audio setup, specify default as the output device and iec958 as the passthrough device.
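If you want to sanity-check that dmix is actually doing its job before firing up XBMC, something like the following should play a test tone through the new default device; run two instances at once to confirm the streams mix:

# one loop of a stereo test tone through the dmix-backed default
speaker-test -D default -c 2 -l 1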

That's it!

published on 01 May 2010, tagged with arch linux home theater

Controlling MPlayer

MPlayer

MPlayer is an extremely versatile media player; I've begun to use it for absolutely any media that I'm not already piping through mpd. One day while going through my XMonad config, I decided it'd be convenient to bind my media keys to control MPlayer. I already had them bound to control volume/mpd, but I figured Meta + key combinations could be the MPlayer equivalents.

A bit of googling later and I had the solution: a fifo!

Fifos

Fifos (named pipes, so called for their first-in, first-out behavior) are special files on your system that can be used for simple inter-process communication; kind of a poor man's socket. You can play with them like this to get the idea:

# in one terminal:
mkfifo ./fifo
tail -f ./fifo

# and in some other terminal:
echo some text > ./fifo

MPlayer setup

The MPlayer manpage states that it can read commands out of a fifo by using the input flag. Combine that with the fact that MPlayer will read any flags from ~/.mplayer/config and we're 90% there.

mkfifo ~/.mplayer_fifo
vim ~/.mplayer/config

Add the following in that file:

input = file=/home/username/.mplayer_fifo

Now fire up a movie. Go to some other terminal and do the following:

echo pause > ~/.mplayer_fifo

If MPlayer didn't pause, double check the above. It works for me.

Keybinds

Now it's really up to you if you want to run these via a wrapper script, or send the commands directly from your keybind configuration. Here's an example wrapper script if you decide to go this way:

#!/bin/bash

fifo="$HOME/.mplayer_fifo"
command="$*"

echo "$command" > "$fifo" 2>/dev/null

Place it in your $PATH, chmod +x it, and bind some keys to script 'play', script 'pause', etc.

Personally, I put a simple function (of basically the above) in my xmonad.hs, then call that from the keybinds. Here's the relevant section of my config:

myKeys = [ ...

         -- Mod+ to control MPlayer
         , ("M-<XF86AudioPlay>", mPlay "pause"   ) -- play/pause mplayer
         , ("M-<XF86AudioStop>", mPlay "stop"    ) -- stop mplayer
         , ("M-<XF86AudioPrev>", mPlay "seek -10") -- seek back 10 seconds
         , ("M-<XF86AudioNext>", mPlay "seek 10" ) -- seek forward 10 seconds

         , ...
         ] 

         where

           mPlay s = spawn $ "echo " ++ s ++ " > $HOME/.mplayer_fifo"

I'm using EZConfig notation in my keybindings.

I'll leave it up to you to figure out your WM's keybind configuration or use some generic tool like xbindkeys.
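If you go the xbindkeys route, its config is just command/keysym pairs. A minimal sketch, assuming you saved the wrapper script above as mplayer-ctl somewhere in your $PATH and that Mod4 is your Meta key:

# ~/.xbindkeysrc
"mplayer-ctl pause"
    Mod4 + XF86AudioPlay

"mplayer-ctl seek 10"
    Mod4 + XF86AudioNext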

published on 08 Apr 2010, tagged with arch bash linux

Irssi

Irssi is an IRC client. If that sentence made no sense, then read no further. This post outlines my current irssi setup as I think it's quite nice and others may wish to copy it.

Screenshot

Irssi Screenshot 

Config

For the longest time I didn't really touch ~/.irssi/config except to set up auto connections etc. Then I started using awl.pl (which I'll describe in the scripts section). This meant I no longer had a use for one of the statusbars. So for the sake of completeness, here is the change I made to get the statusbar look you see in the screenshot:

statusbar = {

    # <snip>

    default = {
      window = {

        # disable the default bar containing window list
        disabled = "yes";

        # window, root
        type = "window";
        # top, bottom
        placement = "bottom";
        # number
        position = "0";
        # active, inactive, always
        visible = "active";

        # list of items in statusbar in the display order
        items = {
          barstart = { priority = "100"; };
          time = { };
          user = { };
          window = { };
          window_empty = { };
          lag = { priority = "-1"; };
          more = { priority = "-1"; alignment = "right"; };
          barend = { priority = "100"; alignment = "right"; };
          active = { };
          act = { };
        };
      };

      # <snip>

      prompt = {
        type = "root";
        placement = "bottom";
        # we want to be at the bottom always
        position = "100";
        visible = "always";
        items = {
          barstart = { priority = "100"; };
          time = { };

          user = { }; # added my current nick here b/c it was the only useful
                      # item in the disabled bar

          prompt = { priority = "-1"; };
          prompt_empty = { priority = "-1"; };
          # treated specially, this is the real input line.
          input = { priority = "10"; };
        };

      };

      # <snip>

    };
  };

My full config (sans passwords) can be downloaded here.

Theme

The theme I currently use was originally generane.theme; I've gradually hacked away at it until, at this point, it's entirely unlike that theme. I just call it pbrisbin.theme and it can be found with the above dotfiles. It's basically a really grey theme to go with my overall desktop. Messages from me are a bright-ish grey, with messages to me as bright yellow. Actions (/me stuff) are magenta and offset to the left which I really like.

Bitlbee

Bitlbee is a killer app. It basically sets up a small-footprint IRC server on your local machine, hooks into your various chat protocols (gchat, aim, facebook, twitter), and lets you /join or /query them as if they were any other #channel.

This is great for someone like me who's gotten used to /exec -o foo and other tricks that aren't possible in a normal chat client.

There are a lot of guides online for setting this up so I'm just going to list out a few facts that it took me a minute to figure out or get used to:

Scripts

And the best part, the scripts. All of these can be easily googled for so I won't provide links; the versions on my box could even be out of date anyway.

cap_sasl.pl - in an effort to streamline my dotfiles management, I was looking for ways to get plaintext passwords out of dotfiles. One such way is to use SASL for authentication to freenode. After getting the script, setup can be done via in-irssi commands as many existing how-tos outline. I got gummed up, however, because I fudged up the server name (freenode vs Freenode) when setting up sasl compared to when I had initially set up the connection...

This is why I prefer to do direct, in-file configuration. So, here are the portions of .irssi/config to support this:

servers = (
  {
    address = "irc.freenode.net";
    chatnet = "freenode";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_capath = "/etc/ssl/certs/";
    autoconnect = "yes";
  },

  ...

And place a file as ~/.irssi/sasl.auth with the following contents:

freenode    <primary nick>  <password>  DH-BLOWFISH

It's important that you use your primary nick or it won't work. For instance, I always talk as brisbin but that's just a secondary nick associated with my primary brisbin33, so I had to use brisbin33 in the sasl setup.

nm.pl - this handles random/unique nick coloring and nick alignment. Personally, I /set neat_maxlength 13.

awl.pl - the advanced window list (sometimes called adv_windowlist.pl). This gives that nice statusbar with the channel names and numbers. Channels turn bright white when active and magenta if I'm highlighted. Personally, I use /set awl_display_key "%w$N.$H$C$S" and awl_maxlines 1.

trackbar.pl - this puts a dashed mark in the buffer at the last point you viewed the conversation. I really like this script; it's simple but effective. If you hop around between windows this is a great little addition to your .irssi/scripts/autorun.

screen_away.pl - thank you rson for turning me onto this. Once I started using irssi exclusively in screen (as outlined here) this script really started coming in handy. It just auto-sets you as away when you detach your screen session and brings you back when you reattach. This means Ctrl-a d logs me off, and when I do reattach I've got all my messages waiting for me right there in window 1.

queryresume.pl - now that I'm using bitlbee as my main IM client, I'm spending a lot of time in queries. This script gives you a little bit of context by printing the last few lines from your previous query with the person you've just started a new query with.

hilightwin.pl - this script captures any text that matches your /hilight rules, whether it's nick or keyword-based. Anything you've set up as a hilight will be captured in a dedicated window. Couple this with a smart layout where your hilightwin is dedicated to the top 8 lines of your client, and you can always see who's talking at you, no matter what you're doing. Any google search for this script will not only give you the source, but also the commands required to setup the smart layout to go along with it.
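For reference, the smart layout usually boils down to a handful of in-irssi commands along these lines; the window size and hilight rules are whatever suits you:

/window new split
/window name hilight
/window size 8
/hilight -word mynick
/layout save
/save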

link_titles.pl - this is a script that I recently wrote as a learning exercise in perl. It basically watches the conversation for urls. When it finds one, it visits that page and prints the title element back to the window where the link was sent. Most actual channels I'm in will have a bot that does this, but I wanted to print titles for links sent to me in a query via gchat or aim. The source for this is on my github, hopefully more scripts will show up there soon.

If you have any ideas for a script that's not already available, please let me know in the comments. I'm looking for something perl-ish to work on.

published on 20 Mar 2010, tagged with arch irc linux

Automounting

It seems that as users (myself included) progress through using a distribution like Arch Linux, they reach certain stages. Like when you realize how amazing find -exec is. Or crossing over from god, vim is a pain in the ass! to jesus, why doesn't everyone use this?

I find one well-known stage is how can I automount my USB drives? This usually comes early on as a new Arch user ditches GNOME or KDE in favor of something lighter, something more minimalistic, something they can actually be proud to show off in the screenshot thread. Well, ditch the DE and you lose all those nifty little automagical tools, like gnome-volume-manager and the like.

So what do you do? hal should take care of it. Some ck-launch-session black magic might do the trick. Edit some *.fdi file to get it going?

No. Udev does just fine.

Udev

Udev has a little folder called /etc/udev/rules.d. In this folder are 'rules files', each named something like 10-some-crap.rules. They are processed one by one each time some udev 'event' occurs, like, say, plugging in a flashdrive.

Go google udev rules; there's a lot out there for all sorts of nifty things.

Someone smarter than I added a handful of useful rules to the Arch udev wiki page. The one I use is as follows:

# adjust this line to skip any persistent drives
# i.e. KERNEL!="sd[d-z][0-9]", ...
KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"

# Global mount options
ACTION=="add", ENV{mount_options}="relatime,users"

# Filesystem specific options
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id --label %N", ENV{dir_name}="%c"
ACTION=="add", PROGRAM!="/lib/initcpio/udev/vol_id --label %N", ENV{dir_name}="usbhd-%k"
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"
ACTION=="remove", ENV{dir_name}=="?*", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"
LABEL="media_by_label_auto_mount_end"

This file defines how udev reacts to usb drives (/dev/sda1, etc.) being added and removed. Plug in a flashdrive: if it has a label, it's mounted at /media/<label>; if not, it's mounted at /media/usbhd-sda1 (for example). umount and remove the drive, and that directory under /media is removed. It's a beautiful thing.

Automount

One problem I found with this is that it works really well. When a device is added it is mounted, period. So whenever I tried to partition a drive, as soon as the partition was initialized it would get mounted, and the partitioning tool would fail with drive is mounted.

For this reason, I had to write a script. I always have to write a script.

What this does is simply write the above rules file or remove it, effectively turning automounting on or off. So there you go: simple handling of usb flash drives with nothing but udev required.
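I won't reproduce the script here, but the idea is simple enough to sketch. The filename below is just an assumption (anything in /etc/udev/rules.d works), and you'd paste the rules shown above into the heredoc:

#!/bin/bash
# toggle usb automounting by adding or removing the udev rules file
# (needs root to write under /etc)
rules='/etc/udev/rules.d/11-media-by-label-auto-mount.rules'

if [[ -f "$rules" ]]; then
  rm "$rules" && echo 'automounting disabled'
else
  cat > "$rules" <<'EOF'
# (the auto-mount rules shown above go here)
EOF
  echo 'automounting enabled'
fi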

DVDs and CDs

Just a bit about optical media. The above won't solve any issues related to that. I'll just say this though: if I need to do anything related to CDs or DVDs, I can just reference /dev/sr0 directly. Burning images, playing DVDs, it all works just fine using /dev directly. And when I need to mount it, I'll do it manually. I think a line in fstab will get /dev/sr0 to mount to /media/dvd if that's what you're after.
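Something along these lines in /etc/fstab should do it; the mount options are just a reasonable guess, adjust to taste:

# /etc/fstab (excerpt)
/dev/sr0   /media/dvd   udf,iso9660   ro,user,noauto   0 0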

published on 12 Jan 2010, tagged with arch linux bash

Backups

This post is very out of date. The scripts which are its subject no longer exist as I now use two much simpler scripts which can be found in my scripts repo.

Backups are extremely important. In linux, with a little effort and hard drive space, one can easily come up with a fully automated backup solution to suit any needs. Here, I'd like to outline my setup. Feel free to take it and adapt it to your needs.

I'll go through what's required, how and why I do it the way I do, as well as the shortcomings of how I'm doing it.

Requirements

My main box runs on one 500G hard drive. So far, this has suited me well, even with my extensive movie and music collection. I decided I wanted a daily backup and a monthly backup, with only one copy of each, so I went out and got a 1TB hard drive, split it, and now use that for both.

All you need is space, so whether you use an internal drive like me, an external USB, or some off-site scp/rsync situation is up to you; you'll just have to modify my below script(s) to suit your setup.

How I do it

The first is a backup script that runs via cron daily and monthly. It can be downloaded from my git repo.

The script defines an array of files to include and another to exclude:

includes=( /srv/http /home/patrick /etc /usr /var /boot )
excludes=( Downloads lost+found )

It takes those directories and just rsyncs them with the backup location:

/mnt/backup/daily/
|-- boot
|-- etc
|-- http
|-- patrick
|-- usr
`-- var

/mnt/backup/monthly/
|-- boot
|-- etc
|-- http
|-- patrick
|-- usr
`-- var
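The heart of the script isn't much more than a loop over those arrays. A stripped-down sketch of the idea (the exact rsync flags in my script may well differ):

backup_dir='/mnt/backup/daily'

for dir in "${includes[@]}"; do
  # -a preserves permissions/times, --delete drops files removed from the source
  rsync -a --delete "${excludes[@]/#/--exclude=}" "$dir" "$backup_dir/"
done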

It also creates two text files: one that lists all your installed packages less those that are foreign (from the AUR) and another that lists those foreign packages.

These lists can be used to quickly reinstall everything you had installed at the time of the backup.

pacman -Qqe | grep -Fvx "$(pacman -Qqm)" > "$backup_dir/paclog"
pacman -Qqm > "$backup_dir/aurlog"

Another script I use constantly is retrieve, which will take the filenames passed on the commandline and look for them in your backups. If found, the files are retrieved and re-inserted into your live system.

This is great if you've seriously screwed up your xorg.conf (something not in git) and you want to just roll back to what you had yesterday.

The only trick to it is that it has to handle the fact that my backup stores patrick/ at top level even though it's /home/patrick/ on the live system.

retrieve is also no longer available in my git repo.

The last script that I have, I haven't had to use --knocks on wood--. This restore script is intended to be used after a crash and clean reinstall to restore your system from the directories made by my backup script.

You guessed it, restore is also no longer in the repo.

Why mine sucks

This solution works for me, but it has its shortcomings. Here are a few things to be aware of if you decide to implement something like what I have.

Not off-site, or even out-of-box.

If my apartment burns down, my backups are useless. To mitigate this, I've started taking manual copies of my monthly backup and storing them on a separate drive in a fireproof box.

Backups are not rolling

This isn't so bad for the dailies, but my monthly backup occurs every month on the first; this means if you have an issue that's more than two days old, and you happen to notice on the 2nd, you don't have a backup old enough to fix it.

Untested

I've never had to use restore, though I do use retrieve all the time. Anyone will tell you, an untested backup solution is no solution at all. Guess I'm just too lazy to hose my install to test it. Worse comes to worst, I know the backed up data is good; if my restore script fails I can always manually copy everything over. I pretty much did this last time I installed a new Arch box; as I tend to reuse configs, just grabbing them off of my main box's backups really sped up the process.

published on 03 Jan 2010, tagged with arch linux bash

Wifi Pipe

So the other day when I was using wifi-select (awesome tool) to connect to a friend's hot-spot, I realized, "hey! This would be great as an openbox pipe menu!"

I'm fairly decent in bash, and I knew both netcfg and wifi-select were written in bash, so why not rewrite it that way?

Wifi-Pipe

A simplified version of wifi-select which will scan for networks and populate an openbox right-click menu item with available networks. Displays security type and signal strength. Click on a network to connect via netcfg the same way wifi-select does it.

Zenity is used to ask for a password and notify of a bad connection. One can optionally remove the netcfg profile if the connection fails.

Requirements

The script now has its own github repo so it doesn't fall victim to bitrot. Please head there for more installation details and a copy of the source.

published on 05 Dec 2009, tagged with arch linux bash openbox

Using Two IMAP Accounts in Mutt

Mutt can be really great with multiple accounts, but it's not exactly intuitive to setup. Here I'll document how I access two Gmail accounts together in one mutt instance.

If you haven't yet seen my previous mutt post, please go read that now. I recommend using that post to get a single account setup first before coming back here. Even if you plan to jump right into a multi-account setup, this post assumes you've at least read the other one and will focus on the differences and required changes to get from there to here.

Offlineimap

To get Offlineimap syncing multiple accounts, we simply need to add additional configuration blocks to sync the second account with another local Maildir.

~/.offlineimaprc

[general]
ui = ttyui
accounts = Personal,Work

[Account Personal]
localrepository = Personal-Local
remoterepository = Personal-Remote

[Account Work]
localrepository = Work-Local
remoterepository = Work-Remote

[Repository Personal-Local]
type = Maildir
localfolders = ~/Mail/Personal

[Repository Work-Local]
type = Maildir
localfolders = ~/Mail/Work

[Repository Personal-Remote]
type = Gmail
remoteuser = username@gmail.com
remotepass = secret
realdelete = no
sslcacertfile = /etc/ssl/certs/ca-certificates.crt

[Repository Work-Remote]
type = Gmail
remoteuser = work-username@gmail.com
remotepass = secret
realdelete = no
sslcacertfile = /etc/ssl/certs/ca-certificates.crt

Obviously, if either of these accounts weren't a Gmail server, the configuration blocks would be different.

You can test your setup by running offlineimap -o to sync things once. It could take a while, but once done, you should have a nice folder structure like this:

Mail/
|-- Personal
|   |-- INBOX
|   `-- ...
`-- Work
    |-- INBOX
    `-- ...

Msmtp

Msmtp also handles multiple accounts very elegantly; we just add another account block for the second account.

~/.msmtprc

account personal
host smtp.gmail.com
port 587
protocol smtp
auth on
from username@gmail.com
user username@gmail.com
password secret
tls on
tls_nocertcheck

account work
host smtp.gmail.com
port 587
protocol smtp
auth on
from work-username@gmail.com
user work-username@gmail.com
password secret
tls on
tls_nocertcheck

account default : personal

Now we can simply call msmtp -a personal or msmtp -a work to use whichever account we want. Omitting the -a option will use the default account which we've set as personal.

Mutt

The goal with mutt is to have certain settings change when we enter certain folders. For example, when we're viewing +Personal/INBOX we want our from setting to be our personal From address and the sendmail setting should be msmtp -a personal. To provide this functionality, we're going to do the following:

  1. Place any account-specific settings in separate files
  2. Use mutt's folder-hook facility to source the proper file and set the proper settings upon entering a folder for a given account.

Here are the two account-specific files:

~/.mutt/accounts/personal

set from      = "username@gmail.com"
set sendmail  = "/usr/bin/msmtp -a personal"
set mbox      = "+Personal/archive"
set postponed = "+Personal/drafts"

color status green default

macro index D \
    "<save-message>+Personal/Trash<enter>" \
    "move message to the trash"

macro index S \
    "<save-message>+Personal/Spam<enter>"  \
        "mark message as spam"

~/.mutt/accounts/work

set from      = "work-username@gmail.com"
set sendmail  = "/usr/bin/msmtp -a work"
set mbox      = "+Work/archive"
set postponed = "+Work/drafts"

color status cyan default

macro index D \
    "<save-message>+Work/Trash<enter>" \
    "move message to the trash"

macro index S \
    "<save-message>+Work/Spam<enter>"  \
        "mark message as spam"

Notice the color line which changes the status bar depending on what account I'm "in" at any given moment.

The following settings will tell mutt to source one of these files upon entering a folder matching the given pattern, setting up all the correct settings for that account:

~/.muttrc

set spoolfile = "+Personal/INBOX"

source ~/.mutt/accounts/personal

folder-hook Personal/* source ~/.mutt/accounts/personal
folder-hook Work/*     source ~/.mutt/accounts/work

The first two lines effectively set Personal as the default account when we open mutt.

Well, that should do it. Open up mutt, change folders, send some mails, and make sure everything's working as you'd expect.

For reference, my complete and current setup can be found with my dotfiles.

published on 05 Dec 2009, tagged with linux gmail mutt

Text From CLI

This is a short but extensible script to allow text messaging (to Verizon customers) straight from the commandline.

Setup requires simply a means to send email from the commandline along with a small script to pass the message off to <number>@vtext.com.

If you already have a CLI mailing solution you can just copy the script and go ahead and change the mail command to mutt, ssmtp, mailx, or whatever you're using.

Email from CLI

I use msmtp to send mails in mutt so it was easy for me to adapt that into a CLI mailing solution.

Here's a ~/.msmtprc for gmail:

# msmtp config file

# gmail
account gmail 
host smtp.gmail.com
port 587
protocol smtp
auth on
from username@gmail.com
user username@gmail.com
password gmail_password
tls on
tls_nocertcheck

account default : gmail

Right now, as-is, it's possible for you to echo "Some text" | msmtp someone@somewhere.com and it'll email just fine. I'd like to make things a little more flexible.

By dropping a file in ~/.mailrc we can change the mail command to use whatever binary we want instead of the default /usr/bin/sendmail. It should have the following contents:

set sendmail=/usr/bin/msmtp

Now, anytime your system mails anything on your behalf, it'll use msmtp.
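A quick sanity check that plain old mail is now routing through msmtp (the address is obviously a placeholder):

echo "it works" | mail -s "test from the CLI" you@example.com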

The Script

The script started out very simply, here it is in its original form:

#!/bin/bash

if [[ $# -lt 2 ]]; then
  echo "usage: $0 [number] [some message]"
  exit 1
fi

number="$1"; shift

echo "$*" | mail "$number@vtext.com"

With this little sendtext.sh script in your back pocket, you can send yourself texts from remind, cron, rtorrent, or any other script to notify you (or other people) of whatever you want.

sendtext.sh 1234567890 'This is a test text, did it work?'

Sure did.

Now, at some point, Ghost1227 got bored again.

He took my sendtext script and ran with it. Added loads of carriers and some new option handling.

I took his update of my script and re-updated it myself. Mainly syntactical changes and minor options handling, just to tailor it to my needs.

The new version with my and ghost's changes can be downloaded from my git repo.

I also added simple phone book support. When sending a message to someone, pass -s <number> <name> and the contact will be saved to a text file. After that, you can just sendtext <name> and the most recent match out of this text file will be used. The service is saved as well (either the default or the one passed as an argument at the time of -s).

published on 05 Dec 2009, tagged with arch linux bash

Screen Tricks

Hopefully, if you're a CLI junkie, you've heard of GNU/screen. And if you've heard of it, chances are you're using it.

Screen is a terminal multiplexer. This means that you can start screen in one terminal (say, your SSH connection) and open any number of terminals inside that terminal. This lets me have mutt, ncmpcpp, and a couple of spare shells all open inside my single PuTTY window at work.

This is a great use of screen, but the benefits don't have to end there. When I'm not at work but at home, I can use screen to run applications which I don't want to end if I want to change terminals, log in and out, or even if all of X comes crashing down around me.

See, screen can detach (default binding: C-a d). Better still, it will auto-detach if the terminal it's in crashes or you logout. You can then re-attach it later, from any other ssh session, tty, or X terminal.

This is great for apps like rtorrent and irssi, it's also great for not losing any work if your ssh connection gets flaky. Just re-connect and re-attach.

So now I have a dilemma. When I'm at work, I want to start screen and get a few fresh tabs set up as I've defined in ~/.screenrc: mutt, ncmpcpp, and three shells. But at home I don't want those things to load, I instead want only rtorrent or only irssi to load up in the new screen window.

Furthermore, if rtorrent or irssi are already running in some detached screen somewhere, I don't want to create an entirely new session, I'd rather grab that one and re-attach it here.

The goal was to achieve this without changing the commands I run day to day, affecting any current keybinds, or using any overly complicated scripts.

So, how do I do this as simply and easily as possible? Environment variables.

How to do it

First we set up one main ~/.screenrc which is always sourced. Then we set up a series of "screenrc extensions", each of which loads only the desired apps into the screen session via a stanza of screen -t <name> <command> lines.

Next, we dynamically choose which "screenrc extension" to source from the main ~/.screenrc via two environment variables which are either exported from ~/.bashrc (the default) or explicitly set when running the command (the specialized cases).

So, set up a ~/.screenrc like this:

# screen config file; ~/.screenrc

# put all our main screen settings like
# term, shell, vbell, hardstatus whatever
#
# then add this:

# sources environment-specific apps
source "$SCREEN_CONF_DIR/$SCREEN_CONF"

# you can even add some tabs you'll always
# open no matter what

# then always open some terms
screen -t bash $SHELL
screen -t bash $SHELL
screen -t bash $SHELL

Now, how does screen know what "screenrc extension" to source? By setting those variables up in ~/.bashrc:

# dynamically choose which tabs load in screen
export SCREEN_CONF_DIR="$HOME/.screen/configs"
export SCREEN_CONF="main"

In a clean environment, screen will source the default ~/.screen/configs/main, which contains:

# example: screen -t [name] [command]
screen -t mail mutt
screen -t music ncmpcpp

Why is this useful? Because, now I can do something like this:

SCREEN_CONF=rtorrent screen

And screen will instead source that explicitly set ~/.screen/configs/rtorrent which yields:

# example: screen -t [name] [command]
screen -t torrents rtorrent 

Et voilà, no mutt or ncmpcpp, but rtorrent instead (same thing happens with irssi).

Oh, but it gets better! Now we'll add some aliases to ~/.bashrc to complete the whole thing:

alias irssi='SCREEN_CONF=irssi screen -S irssi -D -R irssi'
alias rtorrent='SCREEN_CONF=rtorrent screen -S rtorrent -D -R rtorrent'

Oh how beautiful, how simple, how easy. I type rtorrent, what happens?

Screen checks for any running screens with session-name "rtorrent" and re-attaches here and now. If none are found, screen opens a new screen (using the rtorrent file) and names the session "rtorrent" so we can -D -R it explicitly thereafter.

All of this happens for irssi too, and can be used for any app (or multi-app setup) you want.

Pretty KISS if I do say so.

published on 05 Dec 2009, tagged with arch linux screen bash

Mutt + Gmail + Offlineimap

Most people use Gmail. Some people like CLI mail clients. This post describes how I use Gmail in the best CLI mail client, mutt. Many people will back me up when I say it's a very good setup.

For reference, my complete and current setup can be found with my dotfiles.

Offlineimap

Step one is to setup Offlineimap to keep ~/Mail in sync with Gmail. This is a two way sync so anything moved, deleted, or sent from any IMAP-connected device or our local mutt interface will act exactly the same. This also has the benefit of storing offline, local copies of all your mails.

First, install Offlineimap and fill in an ~/.offlineimaprc like so:

[general]
ui = ttyui
accounts = Gmail

[Account Gmail]
localrepository = Gmail-Local
remoterepository = Gmail-Remote

[Repository Gmail-Local]
type = Maildir
localfolders = ~/Mail/Gmail

[Repository Gmail-Remote]
type = Gmail
remoteuser = you@gmail.com
remotepass = secret
realdelete = no
maxconnections = 3
sslcacertfile = /etc/ssl/certs/ca-certificates.crt

Test that this works by running offlineimap -o. Your first sync could take some time, but once done, you should see the folders under ~/Mail/Gmail with the proper structure.

Once you're satisfied syncing is working, we'll schedule a periodic sync via cron.

There are some tempting options offlineimap has for daemonizing itself to handle periodic syncing for you -- in my experience these don't work. Scheduling a full offlineimap run via cron is the only working setup I've been able to find.

To work around a thread-joining bug, I've landed on a wrapper script that spawns offlineimap to the background then babysits the process for up to 60 seconds. If it appears to be hung, it's killed.

#!/usr/bin/env bash

# Check every ten seconds if the process identified as $1 is still 
# running. After 5 checks (~60 seconds), kill it. Return non-zero to 
# indicate something was killed.
monitor() {
  local pid=$1 i=0

  while ps $pid &>/dev/null; do
    if (( i++ > 5 )); then
      echo "Max checks reached. Sending SIGKILL to ${pid}..." >&2
      kill -9 $pid; return 1
    fi

    sleep 10
  done

  return 0
}

read -r pid < ~/.offlineimap/pid

if ps $pid &>/dev/null; then
  echo "Process $pid already running. Exiting..." >&2
  exit 1
fi

offlineimap -o -u quiet & monitor $!

Set this script to run as frequently as you want, by adding something like the following to your crontab -- I chose to sync once every 3 minutes:

*/3 * * * * /path/to/mailrun.sh

Msmtp

Now we need a way to send mails. I like msmtp, you can also use other smtp clients. If you choose to install msmtp, the config file is at ~/.msmtprc and should look like this:

account default 
host smtp.gmail.com
port 587
protocol smtp
auth on
from user@gmail.com
user user@gmail.com
password secret
tls on
tls_nocertcheck

You can test this by executing echo "a test message" | msmtp you@gmail.com.

Mutt

Now the fun part! I don't know how many hours I've spent in the past year fine tuning my muttrc, but it'll never be done. Here are the parts required to get this setup working.

set mbox_type   = Maildir
set sendmail    = /usr/bin/msmtp

set folder      = ~/Mail
set spoolfile   = "+INBOX"
set mbox        = "+[Gmail]/All Mail"
set postponed   = "+[Gmail]/Drafts"
unset record

mailboxes +INBOX

macro index D \
    "<save-message>+[Gmail]/Trash<enter>" \
    "move message to the trash"

macro index S \
    "<save-message>+[Gmail]/Spam<enter>" \
    "mark message as spam"

The above should be enough to get a connection and start sending/receiving mail, but here are some other must-have options that make it feel a bit more like gmail:

# main options
set realname   = "Real Name"
set from       = "user@gmail.com"
set mail_check = 0
set envelope_from

unset move           # gmail does that
set delete           # don't ask, just do
unset confirmappend  # don't ask, just do!
set quit             # don't ask, just do!!
unset mark_old       # read/new is good enough for me

# sort/threading
set sort     = threads
set sort_aux = reverse-last-date-received
set sort_re

# look and feel
set pager_index_lines = 8
set pager_context     = 5
set pager_stop
set menu_scroll
set smart_wrap
set tilde
unset markers

# composing 
set fcc_attach
unset mime_forward
set forward_format = "Fwd: %s"
set include
set forward_quote

ignore *                               # first, ignore all headers
unignore from: to: cc: date: subject:  # then, show only these
hdr_order from: to: cc: date: subject: # and in this order

I've left out quite a few tweaks in the above so that those who are happy with mutt's very sane defaults aren't overwhelmed. Keep in mind, man muttrc is a great command for when you're bored.

That should do it. Hopefully this info will get you going in the right direction.

published on 05 Dec 2009, tagged with linux gmail mutt

Goodsong

If you're like me (which you're probably not...), you enjoy listening to your music with the great music playing daemon known as mpd. You also have your entire collection on shuffle.

Occasionally, I'll fall into a valley of bad music and end up hitting next far too much to get to a good song. For this reason, I wrote goodsong.

What is it?

Essentially, you press one key command to say the currently playing song is good; then press a different key to say play me a good song.

Goodsong accomplishes exactly that. It creates a playlist file which you can auto-magically add the currently playing song to with the command goodsong. Subsequently, running goodsong -p will play a random track from that same list.

Here's the --help:

usage: goodsong [ -p | -ls ]

options:
      -p,--play   play a random good song
      -ls,--list  print your list with music dir prepended

      none        note the currently playing song as good
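The script itself isn't complicated; the gist of it can be sketched with mpc in a few lines. This is a paraphrase, not the actual source, and the list location is an assumption:

#!/bin/bash
# goodsong, roughly: no args logs the current track, -p queues up a random good one
list="$HOME/.goodsongs"

if [[ "$1" == '-p' ]]; then
  song="$(shuf -n 1 "$list")"
  mpc insert "$song"   # queue it right after the current track
  mpc next             # and jump to it
else
  mpc --format '%file%' current >> "$list"
fi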

Installation

Goodsong is available in its current form in my git repo.

Usage

Using goodsong is easy. You can always just run it from CLI, but I find it's best when bound to keys. I'll leave the method for that up to you; xbindkeys is a nice WM-agnostic way to bind some keys, or you can use a WM-specific configuration to do so.

Personally, I keep Alt-g as goodsong and Alt-Shift-g as goodsong -p.

You're going to have to spend some time logging songs as "good" before the -p option becomes useful.

I recently received a patch from a reader for this script. It adds a few features which I've happily merged in.

published on 05 Dec 2009, tagged with arch bash linux

Dvdcopy

Do not use this for bad things, m'kay?

What it looks like

Dvdcopy Shot 

Usage

usage: dvdcopy [ --option(=<argument>) ] [...]

~/.dvdcopy.conf will be read first if it's found (even if --config
is passed). for syntax, see the help entry for the --config option.
commandline arguments will overrule what's defined in the config.

invalid options are ignored.

options:

  --config=<file>               read any of the below options from a
                                file, note that you must strip the
                                '--' and set any argument-less
                                options specifically to either true
                                or false

                                there is no error if <file> doesn't
                                exist

  --directory=<directory>       set the working directory, default
                                is ./dvdcopy

  --keep_files                  keep all intermediate files; note
                                that they will be removed the next
                                time dvdcopy is run regardless of
                                this option

  --device=<file>               set the reader/burner, default is
                                /dev/sr0

  --title=<number>              set the title, default is longest

  --size=<number>               set the desired output size in KB, 
                                default is 4193404

  --limit=<number>              set the number of times to attempt a
                                read/burn before giving up, default
                                is 15

  --mpeg_only                   stop after transcoding the mpeg
  --dvd_only                    stop after authoring the dvd
  --iso_only                    stop after generating the iso

  --mpeg_dir=<directory>        set a save location for the
                                intermediate mpeg file, default is
                                blank -- don't save it

  --dvd_dir=<directory>         set a save location for the
                                intermediate vob folder, default is
                                blank -- don't save it

  --iso_dir=<directory>         set a save location for the
                                intermediate iso file, default is
                                blank -- don't save it

  --mencoder_options=<options>  pass additional arbitrary arguments
                                to mencoder, multiple options should
                                be quoted and there is no validation
                                on these; you'll need to know what
                                you're doing. the options are placed
                                after '-dvd-device <device>' but
                                before all others

  --quiet                       be quiet
  --verbose                     be verbose

  --force                       disable any options validation,
                                useful if ripping from an image file

  --help                        print this

What's it do?

Pop in a standard DVD9 (~9GB) and type dvdcopy. The script will calculate the video bitrate required to create an ISO under 4.3GB (standard DVD5). It will then use mencoder to create an authorable image and burn it back to a disc playable on any standard player.
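The bitrate math itself is simple; the sketch below captures the idea, though the audio bitrate and the tool for finding the title length are assumptions, not necessarily what dvdcopy actually does:

# rough video bitrate (kbps) needed to hit a target size
target_kb=4193404   # desired output size in KB (the script's default)
length_s=7200       # title length in seconds, e.g. as reported by lsdvd
audio_kbps=192      # assume a single AC3 audio track

video_kbps=$(( (target_kb * 8 / length_s) - audio_kbps ))
echo "encode video at roughly ${video_kbps}kbps"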

Defaults are sane (IMO), but can be adjusted through the config file or the options passed at runtime (or both). I've now added a lot of cool features as described in the help.

How to get it

Install the AUR package here.

Grab the source from my git repo here.

published on 05 Dec 2009, tagged with aur arch bash linux

Downgrade

A special thanks to Kumyco for hosting the A.R.M.

Usage

The 'screenshot'

downgrade xorg-server

The following packages are available in your cache:
       1       local   xorg-server-1.7.1.901-2-x86_64.pkg.tar.gz [installed]
       2       local   xorg-server-1.7.1.901-1-x86_64.pkg.tar.gz
       3       local   xorg-server-1.7.1-1-x86_64.pkg.tar.gz
       4       local   xorg-server-1.7.0.902-1-x86_64.pkg.tar.gz
       5       local   xorg-server-1.7.0.901-1-x86_64.pkg.tar.gz
       6       local   xorg-server-1.6.3.901-1-x86_64.pkg.tar.gz
       7       local   xorg-server-1.6.3-4-x86_64.pkg.tar.gz
       8       local   xorg-server-1.6.3-3-x86_64.pkg.tar.gz
       9       local   xorg-server-1.6.3-2-x86_64.pkg.tar.gz
       10      local   xorg-server-1.6.2-1-x86_64.pkg.tar.gz
       11      local   xorg-server-1.6.1.901-3-x86_64.pkg.tar.gz
       12      local   xorg-server-1.6.1.901-1-x86_64.pkg.tar.gz

       please choose a version, type s to [s]earch A.R.M.: s

The following packages are available from the A.R.M.:
       1       extra   xorg-server-1.7.1.901-1-x86_64.pkg.tar.gz
       2       extra   xorg-server-1.7.1-1-x86_64.pkg.tar.gz

       please choose a version, type q to [q]uit: q

The help

usage: downgrade [ -p <command> ] [ -d <dir> ] [ -m <32|64> ] [ -a ] [ -i ] [ -- ] <pkg> ...
  options:
    -p,--pacman       set install command, default is `sudo pacman -U'
    -d,--pkgdir       set download directory (A.R.M. only), default is `~/Packages'
    -m,--arch         set search architecture (A.R.M. only), default is determined by `uname -m`
    -a,--noarm        don't search the A.R.M when nothing's available in cache
    -i,--noinstalled  don't show [installed] next to installed versions (speed up)

Installation

Install the AUR package here.

Grab the source from my git repo here.

published on 05 Dec 2009, tagged with aur arch linux bash

Display Manager

GDM, KDM, SLiM; they all serve one purpose. Accept a username/password and start X. The below accomplishes the same in the cleanest, simplest, most transparent way I know.

Simply put, if you're logging into the first tty and X isn't already running, start it.

Drop this at the bottom of whatever user's ~/.<shell>rc where you want it to apply:

if [[ $(tty) = /dev/tty1 ]] && [[ -z "$DISPLAY" ]]; then
  exec startx
fi

Make sure to put it at the bottom; I made the mistake of not realizing that any settings after the startx won't be applied in the X environment it starts (duh).

One added benefit here is that if X dies for any reason, you aren't left logged in on tty1 like you might be with some other display managers. This is because the builtin exec replaces the current process with the one specified.

For a slightly more featureful bash-based login mechanism, be sure to check out CDM.

published on 05 Dec 2009, tagged with arch linux bash

Aurget

About

A simple pacman-like interface to the AUR written in bash.

Aurget is designed to make the AUR convenient and speed up tedious actions. The user can decide to search, download, build, and/or install packages consistently through a config file or dynamically by passing arguments on the commandline.

The user can also choose to edit all or no PKGBUILDs, and enable or disable auto-dependency-resolution through the same means.

Checking dependencies comes with risks because PKGBUILDs need to be sourced. Please, if you're worried about this, be sure to view all PKGBUILDs before proceeding or use the config file or commandline options to disable this check from occurring and remove any associated risk.

You have been warned.

Usage

The screenshot:

Aurget Screenshot 

The help:

usage: aurget [ -v | -h | -S* [ --options ] [ -- ] <arguments> ]
  options:

        -S  <package>   process <package> using your default sync_mode

        -Sd <package>   download <package>
        -Sb <package>   download and build <package>
        -Sy <package>   download, build, and install <package>

        -Su             process available upgrades using your default sync_mode

        -Sdu            download available upgrades
        -Sbu            download and build available upgrades
        -Syu            download, build, and install available upgrades

        -Ss  <term>     search aur for <term>
        -Ssq <term>     search aur for <term>, print only package names
        -Sp  <package>  print the PKGBUILD for <package>
        -Si  <package>  print extended info for <package>

        --rebuild       always rebuild (ignore your cache)

        --devel         only affects -Su, add all development packages

        --deps          resolve dependencies
        --nodeps        don't resolve dependencies

        --edit          prompt to edit all pkgbuilds
        --noedit        don't prompt to edit any pkgbuilds

        --discard       discard source files after building
        --nodiscard     don't discard source files after building

        --nocolor       disable colorized output
        --noconfirm     auto-answer y to all prompts

        --ignore '<package> <package> <...>'
                        add additional packages to be ignored

        --mopt '-opt -opt ...'
                        add additional options to the build command

        --popt '-opt -opt ...'
                        add additional options to the install command

        -v, --version   display version info
        -h, --help      display this

        --option=value  set config <option> as <value> for this run only

The --option=value flag is powerful in that it can greatly customize an aurget command for specific packages that require it (like an nvidia-beta / nvidia-utils-beta upgrade which requires additional pacman and makepkg options to complete). Beware that this command sets the variable it's passed even if that's not a "valid" variable, so it may have unintended consequences (i.e. if you pass --HOME=foo or something silly).

Installation

Install the AUR package here.

Follow development via my git repo here.

Bugs and Features

If you've found a bug or want to request a feature, please let me know via email. If you can implement what you're looking for, please fork my git repo and send me a pull request.

Aurget does not and will not search or install from official repos. This is by design and will not be implemented even if you offer a patch.

Use packer or clyde if this is what you're looking for.

Known Bugs

If you pass an aur package before one of its dependencies as the targets to aurget, it will not reorder the targets and the installation will probably fail on the first package. Accounting for this would require a lot of unneeded code. The makepkg error will tell you the dep is not satisfied and it's easy enough to adjust your targets and run it again.

In a somewhat related way, it is possible, depending on the structure of multi-level dependencies, for aurget to miss a dependency. As an example:

# coding specifically for this scenario:

    pkg
    `-- depends
        |-- foo
        |-- bar
        |   `- depends
        |      `- baz
        `-- baz

# would break this one (and vice versa):

    pkg
    `-- depends
        |-- foo
        |-- baz
        `-- bar
            `- depends
               `- baz

Aurget will filter out the duplicate dependency (baz), but in one of the cases it will be placed behind the package that needs it and makepkg will fail. I consider this improper packaging and have decided not to try to code around it. If you encounter this scenario, I encourage you to post a comment on the aur page of the parent package explaining that baz is unneeded in its depends array because it's pulled in by bar.

Some aur packages report a bad url to their tarball in the JSON interface. Aurget checks the downloaded file, if it's not a valid archive, it will try http://aur.archlinux.org/packages/$package/$package.tar.gz as a fallback. If neither the JSON url nor the fallback url provide a valid archive, well, there's not much I can do.

published on 05 Dec 2009, tagged with aur arch linux bash