Selecting URLs via Keyboard in XTerm

In a recent effort to keep my latest laptop more standard and less customized, I’ve been experimenting with XTerm over my usual choice of rxvt-unicode. XTerm is installed with the xorg group, expected by the template ~/.xinitrc, and is the terminal opened by most window managers’ default keybindings.

The only downside so far has been the inability to select and open URLs via the keyboard. This is trivial to configure in urxvt, but seems impossible in xterm. Last week, not having this became painful enough that I sat down to address it.

UPDATE: After a few weeks of use, discovering and attempting to fix a number of edge-case issues, I’ve decided to stop playing whack-a-mole and just move back to urxvt. Your mileage may vary, and if the setup described here works for you that’s great, but I can no longer fully endorse it.

I should’ve listened to 2009 me.

Step 1: charClass

Recent versions of XTerm allow you to set a charClass value which determines what XTerm thinks are WORDs when doing a triple-click selection. If you do a bit of googling, you’ll find there’s a way to set this charClass such that it considers URLs as WORDs, so you can triple-click on a URL and it’ll select it and only it.

~/.Xresources:

xterm*charClass: 33:48,37-38:48,45-47:48,64:48,58:48,126:48,61:48,63:48,43:48,35:48

I don’t recommend trying to understand what this means.
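
Note that resources only take effect once they’re loaded. If your setup doesn’t already load ~/.Xresources on login, merge it in manually (or from ~/.xinitrc):

$ xrdb -merge ~/.Xresources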

Step 2: exec-formatted

Now that we can triple-click-select URLs, we can leverage another feature of modern XTerms, exec-formatted, to automatically send the selection to our browser, instead of middle-click pasting it ourselves:

~/.Xresources:

*VT100.Translations: #override \n\
  Alt <Key>o: exec-formatted("chromium '%t'", PRIMARY) select-start() select-end()

Step 3: select-needle

You might be satisfied there. You can triple-click a URL and hit a keybinding to open it, no patching required. However, I despise the mouse, so we need to avoid that triple-click.

Here’s where select-needle comes in. It’s a patch I found on the Arch forums that allows you to, with a keybinding, select the first WORD that includes some string, starting from the cursor or any current selection.

What this means is we can look for the first WORD containing “://” and select it. You can hit the keybinding again to search up for the next WORD, or hit our current exec-formatted keybinding to open it. Just like the functionality present in urxvt.

I immediately found the patch didn’t work in mutt, which is a deal breaker. It seemed to rely on the URL being above screen->cursorp, and mutt doesn’t really care about a cursor, so it often leaves it at (0, 0), well above any URLs on screen. I changed the algorithm to always start at the bottom of the terminal, regardless of where the cursor is. So far this has been working reliably.

I put the updated patch, along with a PKGBUILD for installing it, on GitHub. I’ll eventually post it to the AUR to make this easier, but for now:

git clone https://github.com/pbrisbin/xterm-select-needle
(cd ./xterm-select-needle && makepkg -i)
rm -r ./xterm-select-needle

Then update that ~/.Xresources entry to:

*VT100.Translations: #override \n\
  Alt <Key>u: select-needle("://") select-set(PRIMARY) \n\
  Alt <Key>o: exec-formatted("chromium '%t'", PRIMARY) select-start() select-end()

And that’s it.

17 Dec 2016, tagged with arch

Mocking Bash

Have you ever wanted to mock a program on your system so you could write fast and reliable tests around a shell script which calls it? Yeah, I didn’t think so.

Well I did, so here’s how I did it.

Cram

Verification testing of shell scripts is surprisingly easy. Thanks to Unix, most shell scripts have limited interfaces with their environment. Assertions against stdout can often be enough to verify a script’s behavior.

One tool that makes these kind of executions and assertions easy is cram.

Cram’s mechanics are very simple. You write a test file like this:

The ls command should print one column when passed -1

  $ mkdir foo
  > touch foo/bar
  > touch foo/baz

  $ ls -1 foo
  bar
  baz

Any line beginning with an indented $ is executed (with > allowing multi-line commands). The indented text below such commands is compared with the actual output at that point. If it doesn’t match, the test fails and a contextual diff is shown.

With this philosophy, retrofitting tests on an already working script is incredibly easy. You just put in a command, run the test, then insert whatever the actual output was as the assertion. Cram’s --interactive flag is meant for exactly this. Aces.
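
In practice that’s just the following (the test file name here is only an example):

$ cram test/main.t                # run the tests
$ cram --interactive test/main.t  # review failures and accept the new output interactively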

Not Quite

Suppose your script calls a program internally whose behavior depends on transient things which are outside of your control. Maybe you call curl which of course depends on the state of the internet between you and the server you’re accessing. With the output changing between runs, these tests become more trouble than they’re worth.

What’d be really great is if I could do the following:

  1. Intercept calls to the program
  2. Run the program normally, but record “the response”
  3. On subsequent invocations, just replay the response and don’t call the program

This means I could run the test suite once, letting it really call the program, but record the stdout, stderr, and exit code of the call. The next time I run the test suite, nothing would actually happen. The recorded response would be replayed instead, my script wouldn’t know the difference and everything would pass reliably and instantly.

In case you didn’t notice, this is VCR.

The only limitation here is that a mock must be completely effective while only mimicking the stdout, stderr, and exit code of what it’s mocking. A command that creates files which are used by other parts of the script, for example, could not be mocked this way.

Mucking with PATH

One way to intercept calls to executables is to prepend $PATH with some controllable directory. Files placed in this leading directory will be found first in command lookups, allowing us to handle the calls.

I like to write my cram tests so that the first thing they do is source a test/helper.sh, so this makes a nice place to do such a thing:

test/helper.sh

export PATH="$TESTDIR/..:$TESTDIR/bin:$PATH"

This ensures that a) the executable in the source directory is used and b) anything in test/bin will take precedence over system commands.

Now all we have to do to mock foo is add a test/bin/foo which will be executed whenever our Subject Under Test calls foo.

Record/Replay

The logic of what to do in a mock script is straightforward:

  1. Build a unique identifier for the invocation
  2. Look up a stored “response” by that identifier
  3. If not found, run the program and record said response
  4. Reply with the recorded response to satisfy the caller

We can easily abstract this in a generic, 12 line proxy:

test/bin/act-like

#!/usr/bin/env bash
program="$1"; shift
base="${program##*/}"

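# responses are stored under a directory keyed on the program name and an md5 of its arguments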
fixtures="${TESTDIR:-test}/fixtures/$base/$(echo $* | md5sum | cut -d ' ' -f 1)"

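# cache miss: run the real program once, recording its stdout, stderr, and exit code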
if [[ ! -d "$fixtures" ]]; then
  mkdir -p "$fixtures"
  $program "$@" >"$fixtures/stdout" 2>"$fixtures/stderr"
  echo $? > "$fixtures/exit_code"
fi

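# replay the recorded response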
cat "$fixtures/stdout"
cat "$fixtures/stderr" >&2

read -r exit_code < "$fixtures/exit_code"

exit $exit_code

With this in hand, we can record any invocation of anything we like (so long as we only need to mimic the stdout, stderr, and exit code).

test/bin/curl

#!/usr/bin/env bash
act-like /usr/bin/curl "$@"

test/bin/makepkg

#!/usr/bin/env bash
act-like /usr/bin/makepkg "$@"

test/bin/pacman

#!/usr/bin/env bash
act-like /usr/bin/pacman "$@"

Success!

After my next test run, I find the following:

$ tree test/fixtures
test/fixtures
├── curl
│   ├── 008f2e64f6dd569e9da714ba8847ae7e
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── 2c5906baa66c800b095c2b47173672ba
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── c50061ffc84a6e1976d1e1129a9868bc
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── f38bb573029c69c0cdc96f7435aaeafe
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── fc5a0df540104584df9c40d169e23d4c
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   └── fda35c202edffac302a7b708d2534659
│       ├── exit_code
│       ├── stderr
│       └── stdout
├── makepkg
│   └── 889437f54f390ee62a5d2d0347824756
│       ├── exit_code
│       ├── stderr
│       └── stdout
└── pacman
    └── af8e8c81790da89bc01a0410521030c6
        ├── exit_code
        ├── stderr
        └── stdout

11 directories, 24 files

Each hash-directory, representing one invocation of the given program, contains the full response in the form of stdout, stderr, and exit_code files.

I run my tests again. This time, rather than calling any of the actual programs, the responses are found and replayed. The tests pass instantly.

24 Aug 2013, tagged with bash, testing, mocks, cram, aurget, arch

Disable All The Caps

If you’re like me and absolutely abhor the Caps Lock key, you’ve probably figured out some way to replace it with a more suitable function. I myself have settled on the following command to make it a duplicate Ctrl key:

$ setxkbmap -option ctrl:nocaps

This works great when placed in ~/.xinitrc and run as X starts, but what about USB keyboards which are plugged in later? Perhaps you’re pair-programming, or moving your laptop between work and home. This happens frequently enough that I thought it’d be nice to use a udev rule to trigger the command auto-magically whenever a keyboard is plugged in.

The setup is fairly simple in the end, but I found enough minor traps that I thought it was appropriate to document things once I got it working.

It has come to my attention that configuring this via xorg.conf.d actually does affect hot-plugged keyboards.

/etc/X11/xorg.conf.d/10-keyboard.conf

Section "InputClass"
  Identifier "Keyboard Defaults"
  MatchIsKeyboard "yes"
  Option "XkbOptions" "ctrl:nocaps"
EndSection

While this renders the rest of this post fairly pointless, it is a much cleaner approach.

Script

You can’t just place the setxkbmap command directly in a udev rule (that’d be too easy!) since you’ll need enough decoration that a one-liner gets a bit cumbersome. Instead, create a simple script to add this decoration; then we can call it from the udev rule.

Create the file wherever you like, just note the full path since it will be needed later:

~/.bin/fix-caps

#!/bin/bash
(
  sleep 1
  DISPLAY=:0.0 setxkbmap -option ctrl:nocaps
) &

And make it executable:

$ chmod +x ~/.bin/fix-caps

Important things to note:

  1. We sleep 1 in order to give time for udev to finish initializing the keyboard before we attempt to tweak things.
  2. We set the DISPLAY environment variable since the context in which the udev rule will trigger has no knowledge of X (also, the :0.0 value is an assumption, you may need to tweak it).
  3. We background the whole command with & so that the script returns control back to udev immediately while we (wait a second and) do our thing in the background.

Rule

Now that we have a single callable script, we just need to run it (as our normal user) when a particular event occurs.

/etc/udev/rules.d/99-usb-keyboards.rules

SUBSYSTEM=="input", ACTION=="add", RUN+="/bin/su patrick -c /home/patrick/.bin/fix-caps"

Be sure to change the places I’m using my username (patrick) to yours. I had considered putting the su in the script itself, but eventually decided I might use it outside of udev when I’m already a normal user. The additional line-noise in the udev rule is the better trade-off to me.

And again, a few things to note:

  1. I don’t get any more specific than the subsystem and action. I don’t care that this runs more often than actually needed.
  2. We need to use the full path to su, since udev has no $PATH.

Testing

There’s no need to reload anything (that happens automatically). To execute a dry run via udevadm test, you’ll need the path to an input device. This can be copied out of dmesg from when one was connected or you could take an educated guess.

Once that’s known, execute:

# udevadm test --action=add /dev/path/to/whatever/input0
...
...
run: '/bin/su patrick -c /home/patrick/.bin/fix-caps'
...

As long as you see the run bit towards the bottom, you should be all set. At this point, you could unplug and re-plug your keyboard, or tell udev to re-process events for currently plugged in devices:

# udevadm trigger

This command doesn’t need a device path (though I think you can give it one); without it, it triggers events for all devices.

06 May 2013, tagged with arch, linux, udev

Aurget v4

Aurget was one of the first programs I ever wrote. It’s seen decent adoption as far as AUR helpers go and it’s gradually increased its feature set over the past number of years.

The codebase had gotten a bit crufty and hard to follow. I decided to refactor to more isolated functions which didn’t rely on so many global variables. This directed refactor has improved things greatly: pretty much any function can be reasoned about in isolation and the logic flows more understandably during program execution.

Unintentionally, this push to simplify and clarify actually resulted in a number of user-facing and developer-facing improvements. Go figure.

Listen – I get it. I’m a minimalist too. Why do we need 400 lines to do essentially this?
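
Something along the lines of this sketch (the URL pattern and flags here are just illustrative):

#!/usr/bin/env bash
# hypothetical minimal AUR "helper": fetch, extract, build, install
pkg="$1"

curl -fsSL -o "$pkg.tar.gz" \
  "https://aur.archlinux.org/cgit/aur.git/snapshot/$pkg.tar.gz" # snapshot URL is an assumption
tar xzf "$pkg.tar.gz"
(cd "$pkg" && makepkg -si)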

Funny thing is, no matter how hard I tried to chip aurget down closer to those essential curl | tar | makepkg parts, I would inevitably end up with sprawling, spaghetti code after the very first feature beyond what the above script provides. Available upgrades? Bloat. Dependency resolution? Bloat.

So after trying and giving up a number of times, I’ve decided aurget is actually a fairly simple and straightforward implementation of the features it currently provides, despite being so big.

In my opinion, it can’t get much simpler without dropping features, and I actually like the features. Oh well, back to the post…

Stupid Networking

In so many places, aurget would hit the RPC in a per-package way. Refactoring the networking made it obvious when I could use the multiinfo endpoint to query for many packages at once. This made many actions way faster.

Speaking of networking, it’s now consolidated into a single get function. This means I can more easily play with curl options, error handling or even caching.

Pass the Buck

Package installation is now handled by simply passing --install to makepkg. This has a number of positive consequences: goodbye sudo, and so long to any configuration around the pacman command. Split packages are also handled predictably, and you’ll never see a successful build end in a “package not found” error at install time.

Moar Options

Aurget will now pass any unknown options directly to makepkg (with a warning). This means anything makepkg supports, aurget supports too.

  • Run as root
  • Install a subset of a split package
  • Package signing

Etc…

Zomg DEBUG

One of the most frustrating aspects of working with aurget as its maintainer was troubleshooting. It was always both difficult and annoying to figure out what aurget was doing.

With the code now refactored such that the runtime behavior was more linear and understandable, a useful --debug flag could also be added.

I love this so much:

aurget --debug output screenshot

Bring on the bugs!

Seriously though, there may be some. I changed a whole lot and didn’t test exhaustively… Enjoy!

23 Apr 2013, tagged with bash, arch, aur

Systemd-User

BIG FAT WARNING

One thing to note, and the reason why I’m no longer using this setup: screen sessions started from within X cannot survive X restarts. If you don’t know what that means, don’t worry about it; if you do, you’ve been warned.

A while back, Arch switched to systemd for its init system. It’s pretty enjoyable from an end-user perspective: unit files are far easier to write and maintain than the old rc-scripts, the process groups are conceptually consistent and robust, and the centralized logging via journalctl is pretty sweet.

With a recent patch to dbus, it’s now possible to run a second, user-specific instance of systemd to manage your login session. In order to describe why we might want to do this, and before we go into detail on how, it’d be useful to first talk about how a minimal graphical login session can be managed without it.

Startx

When my machine first boots, I get a dead simple, tty-based login prompt. I don’t use a display manager and consider this my graphical login.

When I enter my credentials, normal shell initialization proceeds no differently than any other time. When ZSH (my shell) gets to the end of ~/.zshenv it finds the following:

[[ $TTY == /dev/tty1 ]] \
  && (( $UID ))         \
  && [[ -z $DISPLAY ]]  \
  && exec startx

Translation: if I’m logging into the first physical tty, I’m not the root user, and there’s no display already running, then start X.

More specifically, due to the exec there, it replaces itself with X. Without this, someone would find themselves at a logged-in shell if they were to kill X – something you can do even in the presence of most screen locks.

The startx command eventually sources ~/.xinitrc where we find commands for initializing my X environment: wallpaper setting, a few xset commands, starting up urxvtd, etc. After all that, my window manager is started.

This is all well and good, but there are a few improvements we can make by letting systemd manage this process.

First of all, the output of All The Things is hard to find. It used to be that calling startx on tty1 would start an X session on tty7, and the output of starting up X and any applications launched in xinitrc would at least be spammed back on tty1. That seems to no longer be the case: startx now starts X right there on tty1, hiding any output from those programs.

It’s also hard to see all your X-related processes together as one cohesive group. Some processes would remain owned by xinit or xmonad (my window manager) but some would fork off and end up a direct child of init. Other “one shot” commands would run (or not) and exit without any visibility about their success or failure.

Using something like systemd can address these points.

Systemd

Using systemd means setting up your own targets and service files under ~/.config/systemd/user just as you do for your main system. With these service files in place, we can simply execute systemd --user and everything will be started up (in parallel) and managed by the new instance of systemd.

We’ll be able to get useful status info about all the processes, manage them like system services, and see any output from them by using journalctl.

Instructions

First, install user-session-units from the AUR; it’ll also pull in xorg-launch-helper. This will provide us with an xorg.target which will handle getting X running.

Now, there’s a bit of a chicken-and-the-egg problem we have to deal with. I ran into it when I first moved to systemd at the system level too. In order to have your services start automatically when you start systemd, you have to enable them. In order to enable them, you need systemd to be running. In this case it’s a bit trickier since the user session can’t start without one of those services we’re going to enable, but we can’t enable it without starting the user session…

The recommended way around this is to (temporarily) add systemd --user & to the top of your current .xinitrc and restart X.

It’s unclear to me if you could get away with just running that command from some terminal right where you are – feel free to try that first.

Now that we’re back with a user session running, we can set up our “services”.

First, we’ll write a target and service file for your window manager. I use XMonad, so mine looks like this:

~/.config/systemd/user/xmonad.target

[Unit]
Description=XMonad
Wants=xorg.target
Wants=xinit.target
Requires=dbus.socket
AllowIsolate=true

[Install]
Alias=default.target

~/.config/systemd/user/xmonad.service

[Unit]
Description=xmonad
After=xorg.target

[Service]
ExecStart=/home/you/.cabal/bin/xmonad
Environment=DISPLAY=:0

[Install]
WantedBy=xmonad.target

You can see we reference xinit.target as a Want; this target will hold all the services we used to start as part of xinitrc. Let’s create the target for now; we’ll worry about the services later:

~/.config/systemd/user/xinit.target

[Unit]
Description=Xinit
Requires=xorg.target

Then, enable our main target:

$ systemctl --user enable xmonad.target

This should drop a symlink at default.target setting that as the target to be run when you execute systemd --user.
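
You can verify with a quick look (the path assumes the user unit directory we’ve been using):

$ ls -l ~/.config/systemd/user/default.target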

At this point, if you were to quit X and run that command, it should successfully start X and load XMonad (or whatever WM you’re using). The next thing we’ll do is write service files for all the stuff you currently have in xinitrc.

Here are some of the ones I’m using as examples:

~/.config/systemd/user/wallpaper.service

[Unit]
Description=Wallpaper setter
After=xorg.target

[Service]
Type=oneshot
ExecStart=/usr/bin/feh --bg-tile %h/Pictures/wallpaper.png
Environment=DISPLAY=:0

[Install]
WantedBy=xinit.target

It appears that we can use %h to represent our home directory, but only in certain ways. The above works, but trying to use %h in the path to the xmonad binary does not. Sigh.

~/.config/systemd/user/synergys.service

[Unit]
Description=Synergy Server
After=xorg.target

[Service]
Type=forking
ExecStart=/usr/bin/synergys --debug ERROR

[Install]
WantedBy=xinit.target

~/.config/systemd/user/urxvtd.service

[Unit]
Description=Urxvt Daemon
After=xorg.target

[Service]
Type=simple
ExecStart=/usr/bin/urxvtd

[Install]
WantedBy=xinit.target

With these in place, you can enable them all. I used the following shortcut:

$ cd .config/systemd/user
$ for s in *.service; do systemctl --user enable $s; done

Now, finally, simply running systemd --user should start X, bring up all your X-related services, and start your window manager.

How you do this going forward is up to you, but in my case I simply updated the last line in my ~/.zshenv:

[[ $TTY == /dev/tty1 ]] \
  && (( $UID ))         \
  && [[ -z $DISPLAY ]]  \
  && exec systemd --user

Benefits

Arguably, we’ve complected what used to be a pretty simple system – so what do we gain?

Well, my OCD loves seeing a nice, isolated process group of everything X-related:

Process group 

We can also now use a consistent interface for working with services at both the system and X level. Included in this interface is the super useful status:

Service status 
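
In plain commands, it’s the same interface you already know from the system level, just with --user (the service names are the ones defined above):

$ systemctl --user status urxvtd.service
$ systemctl --user restart wallpaper.service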

Finally, we get the benefits of journalctl – running it as our non-root user will show us all the messages from our X-related processes.

There are probably a number of additional systemd features that we can now leverage for our graphical environment, but I’m still in the process of figuring that all out.

References

Many thanks to gtmanfred for putting this idea in my head and going through the hard work of figuring it out and writing it up. The information has also been added to the Arch wiki.

20 Jan 2013, tagged with arch, systemd

Raury

tl;dr: it’s just like aurget but more stable and faster

Developing aurget was getting cumbersome. Whenever something went wrong, it was very difficult to track down or figure out. The lack of standard tools for things like URI escaping or JSON parsing was getting a bit annoying, and the structure of the code just annoyed me. There was also a lack of confidence when changes were made: I could only haphazardly test a handful of scenarios, so I was never sure if I’d introduced a regression.

I decided to write raury to be exactly as featureful as aurget, but different in the following ways:

  • Solid test coverage

raury coverage 

  • Useful debug output

raury debug output 

  • Clean code

raury code 

I think I’ve managed to hit on all of these with a happy side-effect too: it’s really fast. It takes less than a few seconds to churn through a complex tree of recursive dependencies. The same operation via aurget takes minutes.

Interested?

So anyway, if you’re interested in trying it out, I’d love for some beta testers.

Assuming you’ve got a working ruby environment (and the bundler gem), you can do the following to quickly play with raury:

$ git clone https://github.com/pbrisbin/raury && cd ./raury
$ bundle
$ bundle exec bin/raury --help

If you like it, you can easily install it for real:

$ rake install
$ raury --help

There’s also a simple script which just automates this clone-bundle-rake process for you:

$ curl https://github.com/pbrisbin/raury/raw/master/install.sh | bash

Also, tlvince was kind enough to create a PKGBUILD and even maintain an AUR package for raury. Check that out if it’s your preferred way to install.

30 Aug 2012, tagged with arch, aur, ruby

Dzen

Here’s for a small change of pace…

I’d like to talk about a tool I’ve all but forgotten I’m even using (and that’s a compliment to its stability and unobtrusiveness).

dzen is a great little application from the folks at suckless. It’s one of those do one thing and do it well types of tools. It’s probably not useful at all for anyone with a bloated –ahem, excuse me– featureful desktop environment or window manager (or both).

In my case, I’m using just XMonad with its beautiful simplicity. This means, of course, that there’s no out-of-the box… anything.

I’ve already covered some of this from an XMonad perspective, so this post is more about dzen’s general usefulness.

Volume

First up, a small visual notification when I adjust my volume:

ossvol screenshot 

It fades in (implicitly thanks to xcompmgr) for just a second when I adjust my volume and gives me that nice, unobtrusive indication of the volume level.

The actual volume adjustment can be done in many alsa or oss specific ways; for my implementation, just see the script as it is live. Completely separate of that, however, we can just use dzen to show the notification:

level=$(get_it_from_alsa_or_oss)

# we use a fifo to buffer the repeated commands that are common with 
# volume adjustment
pipe='/tmp/volpipe'

# define some arguments passed to dzen to determine size and color.
dzen_args=( -tw 200 -h 25 -x 50 -y 50 -bg '#101010' )

# similarly for gdbar
gdbar_args=( -w 180 -h 7 -fg '#606060' -bg '#404040' )

# spawn dzen reading from the pipe (unless it's in mid-action already).
if [[ ! -e "$pipe" ]]; then
  mkfifo "$pipe"
  (dzen2 "${dzen_args[@]}" < "$pipe"; rm -f "$pipe") &
fi

# send the text to the fifo (and eventually to dzen). oss reports 
# something like "15.5" on a scale from 0 to 25 so we strip the decimals 
# and send gdbar an optional "upper limit" argument
(echo ${level/.*/} 25 | gdbar "${gdbar_args[@]}"; sleep 1) >> "$pipe"

Pretty easy, and about as light-weight as you can get.

Status bar

Little known fact: you can use the ubiquitous conky to feed a simple statusbar via dzen. This means you can also use dzen escapes in your TEXT block to do cool things:

dzen screenshot 

My statusbar has the following “features”:

  • Shows CPU/Mem/Network
  • The time, of course
  • Shows “Now playing” information from MPD
  • The music state (playing/paused) can be clicked to toggle it
  • The track title, when clicked, will advance

And here’s the conkyrc to achieve it:

background no
out_to_console yes
out_to_x no
override_utf8_locale yes
update_interval 1
total_run_times 0
mpd_host 192.168.0.5
mpd_port 6600

TEXT
[ ^ca(1, mpc toggle)${mpd_status}^ca()

  ${if_mpd_playing}- ${mpd_elapsed}/${mpd_length}$endif ]

  ^fg(\#909090)^ca(1, mpc next)${mpd_title}^ca()^fg() by

  ^fg(\#909090)${mpd_artist}^fg() from

  ^fg(\#909090)${mpd_album}^fg()

  Cpu: ^fg(\#909090)${cpu}%^fg()

  Mem: ^fg(\#909090)${memperc}%^fg()

  Net: ^fg(\#909090)${downspeedf eth0} / ${upspeedf eth0}^fg()

  ${time %a %b %d %H:%M}

Line breaks added for clarity.

The most interesting part is the clickable areas: ^ca( ... )some text^ca() defines an area of “some text” that can be clicked. The two arguments inside the first parens are “which mouse button” and “what command to run”. Pretty simple and damn convenient.

Then all you’ve got to do is call this from your startup script:

$ conky -c ~/path/to/that | dzen2 -p -other -args

The -p option just means “persist” so the dzen will never close.

Wrap-up

This was just two examples of some uses for a simple “pipe some text in and see it” GUI toolkit – there are plenty others.

This echoes one of the great things about open-source: something like this is so small, so simple, it could never have survived marketing meetings, planning sessions or cost-benefit analyses – but here it is, and I find it oh-so-very-useful.

29 Apr 2012, tagged with arch, bash, linux

Dont Do That

I use Arch linux for a number of reasons. Mainly, it’s transparent and doesn’t hold your hand. You’re given simple, powerful tools and along with that comes the ability to shoot yourself in the foot. This extends to the community where we can and should help those newer than ourselves to manage this responsibility intelligently, but without holding their hand or taking any of that power away through obfuscation.

The Problem

There’s always been the potential for a particular command to break your system:

$ pacman -Sy foo

What this command literally means is, “update the local index of available packages and install the package foo”. Misguided users assume this is the correct way to ensure you receive the latest version of foo available. While it’s true that it is one way, it’s not the correct way. Moreover, using this command can easily break your system.

Let’s walk through an example to illustrate the problem:

  • A user has firefox 3.0 and gimp 6.1 installed, both of which depend on libpng>=1.0
  • An update comes out for libpng to version 1.2
  • Arch maintainers release libpng 1.2, firefox 3.0-2, and gimp 6.1-2 (the latter two now depending on libpng>=1.2)
  • An update comes out for firefox to version 3.1
  • Arch maintainers release firefox 3.1 which depends on libpng>=1.2
  • Our user (incorrectly) says pacman -Sy firefox hoping to get this new version
  • pacman (correctly) installs firefox 3.1 and libpng 1.2

There’s nothing here to tell pacman to update gimp, since libpng 1.2 is >= 1.0, which meets gimp’s dependency constraints.

However, our user’s gimp binary is actually linked directly to /usr/lib/libpng.so.1.0 and is now broken. Sadface.

In this example, the outcome is a broken gimp. However, if the shared dependency were instead something like readline and the broken package something like bash, you might be left with an unusable system requiring a rescue disk or reinstall. This of course led to a lot of unhappy users.

The Solution

There are a few options to avoid this, the two most viable being:

  1. Instruct users to not execute -Sy foo unless they know how foo and its dependencies will affect their system.
  2. Instruct Arch maintainers to use a hard constraint in these cases, so firefox and gimp should depend on libpng==1.0

If we went with option two, the user, upon running pacman -Sy firefox would’ve gotten an error for unresolvable dependencies stating that gimp requires libpng==1.0.

Going this route might seem attractive (especially to users) but it causes a number of repository management headaches dealing with exact version constraints on so many heavily depended-upon packages. The potential headache to the maintainers far outweighed the level of effort required to educate users on the pitfalls of -Sy.

So, option one it is.

The Wrong Advice

It was decided (using the term loosely) to tell anyone and everyone to always, no matter what, when they want to install foo, execute:

$ pacman -Syu foo

I argue that this advice is so opposite to The Arch Way, that it’s downright evil.

What this command really says is, “update your system and install foo”. Sure, that’s no big deal, it’s not harmful, may or may not be quick and ensures you don’t run into the trouble we’ve just described.

Coincidentally, this is also the correct way to ensure you get the absolute latest version of foo – if and only if foo had a new version released since your last system update.

My issue is not that it doesn’t work. My issue is not that it’s incorrect advice to those with that specific intention. My issue is that, nine times out of ten, that’s not the user’s intention. They simply want to install foo.

You’re now telling someone to run a command that does more than what they intended. It does more than is required. It’s often given out as advice with no explanation and no caveats. “Oh, you want to install something? -Syu foo is how you do that…” No, it really isn’t.

You’ve now wasted network resources, computational resources, the user’s time and you’ve taught them that the command to install foo is -Syu foo. Simplicity and transparency aside, that’s just lying.

If you’ve been given this advice, I’m sorry. You’ve been done a disservice.

The Correct Advice

To update your system:

$ pacman -Syu

To install foo:

$ pacman -S foo

To update your system and install foo:

$ pacman -Syu foo

Simple, transparent, no breakage. That’s the advice you give out.

Sure, by all means, if your true intention is to upgrade the system and install foo, you should absolutely -Syu foo but then, and only then, does that command make any sense.

</rant>

24 Mar 2012, tagged with arch, linux, pacman

Mairix

Mairix is a nice little utility for indexing and searching your emails. Its smooth integration with mutt is also a plus.

I used to use native mutt search, but it’s pretty slow. So far, mairix is giving me a good approximation of the google-powered search available in the web interface and it’s damn fast.

As I go through this setup, keep in mind the example config files are designed to work with my overall mutt setup; one which is described in two other posts here and here.

If you need a little context, check out my mutt-config repo which has a fully functioning ~/.mutt, example files for the other apps involved (offlineimap, msmtprc, and now mairix), and any scripts the setup needs.

Mairix

First, of course, install mairix:

pacman -S mairix

Then, setup a ~/.mairixrc which defines where your mails are and their type as well as where to store the results and index. Here’s an example:

# where you keep your mail
base=/home/<you>/Mail

# colon separated list of maildirs to index.
#
# I have two accounts each in their own subfolder. the '...' means there 
# are subdirectories to search as well; it's like saying GMail/* and 
# GMX/*
maildir=GMail...:GMX...

# I omit gmail's archive folder so as to prevent duplicate hits
omit=GMail/all_mail

# search results will be copied to base/<this folder> for viewing in 
# mutt
mfolder=mfolder

# and the path to the index itself
database=/home/<you>/Mail/.mairix_database

With that in place, run mairix once to build the initial index. This first run will be slower but in my tests, subsequent rebuilds were almost instant.

In situations like these, I’ll usually add a verbose flag so I can be sure things are working as expected.
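
For example:

mairix -v # rebuild the index with verbose output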

At this point, you could actually do some searching right from the commandline:

mairix some search term # search and populate mfolder
mutt -f mfolder         # open it in mutt

This wasn’t the usage I was after, however; I’m typically already in mutt when I want to search my mails.

Mutt

My original script for this purpose was pretty simple. It prompted for the search term and ran it. The problem was you then needed a separate keybind to actually view the results.

Thankfully, Scott commented and provided a more advanced script which got around this issue. Many thanks to Scott and whoever wrote the script in the first place.

This version does some manual tty trickery to build its own prompt, read your input, execute the search and open the results. All from just one keybind.

I merged the two scripts together into what you see below. The main changes from Scott’s version are the following:

  1. I kept my clear, purge, search method rather than relying on cron to keep the index up to date.
  2. I removed the append-search functionality; not my use-case.
  3. I removed the <return> from the ^G trap; it was getting executed by mutt and opening the first message in the inbox after a cancelled search.
  4. I fixed it so that backspace works properly in the prompt.

So, here it is:

#!/bin/bash

read_from_config() {
  local key="$1" config="$HOME/.mairixrc"

  sed '/^'"$key"'=\([^ ]*\) *.*$/!d; s//\1/g' "$config"
}

read -r base    < <(read_from_config 'base')
read -r mfolder < <(read_from_config 'mfolder')

# prevent rm / further down...
[[ -z "$base$mfolder" ]] && exit 1

searchdir="$base/$mfolder"

set -f                          # disable globbing.
exec < /dev/tty 3>&1 > /dev/tty # restore stdin/stdout to the terminal,
                                # fd 3 goes to mutt's backticks.
saved_tty_settings=$(stty -g)   # save tty settings before modifying
                                # them

# trap <Ctrl-G> to cancel search
trap '
  printf "\r"; tput ed; tput rc
  printf "/" >&3
  stty "$saved_tty_settings"
  exit
' INT TERM

# put the terminal in cooked mode. Set eof to <return> so that pressing
# <return> doesn't move the cursor to the next line. Disable <Ctrl-Z>
stty icanon echo -ctlecho crterase eof '^M' intr '^G' susp ''

set $(stty size) # retrieve the size of the screen
tput sc          # save cursor position
tput cup "$1" 0  # go to last line of the screen
tput ed          # clear and write prompt
tput sgr0
printf 'Mairix search for: '

# read from the terminal. We can't use "read" because, there won't be
# any NL in the input as <return> is eof.
search=$(dd count=1 2>/dev/null)

# clear the folder and execute a fresh search
( rm -rf "$searchdir"
  mairix -p
  mairix $search
) &>/dev/null

# fix the terminal
printf '\r'; tput ed; tput rc
stty "$saved_tty_settings"

# to be executed by mutt when we return
printf "<change-folder-readonly>=$mfolder<return>" >&3

A non-trivial macro provides the interface to the script. It sets a variable called my_cmd to the output of the script, which should be the actual change-folder command, then executes it.

macro generic ,s "<enter-command>set my_cmd = \`$HOME/.mutt/msearch\`<return><enter-command>push \$my_cmd<return>" "search messages"

I’ve gotten used to “comma-keybinds” from setting that as my localleader in vim. It’s nice because it very rarely conflicts with anything existing and it’s quite fast to type.

One downside which I’ve been unable to fix (and believe me, I’ve tried!) is that if you press ^G to cancel a search but you’ve typed a few letters into the prompt, mutt will read those letters as commands (via the push) and execute them.

The only thing I could do was prefix those characters with something. I’ve decided to use /. That makes mutt see it as a normal search which you can execute or ^G again to cancel. Annoying, but better than mutt flailing around executing random commands…

I haven’t had the time yet to learn all the tricks, but here are some of the more useful-looking searches from man mairix:

Useful searches

   t:word                             Match word in the To: header.

   c:word                             Match word in the Cc: header.

   f:word                             Match word in the From: header.

   s:word                             Match word in the Subject: header.

   m:word                             Match word in the Message-ID: 
                                      header.

   b:word                             Match word in the message body 
                                      (text or html!)

   d:[start-datespec]-[end-datespec]  Match messages with Date: headers 
                                      lying in the specific range.

Multiple body parts may be grouped together, if a match in any of them 
is sought.

   tc:word  Match word in either the To: or Cc: headers (or both).

   bs:word  Match word in either the Subject: header or the message body 
            (or both).

   The a: search pattern is an abbreviation for tcf:; i.e. match the 
   word in the To:, Cc: or From: headers.  ("a" stands for "address" in 
   this case.)

The "word" argument to the search strings can take various forms.

   ~word        Match messages not containing the word.

   word1,word2  This matches if both the words are matched in the 
                specified message part.

   word1/word2  This matches if either of the words are matched in the 
                specified message part.

   substring=   Match any word containing substring as a substring

   substring=N  Match any word containing substring, allowing up to N 
                errors in the match.

   ^substring=  Match any word containing substring as a substring, with 
                the requirement that substring occurs at the beginning 
                of the matched word.

Happy searching!

03 Jul 2011, tagged with arch, bash, linux, mutt

Pacprune

A fairly long time ago, there was a thread on the Arch forums about clearing your pacman cache.

Pacman’s normal -Sc will remove all versions of any packages that are no longer installed and -Scc will clear that plus old versions of packages that are still installed.

The poster wanted a way to run -Scc but also keep the last 1 or 2 versions back from installed. There was no support for this in pacman directly, so a bit of a bash-off ensued.

I wrote a pretty crappy script which I posted there; it lay around in my ~/.bin collecting dust for a while, but I recently rewrote it. I’m pretty proud of the result for its effectiveness and succinctness, so I think it deserves a little discussion.

The methodology of the two versions is the same, but this new version leans heavily on good ol’ unix shell-scripting principles to provide the exact same functionality in way less code, memory, and time.

Approach

The first approach discussed on the thread was to parse filenames for package and version, then do a little sort-grepping to figure out which versions to keep and which versions to discard. This method is fast, but provably inaccurate if a package name contains numbers on the end (think gtk2 or python2), since the split between name and version becomes ambiguous.

I went a different way.

For each package, pull the .PKGINFO file out of the archive, parse the pkgname and pkgversion variables out of it, then do the same sort-grepping to figure out what to discard.

My first implementation of this algorithm was really bad. I’d parse and write pkgname|pkgversion to a file in /tmp. Then I’d grep unique package names using -m to return at most the number of versions you want to keep (of each package) and store that in another file. I’d then walk those files and rm the packages.

Ick.

Needs moar unix

The aforementioned ugliness, plus some configuration and error checking weighed in at 162 lines of code, used two files, and was dirt slow. I decided to re-attack the problem with a unix mindset.

In a nutshell: write small units that do one thing and communicate via simple text streams.

The first unit this script needs is a parser. It should accept a list of packages (relative file paths) on stdin, parse and output two space-separated values on stdout: name and path. The path will be needed by the next unit down the line, so we need to pass it through.

parse() {
  local package opt

  while read -r package; do
    case "$package" in
      *gz) opt='-qxzf' ;;
      *xz) opt='-qxJf' ;;
    esac

    bsdtar -O $opt "$package" .PKGINFO |\
        awk -v package="$package" '/^pkgname/ { printf("%s %s\n", $3, package) }'
  done
}

11 lines and damn fast. Thank god for bsdtar’s -q option. It tells the extraction to stop after finding the file I’ve requested. Since the .PKGINFO file is usually the first thing in the archive, we barely do any work to get the values.

It’s also done completely in RAM by piping tar directly to awk.

Step two would be the actual pruning. Accept that same space-separated list on stdin and for any package versions beyond the ones we want to keep (the 3 most recent), echo the full path to the package file on stdout.

prune() {
  local name package last_seen='' num_seen=0

  while read -r name package; do
    [[ -n "$last_seen" ]] && [[ "$last_seen" != "$name" ]] && num_seen=0

    num_seen=$((num_seen+1))

    # print full path
    [[ $num_seen -gt $versions_to_keep ]] && readlink -f "$package"

    last_seen="$name"
  done
}

Just watch the list go by and count the number of packages for each name. I’m ensuring that the list is coming in reverse sorted already, so once we see the number of packages we want to keep, any same-named packages after that should be printed.

So simple.

This function can get away with being simple because it doesn’t take into account what’s actually installed on your system. It just keeps the most recent 3 versions of each unique package in the cache. Therefore, to do a full clean, run pacman -Sc first to remove all versions of uninstalled software. Then use this script to clear all but installed plus the two previous versions. This assumes the highest version in the cache is the installed version which may or may not be true in all cases.

All that’s left is to make that reverse sorted list and pipe it through.

find ./ -maxdepth 1 -type f -name '*.pkg.tar.[gx]z' | LC_ALL='C' sort -r | parse | prune

So the whole script (new version) weighs in at ~30 lines (with whitespace) and I claim it is exactly as feature-rich as the first version.

I know what you’re saying: there’s no definition of the cache, no optional safe-list vs actual-removing behavior, there’s no removing at all!

Well, you’re just not thinking unix.

$ cd /some/cache/of/packages
$ pacprune                  # as a normal user, just print the list that 
                            # should be removed -- totally safe.
$ pacprune | sudo xargs rm  # then do the actual removal

You’re free to get as fancy as you’d like too…

$ archiveit() { sudo mv "$@" ~/pkg_archive/; }
$ pacprune | xargs archiveit

And the only configuration is setting the versions_to_keep variable at the top of the script.

The script can be found in my scripts repo.

11 Jun 2011, tagged with linux, bash, arch

Web Preview

Recently, I made the switch (again) away from Uzbl as my main browser. Jumanji is a really nice browser in that it’s as light as Uzbl but feels more polished. It provides almost all of the features I had to build into Uzbl myself right out of the box. The tab-completion on the commands and urls is incredibly useful and negates the need for all the external history and bookmark scripts that I was using with Uzbl. The only part I really miss is (obviously) the controllability and configurability.

I only ever used this controllability for one thing: previewing web pages as I write them. I had a nice little script that would go out and ask each Uzbl instance what its URI was, and if it matched the URI version of the filename I was currently editing, it would send the reload command to that browser.

You just cannot do something like this in any other browser.

So I figured, if I relegate Uzbl to this one single simple use, its configurability could be leveraged such that I could strip out anything that didn’t serve this one purpose and the browser would be incredibly responsive.

In the end, I’m actually amazed at how well this worked out. During my testing, I actually spent a good ten minutes troubleshooting a nonexistent bug because the page was reloading so fast that I thought nothing was happening.

This works nicely for me because my desktop is my web server. All I have to do is vim /srv/http/pages/foo.html and I’m editing http://localhost/pages/foo.html directly.

I’m not saying it’s impossible to pull this off with a remote server, this just makes things easier. It’s up to you to port my script for use in a remote server setting.

First thing you’ll need is my script, download the raw version into your $PATH.

Adjust the in-script variables srv_dir and srv_url to match your environment. These variables are used to turn a filename like /srv/http/pages/foo.html into a url like http://localhost/pages/foo.html.

Recently the script has changed slightly to work with my new framework; I now just define file_url as a direct modification of $2.
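
If you’d rather not read the full script, here’s a rough sketch of the idea (the socket naming, the use of socat, and the exact uzbl commands are assumptions on my part; the real script also checks each instance’s current URI so only the matching browser is reloaded):

#!/bin/bash
# sketch only, not the real webpreview script
srv_dir='/srv/http'
srv_url='http://localhost'

action="$1"                        # --open or --reload
file_url="${2/$srv_dir/$srv_url}"  # /srv/http/pages/foo.html -> http://localhost/pages/foo.html

for socket in /tmp/uzbl_socket_*; do  # socket_dir = /tmp, per the config below
  case "$action" in
    --open)   echo "uri $file_url" | socat - "unix-connect:$socket" ;;
    --reload) echo "reload"        | socat - "unix-connect:$socket" ;;
  esac
done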

Make sure you’ve got uzbl installed and uzbl-core is also in your $PATH.

Add the following uber simplistic configuration file for uzbl at ~/.config/uzbl/config:

set socket_dir         = /tmp
set status_background  = #303030
set uri_section        = \@[\@uri]\@
set status_format      = <span font_family="Verdana" foreground="#909090">@uri_section</span>
set title_format_short = Uzbl - \@TITLE
set title_format_long  = @title_format_short

This just makes sure a socket is placed in /tmp and makes the status bar a little more pleasing on the eyes.

Only the socket_dir declaration is actually needed for the script to function.

Finally, add the following to your ~/.vimrc:

command! Open :! webpreview --open %
command! Reload :! webpreview --reload %

au BufWritePost /srv/http/pages/* silent Reload

This defines an Open and Reload command to be used directly within vim and also sets up an auto command to fire whenever I hit :w on a page I’m editing.

In your ~/.vimrc you could make these conditional for html and php filetypes and, as you can see, the auto-refresh only happens if I’m editing a file under my server’s pages directory. You’ll want to do something similar so that the script doesn’t run for all files all the time.

That’s all that’s needed. Fire up your favorite text editor and give it a try.

26 Jul 2010, tagged with arch, uzbl, website

HTPC

I’ve recently finished work on an HTPC. The goal was to run a media center WM on a box that looked appropriate in my cabinet by my TV using a remote. That much I’ve done; all that’s left is tweaking the remote functions and adding to the collection.

Hardware

The first thing I got was the case; I wanted one with a built in remote and a low enough profile to fit in my TV cabinet and not look out of place.

Enter Lian Li’s PC-C39. Let me say, it’s a great case. It’s small, quiet, and looks great. One problem, the remote is garbage.

It doesn’t work more than 2 feet away from the sensor. The remote is RF (another flaw IMO) and the sensor is actually over-shielded by the case itself. Solution? Slide open the top of the case (even just an inch), your range will increase tenfold. I did this for a while but wanted something better – more on that later; anyone reading this should buy the PC-C37B which is the same case but sans the trash remote (and $50 bucks).

Next, I stopped in at MicroCenter to pick up the internal components. I knew I wanted to spend five to six hundred bucks and get a decently powered machine; one that could keep up with whatever HD content I wanted to run without getting too hot.

Here’s what I ended up with:

  • Intel DP55WB mATX 129.99
  • ASUS GF210 47.99
  • Intel Core i5 650 139.99
  • OCZ 4GB DDR3 1600 CL8 119.99
  • OCZ 600W Stealth 69.99
  • WD 1TB SATA 99.99
  • LG 22x Burner 29.99
  • Total 639.93

After the usual mail-in rebates, it’ll be just over $550. You could definitely achieve a great system for less, but I wanted something more high-end (and I had just gotten my tax return), so I probably spent a little more than I had to.

So now that I’ve got a fully functioning box, it’s time to fix my remote situation.

Enter Logitech’s Harmony 300. I originally bought this thinking it was primarily a PC Media Center remote and would come with its own USB IR receiver. It did not. I was pissed.

In the end, I’m really glad I made that mistake because the remote’s awesome. You configure it by plugging it into a computer and using an in-browser control panel (luckily it’s mac+firefox compatible), just add devices by Manufacturer number, and that’s it.

To get it working with the computer was a bit more involved, but not much.

First, I had to get my own USB IR Receiver. Luckily, amazon had a Dell RC6 receiver for like $18 bucks, sold. Then it was just a matter of adding its MFR# to the harmony setup and starting lirc.

If you’re on Arch, it’s like this:

pacman -S lirc
cp /usr/share/lirc/remotes/mceusb/lircd.conf.mceusb /etc/lirc/lircd.conf
/etc/rc.d/lircd start

You can test it by typing irw and pressing some buttons.

You’ll want to add lirc_mceusb2 to MODULES and lircd to DAEMONS in /etc/rc.conf.

If you find on reboot that your remote’s not working, check that /dev/lirc0 exists (it needs to). If it doesn’t, try a different USB port; that solved it for me.

Now I’ve got just one remote that runs my whole living room. The girlfriend was pleased. There was much rejoicing.

Software

I went with XBMC. Once installed, I set up an autologin by editing /etc/inittab (assuming xbmc is your default username):

## Only one of the following two lines can be uncommented!
# Boot to console
#id:3:initdefault:
# Boot to X11
id:5:initdefault:

# snip...

x:5:respawn:/bin/su xbmc -l -c "/bin/bash --login -c startx >/dev/null 2>&1"

And then adding the following to that user’s ~/.xinitrc:

exec /usr/bin/ck-launch-session /usr/bin/dbus-launch --exit-with-session /usr/bin/xbmc --standalone -fs

Most of the above is out of date now. I defer to the Arch wiki for details on setting up XBMC.

I share my media from the main desktop PC using samba, so I just added the shares in XBMC.

Once added, XBMC scans your sources using some filename regexps that caught pretty much everything I threw at it. It downloaded plot summaries and fanart for all my movies and TV shows, and it of course uses your music collection’s tags (which I’m a bit OCD about anyway).

The result is an instantly full and beautiful library. Here are some screenshots:

HTPC Shot  HTPC Shot  HTPC Shot  HTPC Shot 

Remote configuration

XBMC found and used a hotplugged keyboard, the case’s built-in RF remote, and my lirc controlled mceusb remote all without issue right out of the box using default button mappings. I was impressed.

If you’d like to customize your remote behavior, there are two files involved: ~/.xbmc/userdata/Lircmap.xml and ~/.xbmc/userdata/keymaps/remote.xml. Defaults can be found in /opt/xbmc/system on an Arch install; just copy them and start editing.

Lircmap.xml will translate the device/button (as reported by irw) to an XBMC button string. Through this file, you can make it so that ... OK mceusb will register as “select”. Then, in remote.xml you can actually map select to an XBMC action, like “Select”.
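
As a rough illustration (the element and device names here are from memory, so treat this as a sketch to adapt rather than copy), Lircmap.xml would contain something like:

<lircmap>
  <remote device="mceusb">
    <select>OK</select>
  </remote>
</lircmap>

and remote.xml something like:

<keymap>
  <global>
    <remote>
      <select>Select</select>
    </remote>
  </global>
</keymap>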

It’s all explained here and here.

The last little issue I noticed was that after playing a DVD, I couldn’t eject. This was fixed by adding the following line to the file /etc/sysctl.conf:

dev.cdrom.lock = 0

A reboot is required for the change to take effect.

With the update to the 2.6.34 kernel, alsa now has support for audio over hdmi with my chipset (Asus/Nvidia GF210).

It wasn’t exactly trivial to get it working though. Basically it took some trial and error to figure out that the audio out I needed was card 1 device 7, so plughw:1,7.

Sadly, specifying this plughw as a custom output device in XBMC’s audio setup meant no dmix, which meant no crossfading (two sounds at once).

Thanks to Themaister on the arch forums though, I actually got around this quite quickly.

Save the following as /etc/asound.conf:

pcm.dmixer {
  type dmix
  ipc_key 2048
  slave {
    pcm "hw:1,7"
    period_size 512
    buffer_size 4096
    rate 48000
    format S16_LE
  }
  bindings {
    0 0
    1 1
  }
}

pcm.!default {
  type plug
  slave.pcm dmixer
}

pcm:iec958 {
  type plug
  slave.pcm dmixer
}

Reboot.

In the XBMC audio setup, specify default as the output device and iec958 as the passthrough device.

That’s it!

01 May 2010, tagged with arch, linux, home theater

Controlling MPlayer

MPlayer

MPlayer is an extremely versatile media player, I’ve begun to use it for absolutely any media that I’m not already piping through mpd. One day while going through my XMonad config, I decided it’d be convenient to bind my media keys to control MPlayer. I already had them bound to control volume/mpd, but I figured Meta + key combinations could be the MPlayer equivalents.

A bit of googling later and I had the solution: a fifo!

Fifos

Fifos (named pipes; the name comes from “first in, first out”) are special files on your system that can be used for communication between processes; kind of a poor man’s socket. You can play with them like this to get the idea:

# in one terminal:
mkfifo ./fifo
tail -f ./fifo

# and in some other terminal:
echo some text > ./fifo

MPlayer setup

The MPlayer manpage states that it can read commands out of a fifo by using the input flag. Combine that with the fact that MPlayer will read any flags from ~/.mplayer/config and we’re 90% there.

mkfifo ~/.mplayer_fifo
vim ~/.mplayer/config

Add the following in that file:

input = file=/home/username/.mplayer_fifo

Now fire up a movie. Go to some other terminal and do the following:

echo pause > ~/.mplayer_fifo

If MPlayer didn’t pause, double check the above. It works for me.

Keybinds

Now it’s really up to you if you want to run these via a wrapper script, or send the commands directly from your keybind configuration. Here’s an example wrapper script if you decide to go this way:

#!/bin/bash

fifo="$HOME/.mplayer_fifo"
command="$*"

echo "$command" > "$fifo" 2>/dev/null

Place it in your $PATH, chmod +x it, and bind some keys to script 'play', script 'pause', etc.

Personally, I put a simple function (of basically the above) in my xmonad.hs, then call that from the keybinds. Here’s the relevant section of my config:

myKeys = [ ...

         -- Mod+ to control MPlayer
         , ("M-<XF86AudioPlay>", mPlay "pause"   ) -- play/pause mplayer
         , ("M-<XF86AudioStop>", mPlay "stop"    ) -- stop mplayer
         , ("M-<XF86AudioPrev>", mPlay "seek -10") -- seek back 10 seconds
         , ("M-<XF86AudioNext>", mPlay "seek 10" ) -- seek forward 10 seconds

         , ...
         ] 

         where

           mPlay s = spawn $ "echo " ++ s ++ " > $HOME/.mplayer_fifo"

I’m using EZConfig notation in my keybindings.

I’ll leave it up to you to figure out your WM’s keybind configuration or use some generic tool like xbindkeys.

08 Apr 2010, tagged with arch, bash, linux

Irssi

Irssi is an IRC client. If that sentence made no sense, then read no further. This post outlines my current irssi setup as I think it’s quite nice and others may wish to copy it.

Note: I’ve since moved to weechat. If anyone’s interested, that config can be found here.

Screenshot

Irssi Screenshot 

Config

For the longest time I didn’t really touch ~/.irssi/config except to set up auto connections etc. Then I started using awl.pl (which I’ll describe in the scripts section). This meant I no longer had a use for one of the statusbars. So for the sake of completeness, here is the change I made to get the statusbar look you see in the screenshot:

statusbar = {

    # <snip>

    default = {
      window = {

        # disable the default bar containing window list
        disabled = "yes";

        # window, root
        type = "window";
        # top, bottom
        placement = "bottom";
        # number
        position = "0";
        # active, inactive, always
        visible = "active";

        # list of items in statusbar in the display order
        items = {
          barstart = { priority = "100"; };
          time = { };
          user = { };
          window = { };
          window_empty = { };
          lag = { priority = "-1"; };
          more = { priority = "-1"; alignment = "right"; };
          barend = { priority = "100"; alignment = "right"; };
          active = { };
          act = { };
        };
      };

      # <snip>

      prompt = {
        type = "root";
        placement = "bottom";
        # we want to be at the bottom always
        position = "100";
        visible = "always";
        items = {
          barstart = { priority = "100"; };
          time = { };

          user = { }; # added my current nick here b/c it was the only useful
                      # item in the disabled bar

          prompt = { priority = "-1"; };
          prompt_empty = { priority = "-1"; };
          # treated specially, this is the real input line.
          input = { priority = "10"; };
        };

      };

      # <snip>

    };
  };

My full config (sans passwords) can be downloaded here.

Theme

The theme I currently use was originally generane.theme; I’ve gradually hacked away at it until, at this point, it’s entirely unlike that theme. I just call it pbrisbin.theme and it can be found with the above dotfiles. It’s a really grey theme to go with my overall desktop. Messages from me are a bright-ish grey, with messages to me as bright yellow. Actions (/me stuff) are magenta and offset to the left which I really like.

Bitlbee

Bitlbee is a killer app. It sets up a small-footprint IRC server on your local machine, hooks into your various chat protocols (gchat, aim, facebook, twitter), and lets you /join or /query them as if they were any other #channel.

This is great for someone like me who’s gotten used to /exec -o foo and other tricks that aren’t possible in a normal chat client.

There are a lot of guides online for setting this up so I’m just going to list out a few facts that it took me a minute to figure out or get used to:

  • In the &bitlbee channel, any text not prefixed with a buddy’s nick is interpreted as a command to bitlbee itself.

  • If you decide to chat with buddies by sending nick-prefixed messages within the main &bitlbee channel, it’s not a chatroom and they can’t see things you send to other nicks.

  • Whether you decide to talk to a buddy via a nick-prefixed message or a query, bitlbee remembers this and any future conversations initiated by them will come in the same way by default.

Scripts

And the best part, the scripts. All of these can be easily googled for so I won’t provide links; the versions on my box could even be out of date anyway.

cap_sasl.pl - in an effort to streamline my dotfiles management, I was looking for ways to get plaintext passwords out of dotfiles. One such way is to use SASL for authentication to freenode. After getting the script, setup can be done via in-irssi commands as many existing how-tos outline. I got gummed up, however, because I fudged up the server name (freenode vs Freenode) when setting up sasl compared to when I had initially set up the connection…

This is why I prefer to do direct, in-file configuration. So, here are the portions of .irssi/config to support this:

servers = (
  {
    address = "irc.freenode.net";
    chatnet = "freenode";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_capath = "/etc/ssl/certs/";
    autoconnect = "yes";
  },

  ...

And place a file as ~/.irssi/sasl.auth with the following contents:

freenode	<primary nick>	<password>	DH-BLOWFISH

It’s important that you use your primary nick or it won’t work. For instance, I always talk as brisbin but that’s just a secondary nick associated with my primary brisbin33, so I had to use brisbin33 in the sasl setup.

nm.pl - this handles random/unique nick coloring and nick alignment. Personally, I /set neat_maxlength 13.

awl.pl - the advanced window list (sometimes called adv_windowlist.pl). This gives that nice statusbar with the channel names and numbers. Channels turn bright white when active and magenta if I’m highlighted. Personally, I use /set awl_display_key "%w$N.$H$C$S" and awl_maxlines 1.

trackbar.pl - this puts a dashed mark in the buffer at the last point you viewed the conversation. I really like this script; it’s simple but effective. If you hop around between windows this is a great little addition to your .irssi/scripts/autorun.

screen_away.pl - thank you rson for turning me onto this. Once I started using irssi exclusively in screen (as outlined here) this script really started coming in handy. It just auto-sets you as away when you detach your screen session and brings you back when you reattach. This means Ctrl-a d logs me off, and when I do reattach I’ve got all my messages waiting for me right there in window 1.

queryresume.pl - now that I’m using bitlbee as my main IM client, I’m spending a lot of time in queries. This script gives you a little bit of context by printing the last few lines of your most recent query with this person that you’ve just started a new query with.

hilightwin.pl - this script captures any text that matches your /hilight rules, whether it’s nick or keyword-based. Anything you’ve set up as a hilight will be captured in a dedicated window. Couple this with a smart layout where your hilightwin is dedicated to the top 8 lines of your client, and you can always see who’s talking at you, no matter what you’re doing. Any google search for this script will not only give you the source, but also the commands required to setup the smart layout to go along with it.
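For reference, the smart layout setup usually boils down to a few commands typed in irssi itself; this is just a sketch from memory, and the guides that accompany the script are authoritative:

/window new split
/window name hilight
/window size 6
/layout save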

link_titles.pl - this is a script that I recently wrote as a learning exercise in perl. It watches the conversation for urls. When it finds one, it visits that page and prints the title element back to the window where the link was sent. Most actual channels I’m in will have a bot that does this, but I wanted to print titles for links sent to me in a query via gchat or aim. The source for this is on my github, hopefully more scripts will show up there soon.

20 Mar 2010, tagged with arch, irc, linux

Automounting

It seems that as users (myself included) progress through using a distribution like Arch Linux, they reach certain stages. Like when you realize how amazing find -exec is. Or crossing over from god, vim is a pain in the ass! to jesus, why doesn’t everyone use this?

I find one well-known stage is how can I automount my USB drives? This usually comes early on as a new Arch user ditches GNOME or KDE in favor of something lighter, something more minimalistic, something they can actually be proud to show off in the screenshot thread. Well, ditch the DE and you lose all those nifty little automagical tools, like gnome-volume-manager and the like.

So what do you do? hal should take care of it. Some ck-launch-session black magic might do the trick. Edit some *.fdi file to get it going?

No. Udev does just fine.

Udev

Udev has a little folder called /etc/udev/rules.d. In this folder are ‘rules files’, each named something like 10-some-crap.rules. They are processed one by one each time some udev ‘event’ occurs, like, say, plugging in a flashdrive.

Go google udev rules, there’s a lot out there for all sorts of nifty things.

Someone smarter than I added a handful of useful rules to the Arch udev wiki page. The one I use is as follows:

# adjust this line to skip any persistent drives
# i.e. KERNEL!="sd[d-z][0-9]", ...
KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"

# Global mount options
ACTION=="add", ENV{mount_options}="relatime,users"

# Filesystem specific options
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id --label %N", ENV{dir_name}="%c"
ACTION=="add", PROGRAM!="/lib/initcpio/udev/vol_id --label %N", ENV{dir_name}="usbhd-%k"
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"
ACTION=="remove", ENV{dir_name}=="?*", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"
LABEL="media_by_label_auto_mount_end"

This file defines how udev reacts to usb drives (/dev/sda1, etc) being added and removed. You plug in a flashdrive: if it has a label, it’s mounted at /media/<label>; if not, it’s mounted at /media/usbhd-sda1 (for example). umount and remove the drive, and that directory under /media is removed. It’s a beautiful thing.

Automount

One problem I found with this is that it works really well. When a device is added it is mounted, period. So whenever I tried to partition a drive, as soon as the partition was initialized it would get mounted, and the partitioning tool would fail with drive is mounted.

For this reason, I had to write a script. I always have to write a script.

What this does is simply write the above rules file or remove it. This effectively turns automounting on or off. So there you go, simple handling of usb flash drive with nothing but udev required.
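As an illustration, such a toggle could be as simple as the following (the rule file name is made up for the example; the rules content is the block above):

#!/bin/bash
# toggle automounting by moving the udev rule out of the way and back

rules='/etc/udev/rules.d/11-media-by-label-auto-mount.rules'

if [[ -f "$rules" ]]; then
  sudo mv "$rules" "$rules.disabled"   # rule present: disable automounting
  echo 'automounting off'
else
  sudo mv "$rules.disabled" "$rules"   # rule stashed: re-enable automounting
  echo 'automounting on'
fi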

DVDs and CDs

Just a bit about optical media. The above won’t solve any issues related to that. I’ll just say this though, if I need to do anything related to CDs or DVDs, I can just reference /dev/sr0 directly. Burning images, playing DVDs, it all works just fine using /dev directly. And when I need to mount it, I’ll do it manually. I think a line in fstab will get /dev/sr0 to mount to /media/dvd if that’s what you’re after.
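If you go that route, something along these lines in /etc/fstab should do it (the options are a sensible guess, not gospel; create /media/dvd first):

/dev/sr0   /media/dvd   udf,iso9660   ro,user,noauto,unhide   0 0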

12 Jan 2010, tagged with arch, linux, bash

Backups

This post is very out of date. The scripts which are its subject no longer exist as I now use two much simpler scripts which can be found in my scripts repo.

Backups are extremely important. In Linux, with a little effort and hard drive space, one can easily come up with a fully automated backup solution to suit any needs. Here, I’d like to outline my setup. Feel free to take it and adapt it to your needs.

I’ll go through what’s required, how and why I do it the way I do, as well as the shortcomings of how I’m doing it.

Requirements

My main box runs on one 500G hard drive. So far, this has suited me well even with my extensive movie and music collection. I decided I wanted to have a daily backup and a monthly backup and only one copy of each, so I went out and got a 1TB hard drive, split it, and now use that for both.

All you need is space, so whether you use an internal drive like me, an external USB, or some off-site scp/rsync situation is up to you; you’ll just have to modify my below script(s) to suit your setup.

How I do it

The first is a backup script that runs via cron daily and monthly. It can be downloaded from my git repo.

The script defines an array of files to include and another to exclude:

includes=( /srv/http /home/patrick /etc /usr /var /boot )
excludes=( Downloads lost+found )

It takes those directories and just rsyncs them with the backup location:

/mnt/backup/daily/
|-- boot
|-- etc
|-- http
|-- patrick
|-- usr
`-- var

/mnt/backup/monthly/
|-- boot
|-- etc
|-- http
|-- patrick
|-- usr
`-- var
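At its core, the script is just an rsync loop over those arrays; here’s a stripped-down sketch (the real script does more, and $backup_dir stands in for the daily or monthly target):

backup_dir='/mnt/backup/daily'

for dir in "${includes[@]}"; do
  # -a preserves permissions and times, --delete keeps an exact mirror,
  # and each exclude entry is expanded into an --exclude= flag
  rsync -a --delete "${excludes[@]/#/--exclude=}" "$dir" "$backup_dir"/
done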

It also creates two text files: one that lists all your installed packages less those that are foreign (from the AUR) and another that lists those foreign packages.

These lists can be used to quickly reinstall everything you had installed at the time of the backup.

pacman -Qqe | grep -Fvx "$(pacman -Qqm)" > "$backup_dir/paclog"
pacman -Qqm > "$backup_dir/aurlog"

Another script I use constantly is retrieve, which takes the filenames passed on the commandline and looks for them in your backups. If found, the files are retrieved and re-inserted into your live system.

This is great if you’ve seriously screwed up your xorg.conf (something not in git) and you want to just roll back to what you had yesterday.

The only trick to it is that it has to handle the fact that my backup stores patrick/ at top level even though it’s /home/patrick/ on the live system.

retrieve is also no longer available in my git repo.

The last script that I have, I haven’t had to use (knocks on wood). This restore script is intended to be used after a crash and clean reinstall to restore your system from the directories made by my backup script.

You guessed it, restore is also no longer in the repo.

Why mine sucks

This solution works for me, but it has its shortcomings. Here are a few things to be aware of if you decide to implement something like what I have.

Not off-site, or even out-of-box.

If my apartment burns down, my backups are useless. To mitigate this, I’ve started taking manual copies of my monthly backup and storing them on a separate drive in a fireproof box.

Backups are not rolling

This isn’t so bad for the dailies, but my monthly backup occurs every month on the first; this means if you have an issue that’s more than two days old, and you happen to notice on the 2nd, you don’t have a backup old enough to fix it.

Untested

I’ve never had to use restore, though I do use retrieve all the time. Anyone will tell you, an untested backup solution is no solution at all. Guess I’m just too lazy to hose my install to test it. Worse comes to worst, I know the backed up data is good; if my restore script fails I can always manually copy everything over. I pretty much did this last time I installed a new Arch box; as I tend to reuse configs, just grabbing them off of my main box’s backups really sped up the process.

03 Jan 2010, tagged with arch, linux, bash

Wifi Pipe

So the other day when I was using wifi-select (awesome tool) to connect to a friend’s hot-spot, I realized, “hey! This would be great as an openbox pipe menu!”

I’m fairly decent in bash and I knew both netcfg and wifi-select were in bash so why not rewrite it that way?

Wifi-Pipe

A simplified version of wifi-select which will scan for networks and populate an openbox right-click menu item with available networks. Displays security type and signal strength. Click on a network to connect via netcfg the same way wifi-select does it.

Zenity is used to ask for a password and notify of a bad connection. One can optionally remove the netcfg profile if the connection fails.

Requirements

  • netcfg
  • zenity
  • A NOPASSWD entry in sudoers for this script
  • An entry in your menu.xml
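For the last two items, the pieces look roughly like this (the script path and menu id are hypothetical; see the repo for the real instructions):

# /etc/sudoers (edit with visudo)
username ALL = NOPASSWD: /usr/bin/wifi-pipe

<!-- ~/.config/openbox/menu.xml -->
<menu id="wifi-menu" label="Wifi" execute="sudo /usr/bin/wifi-pipe" />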

The script now has its own github repo so it doesn’t fall victim to bitrot. Please head there for more installation details and a copy of the source.

05 Dec 2009, tagged with arch, linux, bash, openbox

Text From CLI

This is a short but extensible script to allow text messaging (to verizon customers) straight from the commandline.

Setup requires simply a means to send email from the commandline along with a small script to pass the message off to <number>@vtext.com.

If you already have a CLI mailing solution you can just copy the script and go ahead and change the mail command to mutt, ssmtp, mailx, or whatever you’re using.

Email from CLI

I use msmtp to send mails in mutt so it was easy for me to adapt that into a CLI mailing solution.

Here’s a ~/.msmtprc for gmail:

# msmtp config file

# gmail
account gmail
host smtp.gmail.com
port 587
protocol smtp
auth on
from username@gmail.com
user username@gmail.com
password gmail_password
tls on
tls_nocertcheck

account default : gmail

Right now, as-is, it’s possible for you to echo "Some text" | msmtp someone@somewhere.com and it’ll email just fine. I’d like to make things a little more flexible.

By editing ~/.mailrc we can change the mail command to use whatever binary we want instead of the default /usr/bin/sendmail. It should have the following contents:

set sendmail=/usr/bin/msmtp

Now, anytime your system mails anything on your behalf, it’ll use msmtp.
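A quick sanity check (substitute a real address, obviously):

echo 'it works' | mail -s 'msmtp test' someone@somewhere.com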

The Script

The script started out very simply, here it is in its original form:

#!/bin/bash

if [[ $# -lt 2 ]]; then
  echo "usage: $0 [number] [some message]"
  exit 1
fi

number="$1"; shift

echo "$*" | mail "$number@vtext.com"

With this little sendtext.sh script in your back pocket, you can send yourself texts from remind, cron, rtorrent, or any other script to notify you (or other people) of whatever you want.

sendtext.sh 1234567890 'This is a test text, did it work?'

Sure did.

Now, at some point, Ghost1227 got bored again.

He took my sendtext script and ran with it. Added loads of carriers and some new option handling.

I took his update of my script and re-updated it myself. Mainly syntactical changes and minor options handling, just to tailor it to my needs.

The new version with my and ghost’s changes can be downloaded from my git repo.

I also added simple phone book support. When sending a message to someone, pass -s <number> <name> and the contact will be saved to a text file. After that, you can just sendtext <name> and the most recent match out of this text file will be used. The service is saved as well (either the default or the one passed as an argument at the time of -s).

05 Dec 2009, tagged with arch, linux, bash

Screen Tricks

Hopefully, if you’re a CLI junky, you’ve heard of GNU/screen. And if you’ve heard of it, chances are you’re using it.

Screen is a terminal multiplexer. This means that you can start screen in one terminal (say, your SSH connection) and open any number of terminals inside that terminal. This lets me have mutt, ncmpcpp, and a couple of spare shells all open inside my single PuTTY window at work.

This is a great use of screen, but the benefits don’t have to end there. When I’m not at work but at home, I can use screen to run applications which I don’t want to end if I want to change terminals, log in and out, or even if all of X comes crashing down around me.

See, screen can detach (default binding: C-a d). Better still, it will auto-detach if the terminal it’s in crashes or you log out. You can then re-attach it later, from any other ssh session, tty, or X terminal.

This is great for apps like rtorrent and irssi, it’s also great for not losing any work if your ssh connection gets flaky. Just re-connect and re-attach.

So now I have a dilemma. When I’m at work, I want to start screen and get a few fresh tabs set up as I’ve defined in ~/.screenrc: mutt, ncmpcpp, and three shells. But at home I don’t want those things to load, I instead want only rtorrent or only irssi to load up in the new screen window.

Furthermore, if rtorrent or irssi are already running in some detached screen somewhere, I don’t want to create an entirely new session, I’d rather grab that one and re-attach it here.

The goal was to achieve this without changing the commands I run day to day, affecting any current keybinds, or using any overly complicated scripts.

So, how do I do this as simply and easily as possible? Environment variables.

How to do it

First we set up one main ~/.screenrc which is always called. Then we set up a series of “screenrc extensions” which only load the apps in the screen session via a stanza of screen -t <name> <command> lines.

Next, we dynamically choose which “screenrc extension” to source from the main ~/.screenrc via two environment variables which are either exported from ~/.bashrc (the default) or explicitly set when running the command (the specialized cases).

So, set up a ~/.screenrc like this:

# screen config file; ~/.screenrc

# put all our main screen settings like
# term, shell, vbell, hardstatus whatever
#
# then add this:

# sources environment-specific apps
source "$SCREEN_CONF_DIR/$SCREEN_CONF"

# you can even add some tabs you'll always
# open no matter what

# then always open some terms
screen -t bash $SHELL
screen -t bash $SHELL
screen -t bash $SHELL

Now, how does screen know what “screenrc extension” to source? By setting those variables up in ~/.bashrc:

# dynamically choose which tabs load in screen
export SCREEN_CONF_DIR="$HOME/.screen/configs"
export SCREEN_CONF="main"

In a clean environment, screen will source that default ~/.screen/configs/main, which will:

# example: screen -t [name] [command]
screen -t mail mutt
screen -t music ncmpcpp

Why is this useful? Because, now I can do something like this:

SCREEN_CONF=rtorrent screen

And screen will instead source that explicitly set ~/.screen/configs/rtorrent which yields:

# example: screen -t [name] [command]
screen -t torrents rtorrent 

Et voilà, no mutt or ncmpcpp, but rtorrent instead (same thing happens with irssi).

Oh, but it gets better! Now we’ll add some aliases to ~/.bashrc to complete the whole thing:

alias irssi='SCREEN_CONF=irssi screen -S irssi -D -R irssi'
alias rtorrent='SCREEN_CONF=rtorrent screen -S rtorrent -D -R rtorrent'

Oh how beautiful, how simple, how easy. I type rtorrent, what happens?

Screen checks for any running screens with session-name “rtorrent” and re-attaches here and now. If none are found, screen opens a new screen (using the rtorrent file) and names the session “rtorrent” so we can -D -R it explicitly thereafter.

All of this happens for irssi too, and can be used for any app (or multi-app setup) you want.

Pretty KISS if I do say so.

05 Dec 2009, tagged with arch, linux, screen, bash

Goodsong

If you’re like me (which you’re probably not…) you enjoy listening to your music with the great music playing daemon known as mpd. You also have your entire collection on shuffle.

Occasionally, I’ll fall into a valley of bad music and end up hitting next far too much to get to a good song. For this reason, I wrote goodsong.

What is it?

Essentially, you press one key command to say the currently playing song is good; then press a different key to say play me a good song.

Goodsong accomplishes exactly that. It creates a playlist file which you can auto-magically add the currently playing song to with the command goodsong. Subsequently, running goodsong -p will play a random track from that same list.

Here’s the --help:

usage: goodsong [ -p | -ls ]

options:
      -p,--play   play a random good song
      -ls,--list  print your list with music dir prepended

      none        note the currently playing song as good

Installation

Goodsong is available in its current form in my git repo.

Usage

Using goodsong is easy. You can always just run it from the CLI, but I find it’s best when bound to keys. I’ll leave the method for that up to you; xbindkeys is a nice WM-agnostic way to bind some keys, or you can use a WM-specific configuration to do so.

Personally, I keep Alt-g as goodsong and Alt-Shift-g as goodsong -p.
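If you do go the xbindkeys route, entries along these lines in ~/.xbindkeysrc should do it (an untested sketch):

"goodsong"
  alt + g

"goodsong -p"
  alt + shift + g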

You’re going to have to spend some time logging songs as “good” before the -p option becomes useful.

I recently received a patch from a reader for this script. It adds a few features which I’ve happily merged in.

  • Various methods are employed to try and determine exactly what mpd.conf you’re currently running with at the time
  • The goodsong list is now a legitimate playlist file stored in your playlist_directory as specified in mpd.conf

05 Dec 2009, tagged with arch, bash, linux

Dvdcopy

Do not use this for bad things, m’kay?

What it looks like

Dvdcopy Shot 

Usage

usage: dvdcopy [ --option(=<argument>) ] [...]

~/.dvdcopy.conf will be read first if it's found (even if --config
is passed). for syntax, see the help entry for the --config option.
commandline arguments will overrule what's defined in the config.

invalid options are ignored.

options:

  --config=<file>               read any of the below options from a
                                file, note that you must strip the
                                '--' and set any argument-less
                                options specifically to either true
                                or false

                                there is no error if <file> doesn't
                                exist

  --directory=<directory>       set the working directory, default
                                is ./dvdcopy

  --keep_files                  keep all intermediate files; note
                                that they will be removed the next
                                time dvdcopy is run regardless of
                                this option

  --device=<file>               set the reader/burner, default is
                                /dev/sr0

  --title=<number>              set the title, default is longest

  --size=<number>               set the desired output size in KB, 
                                default is 4193404

  --limit=<number>              set the number of times to attempt a
                                read/burn before giving up, default
                                is 15

  --mpeg_only                   stop after transcoding the mpeg
  --dvd_only                    stop after authoring the dvd
  --iso_only                    stop after generating the iso

  --mpeg_dir=<directory>        set a save location for the
                                intermediate mpeg file, default is
                                blank -- don't save it

  --dvd_dir=<directory>         set a save location for the
                                intermediate vob folder, default is
                                blank -- don't save it

  --iso_dir=<directory>         set a save location for the
                                intermediate iso file, default is
                                blank -- don't save it

  --mencoder_options=<options>  pass additional arbitrary arguments
                                to mencoder, multiple options should
                                be quoted and there is no validation
                                on these; you'll need to know what
                                you're doing. the options are placed
                                after '-dvd-device <device>' but
                                before all others

  --quiet                       be quiet
  --verbose                     be verbose

  --force                       disable any options validation,
                                useful if ripping from an image file

  --help                        print this

What’s it do?

Pop in a standard DVD9 (~9GB) and type dvdcopy. The script will calculate the video bitrate required to create an ISO under 4.3GB (standard DVD5). It will then use mencoder to create an authorable image and burn it back to a disc playable on any standard player.

Defaults are sane (IMO), but can be adjusted through the config file or the options passed at runtime (or both). I’ve now added a lot of cool features as described in the help.
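Following the config syntax described in the help above, a ~/.dvdcopy.conf might look something like this (the key=value form and the values themselves are only illustrative):

# ~/.dvdcopy.conf
directory=/home/username/dvdcopy
device=/dev/sr0
keep_files=false
size=4193404
verbose=true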

How to get it

Install the AUR package here.

Grab the source from my git repo here.

05 Dec 2009, tagged with aur, arch, bash, linux

Downgrade

Downgrade eases downgrading packages in Arch Linux.

Examples

Downgrade some packages, checking both local cache and the A.R.M.:

$ downgrade foo bar

Downgrade a package, looking in only local cache:

$ NOARM=1 downgrade foo

Downgrade a package, looking in only the A.R.M.:

$ NOCACHE=1 downgrade foo

Downgrade a package, looking only in local cache, and favoring su over sudo even when sudo is available:

$ NOARM=1 NOSUDO=1 downgrade foo

Installation

Install the AUR package here.

For more details, reporting Issues, etc, see the GitHub project.

05 Dec 2009, tagged with aur, arch, linux, bash

Display Manager

GDM, KDM, SLiM; they all serve one purpose: accept a username/password and start X. The below accomplishes the same in the cleanest, simplest, most transparent way I know.

# Note: a $SHELL of either bash or zsh is assumed
# (zsh sets $TTY for you; bash needs the tty fallback below)

if [[ "${TTY:-$(tty)}" == /dev/tty1 ]] && [[ -z $DISPLAY ]]; then
  exec startx
fi

These are the last lines of my ~/.zprofile, but they would work as well in ~/.bashrc if that’s your preferred shell.

One added benefit here is that if X dies for any reason, you aren’t left logged in on tty1 like you might be with some other display managers. This is because the built-in exec replaces the current shell process with the one specified.

05 Dec 2009, tagged with arch, linux, bash

Aurget

A simple pacman-like interface to the AUR written in bash.

About

Aurget is designed to make the AUR convenient and speed up tedious actions. The user can decide to search, download, build, and/or install packages consistently through a configuration file or dynamically by passing arguments on the command-line.

Sourcing user-created PKGBUILDs comes with risks. Please, if you’re worried about this, be sure to view all PKGBUILDs before proceeding.

You have been warned.

Installation

Study the Arch wiki, then manually build and install aurget.

Follow development via GitHub.

Usage

See aurget --help, man 1 aurget, and man 5 aurgetrc.

Reporting Bugs

If you’ve found a bug or want to request a feature, please let me know via GitHub Issues. If you can implement what you’re looking for, please open a Pull Request, preferably including tests.

Aurget does not and will not search or install from the official repositories. This is by design and will not be implemented even if you offer a patch. Use another AUR Helper if this is what you’re looking for.

05 Dec 2009, tagged with aur, arch, linux, bash