Selecting URLs via Keyboard in XTerm

In a recent effort to keep my latest laptop more standard and less customized, I’ve been experimenting with XTerm over my usual choice of rxvt-unicode. XTerm is installed with the xorg group, expected by the template ~/.xinitrc, and is the terminal opened by most window managers’ default keybindings.

The only downside so far has been the inability to select and open URLs via the keyboard. This is trivial to configure in urxvt, but seems impossible in xterm. Last week, not having this became painful enough that I sat down to address it.

UPDATE: After a few weeks of use, discovering and attempting to fix a number of edge-case issues, I’ve decided to stop playing whack-a-mole and just move back to urxvt. Your mileage may vary, and if the setup described here works for you that’s great, but I can no longer fully endorse it.

I should’ve listened to 2009 me.

Step 1: charClass

Recent versions of XTerm allow you to set a charClass value which determines what XTerm thinks are WORDs when doing a triple-click selection. If you do a bit of googling, you’ll find there’s a way to set this charClass such that it considers URLs as WORDs, so you can triple-click on a URL and it’ll select it and only it.

~/.Xresources:

xterm*charClass: 33:48,37-38:48,45-47:48,64:48,58:48,126:48,61:48,63:48,43:48,35:48

I don’t recommend trying to understand what this means.

Step 2: exec-formatted

Now that we can triple-click-select URLs, we can leverage another feature of modern XTerms, exec-formatted, to automatically send the selection to our browser, instead of middle-click pasting it ourselves:

~/.Xresources:

*VT100.Translations: #override \n\
  Alt <Key>o: exec-formatted("chromium '%t'", PRIMARY) select-start() select-end()

Step 3: select-needle

You might be satisfied there. You can triple-click a URL and hit a keybinding to open it, no patching required. However, I despise the mouse, so we need to avoid that triple-click.

Here’s where select-needle comes in. It’s a patch I found on the Arch forums that allows you to, with a keybinding, select the first WORD that includes some string, starting from the cursor or any current selection.

What this means is we can look for the first WORD containing “://” and select it. You can hit the keybinding again to search up for the next WORD, or hit our current exec-formatted keybinding to open it. Just like the functionality present in urxvt.

I immediately found the patch didn’t work in mutt, which is a deal-breaker. It seemed to rely on the URL being above screen->cursorp, and mutt doesn’t really care about a cursor, so it often leaves it at (0, 0), well above any URLs on screen. So I changed the algorithm to always start at the bottom of the terminal, regardless of where the cursor is. So far this has been working reliably.

I put the updated patch, along with a PKGBUILD for installing it, on GitHub. I’ll eventually post it to the AUR to make this easier, but for now:

git clone https://github.com/pbrisbin/xterm-select-needle
(cd ./xterm-select-needle && makepkg -i)
rm -r ./xterm-select-needle

Then update that ~/.Xresources entry to:

*VT100.Translations: #override \n\
  Alt <Key>u: select-needle("://") select-set(PRIMARY) \n\
  Alt <Key>o: exec-formatted("chromium '%t'", PRIMARY) select-start() select-end()

And that’s it.

17 Dec 2016, tagged with arch

Disable All The Caps

If you’re like me and absolutely abhor the Caps Lock key, you’ve probably figured out some way to replace it with a more suitable function. I myself have settled on the following command to make it a duplicate Ctrl key:

$ setxkbmap -option ctrl:nocaps

This works great when placed in ~/.xinitrc and run as X starts, but what about USB keyboards which are plugged in later? Perhaps you’re pair-programming, or moving your laptop between work and home. This happens frequently enough that I thought it’d be nice to use a udev rule to trigger the command auto-magically whenever a keyboard is plugged in.

The setup is fairly simple in the end, but I found enough minor traps that I thought it was appropriate to document things once I got it working.

It has come to my attention that configuring this via xorg.conf.d actually does affect hot-plugged keyboards.

/etc/X11/xorg.conf.d/10-keyboard.conf

Section "InputClass"
  Identifier "Keyboard Defaults"
  MatchIsKeyboard "yes"
  Option "XkbOptions" "ctrl:nocaps"
EndSection

While this renders the rest of this post fairly pointless, it is a much cleaner approach.

Script

You can’t just place the setxkbmap command directly in a udev rule (that’d be too easy!) since you’ll need enough decoration that a one-liner gets a bit cumbersome. Instead, create a simple script to add this decoration; then we can call it from the udev rule.

Create the file wherever you like, just note the full path since it will be needed later:

~/.bin/fix-caps
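
A minimal version of the script, consistent with the notes below (the :0.0 display value is an assumption, as noted):

#!/bin/bash
#
# Give udev a moment to finish initializing the keyboard, then set
# our option against the running X display. The whole thing is
# backgrounded so udev gets control back immediately.
(
  sleep 1
  DISPLAY=:0.0 setxkbmap -option ctrl:nocaps
) &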

And make it executable:

$ chmod +x ~/.bin/fix-caps

Important things to note:

  1. We sleep 1 in order to give time for udev to finish initializing the keyboard before we attempt to tweak things.
  2. We set the DISPLAY environment variable since the context in which the udev rule will trigger has no knowledge of X (also, the :0.0 value is an assumption, you may need to tweak it).
  3. We background the whole command with & so that the script returns control back to udev immediately while we (wait a second and) do our thing in the background.

Rule

Now that we have a single callable script, we just need to run it (as our normal user) when a particular event occurs.

/etc/udev/rules.d/99-usb-keyboards.rules

SUBSYSTEM=="input", ACTION=="add", RUN+="/bin/su patrick -c /home/patrick/.bin/fix-caps"

Be sure to change the places I’m using my username (patrick) to yours. I had considered putting the su in the script itself, but eventually decided I might use it outside of udev when I’m already a normal user. The additional line-noise in the udev rule is the better trade-off to me.

And again, a few things to note:

  1. I don’t get any more specific than the subsystem and action. I don’t care that this runs more often than actually needed.
  2. We need to use the full path to su, since udev has no $PATH.

Testing

There’s no need to reload anything (that happens automatically). To execute a dry run via udevadm test, you’ll need the path to an input device. This can be copied out of dmesg from when one was connected or you could take an educated guess.

Once that’s known, execute:

# udevadm test --action=add /dev/path/to/whatever/input0
...
...
run: '/bin/su patrick -c /home/patrick/.bin/fix-caps'
...

As long as you see the run bit towards the bottom, you should be all set. At this point, you could unplug and re-plug your keyboard, or tell udev to re-process events for currently plugged in devices:

# udevadm trigger

This command doesn’t need a device path (though I think you can give it one); without it, it triggers events for all devices.

06 May 2013, tagged with arch

Aurget v4

Aurget was one of the first programs I ever wrote. It’s seen decent adoption as far as AUR helpers go and it’s gradually increased its feature set over the past number of years.

The codebase had gotten a bit crufty and hard to follow. I decided to refactor into more isolated functions which didn’t rely on so many global variables. This directed refactor has improved things greatly: pretty much any function can be reasoned about in isolation, and the logic flows more understandably during program execution.

Unintentionally, this push to simplify and clarify actually resulted in a number of user-facing and developer-facing improvements. Go figure.

Listen – I get it. I’m a minimalist too. Why do we need 400 lines to do essentially this?
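
Essentially this, that is (a sketch; the download URL reflects my understanding of the AUR layout at the time, so treat the details as illustrative):

#!/bin/bash
#
# the essentials: fetch the tarball, extract, build, and install
pkg=$1
curl -s "https://aur.archlinux.org/packages/${pkg:0:2}/$pkg/$pkg.tar.gz" |
  tar xzf - && cd "$pkg" && makepkg -si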

Funny thing is, no matter how hard I tried to chip aurget down closer to those essential curl | tar | makepkg parts, I would inevitably end up with sprawling, spaghetti code after the very first feature beyond what the above script provides. Available upgrades? Bloat. Dependency resolution? Bloat.

So after trying and giving up a number of times, I’ve decided aurget is actually a fairly simple and straightforward implementation of the features it currently provides, despite being so big.

In my opinion, it can’t get much simpler without dropping features, and I actually like the features. Oh well, back to the post…

Stupid Networking

In so many places, aurget would hit the RPC in a per-package way. Refactoring the networking made it obvious when I could use the multiinfo endpoint to query for many packages at once. This made many actions way faster.
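
Where the old code issued one info request per package, a single multiinfo request covers them all. The URL shapes (illustrative, per the RPC interface of the time):

https://aur.archlinux.org/rpc.php?type=info&arg=foo
https://aur.archlinux.org/rpc.php?type=multiinfo&arg[]=foo&arg[]=bar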

Speaking of networking, it’s now consolidated into a single get function. This means I can more easily play with curl options, error handling or even caching.

Pass the Buck

Package installation is now handled by simply passing --install to makepkg. This has a number of positive consequences: goodbye sudo, and so long to any configuration around the pacman command. Split packages are also handled predictably, and you’ll never see a successful build followed by a “package not found” error at install time.

Moar Options

Aurget will now pass any unknown options directly to makepkg (with a warning). This means anything makepkg supports, aurget supports too.

  • Run as root
  • Install a subset of a split package
  • Package signing

Etc…

Zomg DEBUG

One of the most frustrating aspects of working with aurget as its maintainer was troubleshooting. It was always both difficult and annoying to figure out what aurget was doing.

With the code now refactored such that the runtime behavior was more linear and understandable, a useful --debug flag could also be added.

I love this so much:

[screenshot: aurget --debug output]

Bring on the bugs!

Seriously though, there may be some. I changed a whole lot and didn’t test exhaustively… Enjoy!

23 Apr 2013, tagged with arch

Systemd-User

BIG FAT WARNING

One thing to note, and the reason why I’m no longer using this setup: screen sessions started from within X cannot survive X restarts. If you don’t know what that means, don’t worry about it; if you do, you’ve been warned.

A while back, Arch switched to systemd for its init system. It’s pretty enjoyable from an end-user perspective: unit files are far easier to write and maintain than the old rc-scripts, the process groups are conceptually consistent and robust, and the centralized logging via journalctl is pretty sweet.

With a recent patch to dbus, it’s now possible to run a second, user-specific instance of systemd to manage your login session. In order to describe why we might want to do this, and before we go into detail on how, it’d be useful to first talk about how a minimal graphical login session can be managed without it.

Startx

When my machine first boots, I get a dead simple, tty-based login prompt. I don’t use a display manager and consider this my graphical login.

When I enter my credentials, normal shell initialization proceeds no differently than any other time. When ZSH (my shell) gets to the end of ~/.zshenv it finds the following:
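
The snippet amounts to something like this one-liner (a sketch; the exact tests are reconstructed from the translation below):

[[ $TTY == /dev/tty1 ]] && (( UID )) && [[ -z $DISPLAY ]] && exec startx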

Translation: if I’m logging into the first physical tty, I’m not the root user, and there’s no display already running, then start X.

More specifically, due to the exec there, it replaces itself with X. Without this, someone would find themselves at a logged-in shell if they were to kill X – something you can do even in the presence of most screen locks.

The startx command eventually sources ~/.xinitrc where we find commands for initializing my X environment: wallpaper setting, a few xset commands, starting up urxvtd, etc. After all that, my window manager is started.

This is all well and good, but there are a few improvements we can make by letting systemd manage this process.

First of all, the output of All The Things is hard to find. It used to be that calling startx on tty1 would start an X session on tty7, and the output of starting X and of any applications launched in xinitrc would at least be spammed back on tty1. That no longer seems to be the case: startx starts X right there on tty1, hiding any output from those programs.

It’s also hard to see all your X-related processes together as one cohesive group. Some processes would remain owned by xinit or xmonad (my window manager) but some would fork off and end up a direct child of init. Other “one shot” commands would run (or not) and exit without any visibility about their success or failure.

Using something like systemd can address these points.

Systemd

Using systemd means setting up your own targets and service files under ~/.config/systemd/user just as you do for your main system. With these service files in place, we can simply execute systemd --user and everything will be started up (in parallel) and managed by the new instance of systemd.

We’ll be able to get useful status info about all the processes, manage them like system services, and see any output from them by using journalctl.

Instructions

First, install user-session-units from the AUR; it’ll also pull in xorg-launch-helper. This will provide us with an xorg.target which will handle getting X running.

Now, there’s a bit of a chicken-and-egg problem we have to deal with. I ran into it when I first moved to systemd at the system level too. In order to have your services start automatically when you start systemd, you have to enable them. In order to enable them, you need systemd to be running. In this case it’s a bit trickier since the user session can’t start without one of those services we’re going to enable, but we can’t enable it without starting the user session…

The recommended way around this is to (temporarily) add systemd --user & to the top of your current .xinitrc and restart X.

It’s unclear to me if you could get away with just running that command from some terminal right where you are – feel free to try that first.

Now that we’re back with a user session running, we can set up our “services”.

First, we’ll write a target and service file for your window manager. I use XMonad, so mine looks like this:

~/.config/systemd/user/xmonad.target

[Unit]
Description=XMonad
Wants=xorg.target
Wants=xinit.target
Requires=dbus.socket
AllowIsolate=true

[Install]
Alias=default.target

~/.config/systemd/user/xmonad.service

[Unit]
Description=xmonad
After=xorg.target

[Service]
ExecStart=/home/you/.cabal/bin/xmonad
Environment=DISPLAY=:0

[Install]
WantedBy=xmonad.target

You can see we reference xinit.target as a Want; this target will hold all the services we used to start as part of xinitrc. Let’s create the target for now and worry about the services later:

~/.config/systemd/user/xinit.target

[Unit]
Description=Xinit
Requires=xorg.target

Then, enable our main target:

$ systemctl --user enable xmonad.target

This should drop a symlink at default.target, setting that as the target to be run when you execute systemd --user.

At this point, if you were to quit X and run that command, it should successfully start X and load XMonad (or whatever WM you’re using). The next thing we’ll do is write service files for all the stuff you currently have in xinitrc.

Here are some of the ones I’m using as examples:

~/.config/systemd/user/wallpaper.service

[Unit]
Description=Wallpaper setter
After=xorg.target

[Service]
Type=oneshot
ExecStart=/usr/bin/feh --bg-tile %h/Pictures/wallpaper.png
Environment=DISPLAY=:0

[Install]
WantedBy=xinit.target

It appears that we can use %h to represent our home directory, but only in certain ways. The above works, but trying to use %h in the path to the xmonad binary does not. Sigh.

~/.config/systemd/user/synergys.service

[Unit]
Description=Synergy Server
After=xorg.target

[Service]
Type=forking
ExecStart=/usr/bin/synergys --debug ERROR

[Install]
WantedBy=xinit.target

~/.config/systemd/user/urxvtd.service

[Unit]
Description=Urxvt Daemon
After=xorg.target

[Service]
Type=simple
ExecStart=/usr/bin/urxvtd

[Install]
WantedBy=xinit.target

With these in place, you can enable them all. I used the following shortcut:

$ cd .config/systemd/user
$ for s in *.service; do systemctl --user enable $s; done

Now, finally, simply running systemd --user should start X, bring up all your X-related services, and start your window manager.

How you do this going forward is up to you, but in my case I simply updated the last line in my ~/.zshenv:
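
Presumably by swapping startx for the new entry point, something like (same sketch as before):

[[ $TTY == /dev/tty1 ]] && (( UID )) && [[ -z $DISPLAY ]] && exec systemd --user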

Benefits

Arguably, we’ve complected what used to be a pretty simple system – so what do we gain?

Well, my OCD loves seeing a nice, isolated process group of everything X-related:

[screenshot: process group]

We can also now use a consistent interface for working with services at both the system and X level. Included in this interface is the super useful status:

[screenshot: service status]

Finally, we get the benefits of journalctl – running it as our non-root user will show us all the messages from our X-related processes.

There are probably a number of additional systemd features that we can now leverage for our graphical environment, but I’m still in the process of figuring that all out.

References

Many thanks to gtmanfred for putting this idea in my head and going through the hard work of figuring it out and writing it up. The information has also been added to the Arch wiki.

20 Jan 2013, tagged with arch

Raury

tl;dr: it’s just like aurget but more stable and faster

Developing aurget was getting cumbersome. Whenever something went wrong, it was very difficult to track down or figure out. The lack of standard tools for things like URI escaping or JSON parsing was getting a bit annoying, and the structure of the code just annoyed me. There was also a lack of confidence when changes were made: I could only haphazardly test a handful of scenarios, so I was never sure if I’d introduced a regression.

I decided to write raury to be exactly as featureful as aurget, but different in the following ways:

  • Solid test coverage

[screenshot: raury coverage]

  • Useful debug output

[screenshot: raury debug output]

  • Clean code

[screenshot: raury code]

I think I’ve managed to hit on all of these with a happy side-effect too: it’s really fast. It takes less than a few seconds to churn through a complex tree of recursive dependencies. The same operation via aurget takes minutes.

Interested?

So anyway, if you’re interested in trying it out, I’d love for some beta testers.

Assuming you’ve got a working ruby environment (and the bundler gem), you can do the following to quickly play with raury:

$ git clone https://github.com/pbrisbin/raury && cd ./raury
$ bundle
$ bundle exec bin/raury --help

If you like it, you can easily install it for real:

$ rake install
$ raury --help

There’s also a simple script which just automates this clone-bundle-rake process for you:

$ curl https://github.com/pbrisbin/raury/raw/master/install.sh | bash

Also, tlvince was kind enough to create a PKGBUILD and even maintain an AUR package for raury. Check that out if it’s your preferred way to install.

30 Aug 2012, tagged with arch, ruby

Don’t Do That

I use Arch Linux for a number of reasons. Mainly, it’s transparent and doesn’t hold your hand. You’re given simple, powerful tools, and along with that comes the ability to shoot yourself in the foot. This extends to the community, where we can and should help those newer than ourselves to manage this responsibility intelligently, but without holding their hand or taking any of that power away through obfuscation.

The Problem

There’s always been the potential for a particular command to break your system:

$ pacman -Sy foo

What this command literally means is, “update the local index of available packages and install the package foo”. Misguided users assume this is the correct way to ensure you receive the latest version of foo available. While it’s true that it is one way, it’s not the correct way. Moreover, using this command can easily break your system.

Let’s walk through an example to illustrate the problem:

  • A user has firefox 3.0 and gimp 6.1 installed, both of which depend on libpng>=1.0
  • An update comes out for libpng to version 1.2
  • Arch maintainers release libpng 1.2, firefox 3.0-2, and gimp 6.1-2 (the latter two now depending on libpng>=1.2)
  • An update comes out for firefox to version 3.1
  • Arch maintainers release firefox 3.1 which depends on libpng>=1.2
  • Our user (incorrectly) says pacman -Sy firefox hoping to get this new version
  • pacman (correctly) installs firefox 3.1 and libpng 1.2

There’s nothing here to tell pacman to update gimp, since libpng 1.2 is >= 1.0, which meets gimp’s dependency constraints.

However, our user’s gimp binary is actually linked directly to /usr/lib/libpng.so.1.0 and is now broken. Sadface.

In this example, the outcome is a broken gimp. However, if the shared dependency were instead something like readline and the broken package something like bash, you might be left with an unusable system requiring a rescue disk or reinstall. This of course led to a lot of unhappy users.

The Solution

There are a few options to avoid this, the two most viable being:

  1. Instruct users to not execute -Sy foo unless they know how foo and its dependencies will affect their system.
  2. Instruct Arch maintainers to use a hard constraint in these cases, so firefox and gimp should depend on libpng==1.0

If we went with option two, the user, upon running pacman -Sy firefox would’ve gotten an error for unresolvable dependencies stating that gimp requires libpng==1.0.

Going this route might seem attractive (especially to users), but it causes a number of repository management headaches dealing with exact version constraints on so many heavily depended-upon packages. The potential headache to the maintainers far outweighed the level of effort required to educate users on the pitfalls of -Sy.

So, option one it is.

The Wrong Advice

It was decided (using the term loosely) to tell anyone and everyone to always, no matter what, when they want to install foo, execute:

$ pacman -Syu foo

I argue that this advice is so opposite to The Arch Way that it’s downright evil.

What this command really says is, “update your system and install foo”. Sure, that’s no big deal: it’s not harmful, may or may not be quick, and ensures you don’t run into the trouble we’ve just described.

Coincidentally, this is also the correct way to ensure you get the absolute latest version of foo – if and only if foo had a new version released since your last system update.

My issue is not that it doesn’t work. My issue is not that it’s incorrect advice to those with that specific intention. My issue is that, nine times out of ten, that’s not the user’s intention. They simply want to install foo.

You’re now telling someone to run a command that does more than what they intended. It does more than is required. It’s often given out as advice with no explanation and no caveats. “Oh, you want to install something? -Syu foo is how you do that…” No, it really isn’t.

You’ve now wasted network resources, computational resources, and the user’s time, and you’ve taught them that the command to install foo is -Syu foo. Simplicity and transparency aside, that’s just lying.

If you’ve been given this advice, I’m sorry. You’ve been done a disservice.

The Correct Advice

To update your system:

$ pacman -Syu

To install foo:

$ pacman -S foo

To update your system and install foo:

$ pacman -Syu foo

Simple, transparent, no breakage. That’s the advice you give out.

Sure, by all means, if your true intention is to upgrade the system and install foo, you should absolutely -Syu foo but then, and only then, does that command make any sense.

</rant>

24 Mar 2012, tagged with arch

Pacprune

A fairly long time ago, there was a thread on the Arch forums about clearing your pacman cache.

Pacman’s normal -Sc will remove all versions of any packages that are no longer installed and -Scc will clear that plus old versions of packages that are still installed.

The poster wanted a way to run -Scc but also keep the last 1 or 2 versions back from installed. There was no support for this in pacman directly, so a bit of a bash-off ensued.

I wrote a pretty crappy script which I posted there; it lay around in my ~/.bin collecting dust for a while, but I recently rewrote it. I’m pretty proud of the result for its effectiveness and succinctness, so I think it deserves a little discussion.

The methodology of the two versions is the same, but this new version leans heavily on good ol’ unix shell-scripting principles to provide the exact same functionality in way less code, memory, and time.

Approach

The first approach discussed on the thread was to parse filenames for package and version, then do a little sort-grepping to figure out which versions to keep and which versions to discard. This method is fast, but provably inaccurate if a package name contains numbers on the end.

I went a different way.

For each package, pull the .PKGINFO file out of the archive, parse the pkgname and pkgversion variables out of it, then do the same sort-grepping to figure out what to discard.

My first implementation of this algorithm was really bad. I’d parse and write pkgname|pkgversion to a file in /tmp. Then I’d grep unique package names using -m to return at most the number of versions you want to keep (of each package) and store that in another file. I’d then walk those files and rm the packages.

Ick.

Needs moar unix

The aforementioned ugliness, plus some configuration and error checking weighed in at 162 lines of code, used two files, and was dirt slow. I decided to re-attack the problem with a unix mindset.

In a nutshell: write small units that do one thing and communicate via simple text streams.

The first unit this script needs is a parser. It should accept a list of packages (relative file paths) on stdin, parse and output two space-separated values on stdout: name and path. The path will be needed by the next unit down the line, so we need to pass it through.
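
A sketch of that unit (the function name is illustrative; it assumes bsdtar and awk, consistent with the notes that follow):

parse() {
  local path

  while read -r path; do
    # -O extracts to stdout, -q stops after the first match
    bsdtar -qxOf "$path" .PKGINFO |
      awk -v path="$path" '$1 == "pkgname" { print $3, path }'
  done
}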

11 lines and damn fast. Thank god for bsdtar’s -q option. It tells the extraction to stop after finding the file I’ve requested. Since the .PKGINFO file is usually the first thing in the archive, we barely do any work to get the values.

It’s also done completely in RAM by piping tar directly to awk.

Step two would be the actual pruning. Accept that same space-separated list on stdin and for any package versions beyond the ones we want to keep (the 3 most recent), echo the full path to the package file on stdout.

Just watch the list go by and count the number of packages seen for each name. I’m ensuring that the list comes in reverse sorted already, so once we’ve seen the number of versions we want to keep, any same-named packages after that should be printed.
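
In awk terms, that counting might look like this (a sketch; versions_to_keep is the variable mentioned at the end of the post):

prune() {
  awk -v keep="$versions_to_keep" '
    # input is reverse version-sorted "name path" pairs; once keep
    # versions of a name have gone by, print every later path
    ++seen[$1] > keep { print $2 }
  '
}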

So simple.

This function can get away with being simple because it doesn’t take into account what’s actually installed on your system. It just keeps the most recent 3 versions of each unique package in the cache. Therefore, to do a full clean, run pacman -Sc first to remove all versions of uninstalled software. Then use this script to clear all but installed plus the two previous versions. This assumes the highest version in the cache is the installed version which may or may not be true in all cases.

All that’s left is to make that reverse sorted list and pipe it through.
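
Something like (the cache location and the version-aware sort are assumptions):

find /var/cache/pacman/pkg -name '*.pkg.tar*' |
  sort -rV |
  parse |
  prune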

So the whole script (new version) weighs in at ~30 lines (with whitespace) and I claim it is exactly as feature-rich as the first version.

I know what you’re saying: there’s no definition of the cache, no optional safe-list vs actual-removing behavior, there’s no removing at all!

Well, you’re just not thinking unix.

You’re free to get as fancy as you’d like too…
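
For example, with the script saved as pacprune (xargs -r skips the rm when nothing qualifies):

$ pacprune                # just list what would be removed
$ pacprune | xargs -r rm  # actually remove it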

And the only configuration is setting the versions_to_keep variable at the top of the script.

The script can be found in my scripts repo.

11 Jun 2011, tagged with arch

Downgrade

Downgrade eases downgrading packages in Arch Linux.

Examples

Downgrade some packages, checking both local cache and the A.R.M. (Arch Rollback Machine):

$ downgrade foo bar

Downgrade a package, looking in only local cache:

$ NOARM=1 downgrade foo

Downgrade a package, looking in only the A.R.M.:

$ NOCACHE=1 downgrade foo

Downgrade a package, looking only in local cache, and favoring su over sudo even when sudo is available:

$ NOARM=1 NOSUDO=1 downgrade foo

Installation

Install the downgrade package from the AUR.

For more details, reporting Issues, etc, see the GitHub project.

05 Dec 2009, tagged with arch

Aurget

A simple pacman-like interface to the AUR written in bash.

About

Aurget is designed to make the AUR convenient and speed up tedious actions. The user can decide to search, download, build, and/or install packages consistently through a configuration file or dynamically by passing arguments on the command-line.

Sourcing user-created PKGBUILDs comes with risks. Please, if you’re worried about this, be sure to view all PKGBUILDs before proceeding.

You have been warned.

Installation

Study the Arch wiki, then manually build and install aurget.

Follow development via GitHub.

Usage

See aurget --help, man 1 aurget, and man 5 aurgetrc.
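
Typical invocations mirror pacman’s -S family (flags from memory here; --help is authoritative):

$ aurget -Ss foo   # search the AUR for foo
$ aurget -S foo    # download, build, and install foo
$ aurget -Su       # process available upgrades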

Reporting Bugs

If you’ve found a bug or want to request a feature, please let me know via GitHub Issues. If you can implement what you’re looking for, please open a Pull Request, preferably including tests.

Aurget does not and will not search or install from the official repositories. This is by design and will not be implemented even if you offer a patch. Use another AUR Helper if this is what you’re looking for.

05 Dec 2009, tagged with arch