Automated Unit Testing in Haskell

Hspec is a BDD library for writing RSpec-style tests in Haskell. In this post, I’m going to describe setting up a Haskell project using this test framework. What we’ll end up with is a series of tests which can be run individually (at the module level), or all together (as part of packaging). Then I’ll briefly mention Guard (a Ruby tool) and how we can use that to automatically run relevant tests as we change code.

Project Layout

For any of this to work, our implementation and test modules must follow a particular layout:

├── src
│   └── Text
│       ├── Liquid
│       │   ├── Context.hs
│       │   ├── Parse.hs
│       │   └── Render.hs
│       └── Liquid.hs
└── test
    ├── SpecHelper.hs
    ├── Spec.hs
    └── Text
        └── Liquid
            ├── ParseSpec.hs
            └── RenderSpec.hs

Notice that for each implementation module (under ./src) there is a corresponding spec file at the same relative path (under ./test) with a consistent, conventional name (<ModuleName>Spec.hs). For this post, I’m going to outline the first few steps of building the Parse module of the above source tree which happens to be my liquid library, a Haskell implementation of Shopify’s template system.

Hspec Discover

Hspec provides a useful preprocessor called hspec-discover. If your project follows the conventional layout above, you can simply create test/Spec.hs containing only the following line:


{-# OPTIONS_GHC -F -pgmF hspec-discover #-}

And when that file is executed, all of your specs will be found and run together as a single suite.


I like to create a central helper module which gets imported into all specs. It simply re-exports our test framework and implementation code:


module SpecHelper
    ( module Test.Hspec
    , module Text.Liquid.Parse
    ) where

import Test.Hspec
import Text.Liquid.Parse

This file might not seem worth it now, but as you add more modules, it becomes useful quickly.

Baby’s First Spec


module Text.Liquid.ParseSpec where

import SpecHelper

spec :: Spec
spec = do
    describe "Text.Liquid.Parse" $ do
        context "Simple text" $ do
            it "parses exactly as-is" $ do
                let content = "Some simple text"

                parseTemplate content `shouldBe` Right [TString content]

main :: IO ()
main = hspec spec

With this first spec, I’ve already made some assumptions and design decisions.

The API into our module will be a single parseTemplate function which returns an Either type (commonly used to represent success or failure). The Right value (conventionally used for success) will be a list of template parts. One such part can be constructed with the TString function and is used to represent literal text with no interpolation or logic. This is the simplest template part possible and is therefore a good place to start.

The spec function is what will be found by hspec-discover and rolled up into a project-wide test. I’ve also added a main function which just runs said spec. This allows me to easily run the spec in isolation, which you should do now:

$ runhaskell -isrc -itest test/Text/Liquid/ParseSpec.hs

The first error you should see is an inability to find Test.Hspec. Go ahead and install it:

$ cabal install hspec

You should then get a similar error for Text.Liquid.Parse, then some more about functions and types that are not yet defined. Let’s go ahead and implement just enough to get past that:


module Text.Liquid.Parse where

type Template = [TPart]

data TPart = TString String
    deriving (Eq, Show)

parseTemplate :: String -> Either String Template
parseTemplate = undefined

The test should run now and give you a nice red failure due to the attempted evaluation of undefined.

Since implementing Parse is not the purpose of this post, I won’t be moving forward in that direction. Instead, I’m going to show you how to set this library up as a package which can be cabal installed and/or cabal tested by end-users.

For now, you can pass the test easily like so:


parseTemplate :: String -> Either String Template
parseTemplate str = Right [TString str]

For TDD purists, this is actually the correct thing to do here: write the simplest implementation to pass the test (even if you “know” it’s not going to last), then write another failing test to force you to implement a little more. I don’t typically subscribe to that level of TDD purity, but I can see the appeal.


We’ve already got Spec.hs which, when executed, will run all our specs together:

$ runhaskell -isrc -itest test/Spec.hs

We just need to wire that into the Cabal packaging system:


name:          liquid
version:       0.0.0
license:       MIT
copyright:     (c) 2013 Pat Brisbin
author:        Pat Brisbin <pbrisbin@gmail.com>
maintainer:    Pat Brisbin <pbrisbin@gmail.com>
build-type:    Simple
cabal-version: >= 1.8

library
  hs-source-dirs: src

  exposed-modules: Text.Liquid.Parse

  build-depends: base == 4.*

test-suite spec
  type: exitcode-stdio-1.0

  hs-source-dirs: test

  main-is: Spec.hs

  build-depends: base  == 4.*
               , hspec >= 1.3
               , liquid

With this in place, testing our package is simple:

$ cabal configure --enable-tests
$ cabal build
$ cabal test
Building liquid-0.0.0...
Preprocessing library liquid-0.0.0...
In-place registering liquid-0.0.0...
Preprocessing test suite 'spec' for liquid-0.0.0...
Linking dist/build/spec/spec ...
Running 1 test suites...
Test suite spec: RUNNING...
Test suite spec: PASS
Test suite logged to: dist/test/liquid-0.0.0-spec.log
1 of 1 test suites (1 of 1 test cases) passed.


Another thing I like to set up is the automatic running of relevant specs as I change code. To do this, we can use a tool from Ruby-land called Guard. Guard is a great example of a simple tool doing one thing well. All it does is watch files and execute actions based on rules defined in a Guardfile. Through plugins and extensions, there are a number of pre-built solutions for all sorts of common needs: restarting servers, regenerating ctags, or running tests.

We’re going to use guard-shell which is a simple extension allowing for running shell commands and spawning notifications.

$ gem install guard-shell

Next, create a Guardfile:


# Runs the command and prints a notification
def execute(cmd)
  if system(cmd)
    n 'Build succeeded', 'hspec', :success
  else
    n 'Build failed', 'hspec', :failed
  end
end

def run_all_tests
  execute %{
    cabal configure --enable-tests &&
    cabal build && cabal test
  }
end

def run_tests(mod)
  specfile = "test/#{mod}Spec.hs"

  if File.exists?(specfile)
    files = [specfile]
  else
    files = Dir['test/**/*.hs']
  end

  execute "ghc -isrc -itest -e main #{files.join(' ')}"
end

guard :shell do
  watch(%r{.*\.cabal$})          { run_all_tests }
  watch(%r{test/SpecHelper.hs$}) { run_all_tests }
  watch(%r{src/(.+)\.hs$})       { |m| run_tests(m[1]) }
  watch(%r{test/(.+)Spec\.hs$})  { |m| run_tests(m[1]) }
end

Much of this Guardfile comes from this blog post by Michael Xavier. His version also includes cabal sandbox support, so be sure to check it out if that interests you.

If you like to bundle all of your Ruby gems (and you probably should), that’s easily done; see my main liquid repo, as that’s how I do things there.

In one terminal, start guard:

$ guard

Finally, simulate an edit in your module and watch the test automatically run:

$ touch src/Text/Liquid/Parse.hs

And there you go, fully automated unit testing in Haskell.

01 Dec 2013, tagged with testing, haskell, cabal, hunit, ruby, guard


Once I start my new job at thoughtbot, I’ll be working on a variety of ruby and rails projects at the same time. This, combined with the current 2.0 transition, means I once again need a ruby version management tool.

Chruby is the third (by my count) “new hotness” when it comes to these python-inspired virtualenv clones. First there was rvm which has a ton of features, then came rbenv which aimed to be simpler, finally we have chruby which is billed as the simplest of them all. So far, I’m a big fan.

For detailed instructions and usage please see the README files in the previously linked project pages. This post might gloss over some details and focuses more on my opinion of the tools than their usage. For Arch users, there are AUR PKGBUILDs for all of these.


Last time I required this feature, rbenv was just coming onto the scene, so I went with rvm. It is by far the most complex of these tools, and that complexity is itself a downside. Overwriting cd (to allow auto-switching) is a concern for some people. The fact that it both installs and manages versions strikes others as a breach of the Unix philosophy.

One feature commonly touted as the reason to use rvm is its gemsets which isolate sets of gems into groups and thus prevent gem-hell. Now that bundler is ubiquitous, this problem no longer exists.

Aside from rvm, the other major choices are rbenv and chruby. Looking at the rbenv project page, it still seems to do a number of things I don’t need or want. I’m also not a fan of it introducing a bunch of shims.

At its core, all such a manager needs to do is modify some environment variables so that the correct binary and set of libraries are loaded. Coincidentally, that’s about all chruby does.


Paraphrasing from the project page, changing rubies via chruby will:

  1. Update $PATH so the correct ruby and any gem executables are directly available.
  2. Set a proper $GEM_HOME and $GEM_PATH so any gem related commands and tools (including bundler) will Just Work.
  3. Set some other ruby-related environment variables.
  4. Call hash -r for you (required when mucking with $PATH).

No shims, no crazy options or features bloating up the script which itself weighs in at less than 90 lines of very simple and readable shell.

If you choose, chruby can also do automatic switching. To opt in, you just have to source an additional (and equally simple) script. Once enabled, you will automatically change rubies when you enter a directory containing a .ruby-version file. This is done cleanly via a pre-prompt command and not by hijacking cd.

When auto-switching is enabled, be sure to define a “default” by dropping a .ruby-version in $HOME too.

Here are the entries in my ~/.zshenv (the same should work in bash):

if [[ -e /usr/share/chruby ]]; then
  source /usr/share/chruby/chruby.sh
  source /usr/share/chruby/auto.sh
  chruby $(cat ~/.ruby-version)
fi

The AUR PKGBUILD installs into /usr/share while the chruby README prescribes /usr/local/share. This may be a packaging bug that will eventually be fixed so be sure to verify and use the appropriate paths for your install.

So far, I’m a huge fan. The tool does what it advertises exactly and simply. The small feature-set is also exactly and only the features I need. As a bonus, setting the GEM_ variables is something I always seemed to need to do manually anyway, so it’s nice to no longer need that.


Since chruby is just a “changer” you do need to install rubies via some other tool. Ruby-build makes that super easy:

$ ruby-build 1.9.3-p392 ~/.rubies/ruby-1.9.3-p392
$ ruby-build 2.0.0-p0 ~/.rubies/ruby-2.0.0-p0

Chruby will look for rubies installed in one of two places by default: /opt/rubies/ or ~/.rubies/. I prefer the latter.

Since ruby-build is actually a sub-tool of rbenv, it’s quite spartan. You’re required to type the desired version exactly (as read from ruby-build --definitions) and you need to give the full installation path, even though it could be determined easily by convention. rbenv install owns those niceties, apparently.

After this post was written, the author of chruby actually released a ruby-build competitor called ruby-install. Its feature-set is very much the same and it allows fuzzy commands like ruby-install ruby 1.9. I very much recommend it.

One last bit…

Some time ago, while still using both oh-my-zsh and rvm, I noticed that most of the prompts used yet-another rvm feature to read the currently active ruby and insert it into the prompt.

It seems a bit odd for a version manager to provide this feature at all. There are also a great many if statements out there doing something different for rvm or rbenv. Will they all add a clause for chruby now?

Well, in a bout of insane cleverness, I found the following non-obvious way to get the currently active ruby version:

$ ruby --version

If you’d like to use this in your prompt, feel free to bogart from mine.

07 Apr 2013, tagged with ruby

Easy Change

for each desired change, make the change easy (warning: this may be hard), then make the easy change

— Kent Beck, September 25, 2012

Here’s some code from our main application helper. It provides a small method for redirecting the user based on a goto parameter. It uses two helpers itself to append google analytics parameters to the url before redirecting.

Originally, it was uncommented. I’ve added a few here to highlight what goes through my head when first reading it.

def get_ga_params
  # Nice use of Explaining Temporary Variable to avoid a Magic Number
  # situation, but this list of keys seems generally useful and would
  # rarely change. Why not a real constant?
  analytics_keys = %w(utm_campaign utm_source utm_medium)

  # minor, but return is unneeded
  return params.reject { |k,v| !analytics_keys.include? k }
end

def append_ga_params(url)
  # warning: shadowing outer variable, url
  returning(url) do |url|
    # Why treat p like an array when it's a hash, should use k,v here.
    # Also, I prefer map vs collect.
    query_string = get_ga_params.collect {|p| "#{p[0]}=#{p[1]}"}

    # Should use string interpolation. Also, I'd prefer "if present?"
    # to "unless blank?". Finally, I'd place the check ahead of building
    # the query string both to be (slightly) more efficient and to get
    # it higher up in the method so I don't need to think about it while
    # deciphering the string building here.
    url << "?" << query_string.join("&") unless query_string.blank?
  end
end

def redirect_to_latest_or_goto
  goto = params[:goto]

  unless goto =~ %r[^/]
    goto = latest_events_path
  end

  redirect_to(append_ga_params(goto))
end


So the methods are somewhat smelly, but not enough to warrant refactoring when you don’t need to make a change in this area.

Fortunately, Business has decided that they would like to append the BuyId parameter to the redirect url in much the same way the analytics parameters currently are.

Our first instinct might be to just add the param inside the append_ga_params method. This would be incorrect; since BuyId is not a google analytics parameter, the name of the method would be misleading.

Alternatively, we could just plop the param onto the end of the url directly in redirect_to_latest_or_goto. Adding some string building into that method might be considered mixing layers of abstraction. It also does nothing to explain what we’re doing the way append_ga_params does.

Make the Change Easy

It’d be really nice if we had a generic append_params helper available to add our BuyId. This is basically what append_ga_params is doing, except that it’s over specified.

Let’s tease that logic out into a separate method and call it from our original. At the same time we can clean up some of the smells we noticed earlier.

def append_params(url, new_params)
  # Quick guard statement
  return url if new_params.empty?

  # Treats the hash like a hash
  query_str = new_params.map { |k,v| "#{k}=#{v}" }.join('&')

  # This switch could be done a number of ways, I'm not yet sure which I
  # prefer.
  if url.include?('?')
    url << '&' << query_str
  else
    url << '?' << query_str
  end

  url
end

# Promoted to a constant
ANALYTICS_KEYS = %w[ utm_campaign utm_source utm_medium ]

def get_ga_params
  # Now one line
  params.reject { |k,v| !ANALYTICS_KEYS.include? k }
end

def append_ga_params(url)
  # Now one line
  append_params(url, get_ga_params)
end

def redirect_to_latest_or_goto
  goto = params[:goto]

  unless goto =~ %r[^/]
    goto = latest_events_path
  end

  redirect_to(append_ga_params(goto))
end


Notice that we keep the original methods’ interfaces exactly as they were. This should allow any existing tests to pass without modification and give us confidence that we’ve gotten it right.

In my case, append_ga_params was not marked private. If it were I’d probably do all this a bit differently. For now, we decide to play it safe and leave the class interface alone.

With tests passing, we commit our code and shift gears from Refactor to Feature.

Make the Easy Change

+ BUY_ID = 123

  def redirect_to_latest_or_goto
    goto = params[:goto]

    unless goto =~ %r[^/]
      goto = latest_events_path
    end

-   redirect_to(append_ga_params(goto))
+   url = append_ga_params(goto)
+   redirect_to(append_params(url, 'BuyId' => BUY_ID))
  end


This was definitely a simple example, but it’s nice to see how this two-step process works on something realistic. It’s not difficult to extrapolate this up to something larger.

28 Nov 2012, tagged with ruby, refactoring


My latest bash-to-ruby rewrite was dvdcopy to dvd2iso. I changed the name both to disambiguate, and because my primary use case was no longer to duplicate disc to disc, but to just generate the ISO. It’s very simple to just burn that ISO back to disc if I feel like it.

The benefits of the new script are:

  1. Less and simpler code
  2. Coded at a higher level
  3. Easier to use

There are far fewer options than dvdcopy had: you can choose the device and the output file, that’s it. The script is easy enough to configure further by simply editing the source.

usage: dvd2iso [options]
    -i, --input DEVICE
    -o, --output FILE

The output file can have a %s in it which will be replaced by the downcased, underscored version of the DVD name.
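That substitution is simple enough to sketch in a couple of lines of Ruby (iso_name is my own hypothetical name for it, not code from dvd2iso):

```ruby
# Hypothetical sketch of the %s substitution described above: downcase
# the DVD name, turn spaces into underscores, and format it into the
# output template.
def iso_name(template, dvd_name)
  format(template, dvd_name.downcase.tr(' ', '_'))
end

iso_name('rips/%s.iso', 'SOME DVD') # => "rips/some_dvd.iso"
```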

When the script runs, all subcommands have their output redirected to a log file which you’re told to consult if there’s some error. Instead, what you get as output is actually every command the script runs in copy-pastable format.

This makes it very easy to rerun any or all of the script’s actions if you want to tweak or debug something.
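The print-then-log behavior can be sketched with a hypothetical run helper (the names here are assumptions; the real script’s code may differ):

```ruby
# Hypothetical sketch of the idea described above, not dvd2iso's actual
# code: echo the copy-pastable command, then run it with all of its
# output appended to a log file.
def run(cmd, log = 'dvd2iso.log')
  puts cmd

  system("{ #{cmd}; } >>#{log} 2>&1") or
    abort "Something went wrong, consult #{log}"
end
```

Because every command is printed exactly as it runs, rerunning or tweaking any single step is a copy-paste away.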

Actual script output:

$ dvd2iso -o 'rips/%s.iso'
# Ripping SOME_DVD
#   Title 1, 29 Chapters
mkdir -p ./dvd2iso_tmp
mencoder \
  dvd://1 \
  -dvd-device '/dev/sr0' \
  -mc 0 \
  -of mpeg \
  -mpegopts format=dvd:tsaf \
  -oac copy \
  -ovc lavc \
  -vf scale=720:480,pullup,softskip,harddup \
  -lavcopts vcodec=mpeg2video:vrc_buf_size=1835:vrc_maxrate=9800:vbitrate=5824:keyint=18:vstrict=0:aspect=16/9:ilme:ildct \
  -ofps 24000/1001 \
  -o './dvd2iso_tmp/movie.mpeg'
dvdauthor \
  -t \
  -c 00:00:00.000,00:05:54.533,00:10:20.433,00:16:17.533,00:19:56.799,00:24:11.266,00:31:35.866,00:36:28.600,00:37:53.700,00:41:07.067,00:43:30.367,00:47:22.067,00:50:41.700,00:52:27.966,00:55:32.433,00:57:28.100,01:01:05.300,01:03:35.234,01:05:46.634,01:09:14.700,01:11:13.133,01:11:59.299,01:16:17.266,01:19:36.100,01:21:59.533,01:23:34.467,01:26:57.100,01:28:13.767,01:33:49.667 \
  -o './dvd2iso_tmp/MOVIE' \
  './dvd2iso_tmp/movie.mpeg'
dvdauthor \
  -T \
  -o './dvd2iso_tmp/MOVIE'
mkisofs \
  -dvd-video \
  -o './dvd2iso_tmp/movie.iso' \
  './dvd2iso_tmp/MOVIE'
mv ./dvd2iso_tmp/movie.iso rips/some_dvd.iso
rm -r ./dvd2iso_tmp
# Success!

Anyway, you can find it in my bin. Enjoy.

15 Nov 2012, tagged with ruby, scripts

Extension by Module

Ruby’s open classes are great for adding behavior to existing objects. Though they’re a language feature, there to be used, I’d argue that in the majority of cases where open classes are used, they weren’t the most appropriate tool.

First of all, you may be setting yourself (and other developers) up for confusion. Not knowing where methods come from or why a method behaves oddly can be a problem. In the majority of cases, I find you’ve got an instance of some object, and you just want to add behavior to it.

In these cases, opening up a class and adding behavior to all instances (past, present, and future) is a bit overkill. It’d be more appropriate to add behavior to just that instance.

Open classes

If your intention is to make Strings greppable, opening up the String class might look appealing:

I’m aware that Enumerable already provides this functionality. It’s just an example.

class String
  def grep(regex)
    lines = self.split("\n")
    lines.select { |s| s =~ regex }
  end
end

title_info = `vobcopy -I '#{device}' 2>&1`

@title    = title_info.grep(/Most chapters/).first.split(' ')[5]
@dvd_name = title_info.grep(/Name of the dvd/).first.split(' ')[5]

Works great.


The same thing can be accomplished with a module.

module Grep
  def grep(regex)
    lines = self.split("\n")
    lines.select { |s| s =~ regex }
  end
end

title_info = `vobcopy -I '#{device}' 2>&1`
title_info.extend(Grep)

@title    = title_info.grep(/Most chapters/).first.split(' ')[5]
@dvd_name = title_info.grep(/Name of the dvd/).first.split(' ')[5]

The main benefits here are that a) the addition of behavior is made explicit and b) you only change the one instance you’re working with rather than affecting every String throughout the entire system.

One interesting implication of learning this is realizing that using extend inside a class definition, though conceptually different, is technically identical to the above.

class MyClass
  extend MyModule
end


De-sugared, this is actually MyClass.extend(MyModule), which is analogous to my_string.extend(Grep). The former adds methods from MyModule onto MyClass just as the latter adds Grep’s methods onto my_string.
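A tiny, self-contained demonstration of the same idea (the Shout module here is a made-up example, not from any real library):

```ruby
# Extending a single instance: only this one String gains the method.
module Shout
  def shout
    upcase + '!'
  end
end

s = 'hello'
s.extend(Shout)

s.shout                     # => "HELLO!"
'other'.respond_to?(:shout) # => false, other Strings are unaffected
```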

At its core, ruby is a very simple language. It takes core Object-oriented concepts (like “extending” some object) and abides by them at each layer of the abstraction stack.

This allows a little bit of knowledge about the internals of the language to pay substantial dividends in actual implementations.

27 Oct 2012, tagged with ruby, metaprogramming

Fake S3

Fake S3 is a gem designed to run a small server on localhost that will correctly handle AWS/S3 GETs and PUTs (among others) while using the local filesystem as the storage backend. This has a number of benefits for anyone working on an application with AWS integration.

Probably the biggest benefit is offline development. There’s also a lower configuration burden: there are no longer special bucket names or keys to manage; the development environment just expects localhost to work. Finally, there’s a very real cost savings. Originally, we paid for two buckets per developer just to support the sample data for dev and test. Now we need none.


Install the gem:

$ gem install fakes3


Then start a server, telling it where to store files and what port to listen on:

$ fakes3 -r /mnt/s3 -p 4567


Finally, point your application’s S3 configuration at localhost instead of the real AWS. For example:

  bucket_name: development

  # doesn't matter
  access_key_id:     123
  secret_access_key: abc

  # does matter
  server: localhost
  port:   4567

Assuming you’ve got some rake tasks which load up S3 with sample data, just go ahead and run them, you’ll find files created directly under /mnt/s3/development and the requests your local instance makes will just return that data.

All with no cloud-access needed.

Init script

In case you didn’t notice, the fakes3 command is a bit verbose, which may be OK, but if you’re like me and just want to run this as a background service, the following (naive) init script should do the job:


if [[ ! -d /mnt/s3 ]]; then
  mkdir /mnt/s3 || return 1
  chown -R vagrant:vagrant /mnt/s3 # make it user-writable
fi

case $1 in
  start)
    fakes3 -r /mnt/s3 -p 4567 &>/dev/null &
    echo $! > /var/run/fakes3.pid
    ;;
  stop)
    kill -9 `cat /var/run/fakes3.pid`
    ;;
  restart)
    $0 stop
    sleep 3
    $0 start
    ;;
esac

23 Oct 2012, tagged with web, work, ruby, aws

Be Assertive with Sane Exception Handling

I’m a big fan of Avdi Grimm’s thoughts about writing confident ruby. I think it’s important to not clutter things with a bunch of nil-checks or exception handling. When you’re focused in at the method level you should trust that your objects are valid and the methods you’re calling behave.

In this post I use the term “assertive” to mean much the same thing as Avdi’s “confident”. I think I like this better because the definition of assertive contains both “confident” and “forceful”.

Your code needs to be both confident that the objects it deals with behave and forceful that the objects dealing with it behave.

One way to give yourself this freedom is to lean on sane exception handling at a higher level of abstraction. This is a concept I’m just now formulating myself, but it’s really starting to pay dividends.

I believe you should have a single place at each major layer of abstraction where exceptions are handled. You should also err on the side of less error handling and let any exceptions propagate up as high in the abstraction stack as possible until you absolutely have to do something about them to prevent a poor user experience.

If you rescue exceptions within the internals of your application, you’re hiding valuable information about why and how something failed. Over handling also leads to cluttered, complicated code.

Commandline Apps

One example where I use this approach is in commandline applications. I force myself to have a single rescue statement in the entire app:

module ImperativeShell
  class Main
    class << self
      def run!(argv)

        # program logic using all kinds of internal classes which may
        # raise exceptions or call library code that may itself raise
        # exceptions

      rescue => ex
        if debug?
          # collect what you need as a developer
        end

        exit 1
      end
    end
  end
end

I’ll regularly git grep rescue and if I can’t 100% justify anything I see, I take it out.

Rails Controllers

Another place where I’ve really seen benefit to this approach is in a set of controllers I had to write for a new API layer we’re building at work.

I knew that no matter what happened within our controller logic, I wanted to give the client a valid JSON response with a proper status code. Supporting this sort of catch all behavior was pretty easy with a rescue_from in the base controller:

module Api
  class Base < ActionController::Base

    rescue_from Exception do |ex|
      logger.error("api error: #{ex}")

      error = {
        :code        => 500,
        :name        => "#{ex.class}",
        :description => "Something's gone horribly wrong!"
      }

      # a simple render :json helper
      respond_error(500, error)
    end
  end
end


With this one simple catch-all, I can now be sure that no matter what happens in my controllers, things will behave gracefully.

I realize now that this isn’t a design that only makes sense in this particular scenario, there are tons of places through all the apps I work on where, when I’m deep in some class or method, I don’t want to deal with and/or hide the exceptions that might be thrown by the various libraries and methods I’m calling.

Exception handling is a feature and it should be treated as such. This means it needs to be well thought out and the logic needs to exist in the right place and do the right thing. Moreover, that should not be my concern when I’m working on some small send_forgot_password_email method on the User model. If the mail client throws an exception, I’m not the guy that should be handling that. Whoever called me probably wants to know about it. And if you follow the line of callers up the stack there should be someone somewhere who can turn that into a pretty message to tell the user who originally asked to have their password reset that something’s gone wrong. If any one of these callers gets greedy, the whole thing turns into a kludge.

def send_forgot_password_email
  if mailer = UserMailer.new(self)
    unless mailer.deliver_forgot_password_email rescue nil
      return false
    end
  end
end


This is obfuscated code. Don’t use nil or false as a valid return value to hide what really happened and signify generic failure. You’re destroying valuable information about said failure.

Whether the mailer raises an exception or not is that object’s concern. How that exception is conveyed to the end-user is your caller’s concern. When you have a chance, at any layer of abstraction, to reduce your own number of concerns, do it.

def send_forgot_password_email
  UserMailer.new(self).deliver_forgot_password_email # assertive!
end

OK, end rant. Onto more uses of this pattern…

Cleaner Routes

With a similar rescue for ActionController::UnknownAction we can implicitly handle the case of an API client calling a method we don’t support and return the proper 501 - NotImplemented.

rescue_from ActionController::UnknownAction do |ex|
  error = {
    :code        => 501,
    :name        => "NotImplemented",
    :description => "#{ex}"
  }

  respond_error(501, error)
end

We even get a free description from the error. Printing the exception shows something like “No action for foo. Available: bar, baz.” Which is exactly the behavior the HTTP spec dictates. These are the things rails does well. Follow the conventions, use the out of the box features to write less code yourself.

With this in place, you can take a routes file like this:

namespace(:api) do |api|
  api.resource :user, :conditions => [:index, :show] do |user|
    user.resource :cart, :conditions => [:create, :update]
  end
end

And strip out the conditions:

namespace(:api) do |api|
  api.resource :user do |user|
    user.resource :cart
  end
end

This might seem like a small matter of aesthetics (and even if it was, I still like it), but it’s also more agile. We know any undefined methods will return the proper response. As requirements inevitably change, we only have to make the single change of adding or removing methods; we don’t then also have to go update the routes file. Win.

Cleaner Actions

We can take this further still. How many times have you come across an action like this:

def show
  if params[:id].blank?
    # return some specific error response
  end

  unless m = Model.find_by_id(params[:id])
    # return some other specific error response
  end

  # actual logic
end


By explicitly adding ActiveRecord::RecordNotFound to our list of rescues we can remove all that cruft.

rescue_from ActiveRecord::RecordNotFound do |ex|
  error = {
    :code        => 404,
    :name        => "NotFound",
    :description => "#{ex}"
  }

  respond_error(404, error)
end

Again, we get a free description. We can now clean up the action to something much more assertive and simple like:

def show
  m = Model.find(params[:id])

  # actual logic
end


And both invalid states lead to the correct error descriptions of “Can’t find Model without an ID” or “Can’t find Model with ID=42” respectively. Thank you again, Mr Rails.

Cleaner Everything

Once you get used to this method of exception handling and assertive code, it’s easy to take this even further and define your own custom exception-rescue_from-raise scenarios for when your controllers get into various (exceptional) states where they can’t and shouldn’t continue.

No need to and return or return render or wrap everything in if/unless etc. When shit goes wrong, just raise the appropriate exception. All you have to do is trust (or dictate) that the level of abstraction(s) above you are written to do the Right Thing, which is a useful quality even if you’re not following this pattern.
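As a sketch of what such a custom scenario might look like (ApiError is a hypothetical example class; respond_error is the helper from earlier):

```ruby
# A made-up exception carrying its own HTTP status code; not part of
# Rails or the post's actual app.
class ApiError < StandardError
  attr_reader :code

  def initialize(code, message)
    @code = code
    super(message)
  end
end

# In the base controller, one rescue_from would handle them all:
#
#   rescue_from ApiError do |ex|
#     respond_error(ex.code, :code => ex.code, :description => "#{ex}")
#   end
#
# And any action that can't continue simply raises:
#
#   raise ApiError.new(403, "Forbidden") unless current_user.admin?
```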

19 Sep 2012, tagged with ruby, rails


tl;dr: it’s just like aurget but more stable and faster

Developing aurget was getting cumbersome. Whenever something went wrong, it was very difficult to track down or figure out. The lack of standard tools for things like uri escaping or json parsing was getting a bit annoying, and the structure of the code just bothered me. There was also a lack of confidence when changes were made: I could only haphazardly test a handful of scenarios, so I was never sure if I’d introduced a regression.

I decided to write raury to be exactly as featureful as aurget, but different in the following ways:

  • Solid test coverage

  • Useful debug output

  • Clean code

I think I’ve managed to hit on all of these with a happy side-effect too: it’s really fast. It takes less than a few seconds to churn through a complex tree of recursive dependencies. The same operation via aurget takes minutes.


So anyway, if you’re interested in trying it out, I’d love for some beta testers.

Assuming you’ve got a working ruby environment (and the bundler gem), you can do the following to quickly play with raury:

$ git clone https://github.com/pbrisbin/raury && cd ./raury
$ bundle
$ bundle exec bin/raury --help

If you like it, you can easily install it for real:

$ rake install
$ raury --help

There’s also a simple script which just automates this clone-bundle-rake process for you:

$ curl https://github.com/pbrisbin/raury/raw/master/install.sh | bash

Also, tlvince was kind enough to create a PKGBUILD and even maintain an AUR package for raury. Check that out if it’s your preferred way to install.

30 Aug 2012, tagged with arch, aur, ruby

Console TDD with String IO

If you write console-based applications in ruby, chances are you’re going to want to get some test coverage on that eventually. StringIO is a great class to use when you want to assert that your application outputs the correct stuff to the screen.

We can modify the global variable $stdout to be an instance of StringIO for the duration of our tests. Any method that outputs text on stdout (like puts and print) will be sending their text to this object. After we’re done, we can ask it what it’s got and make assertions on it.

Here’s an rspec example:

require 'stringio'

describe StringIO do
  before do
    $stdout = StringIO.new
  end

  after do
    # always clean up after yourself!
    $stdout = STDOUT
  end

  it "should help capture standard output" do
    puts "foo"
    puts "bar"

    $stdout.string.should == "foo\nbar\n"
  end
end
Not a bad bit of TDD, if I do say so myself!

Similar tricks could be used with $stderr or $stdin to get solid end-to-end test coverage on a wide variety of console-based applications.
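For example, the same swap works on the input side: preload a StringIO with canned text and assign it to $stdin. Here ask_name is a hypothetical method under test, not from the original post:

```ruby
require 'stringio'

# Hypothetical method under test: prompts on stdout, reads from stdin.
def ask_name
  print "Name? "
  $stdin.gets.chomp
end

$stdin  = StringIO.new("Pat\n")
$stdout = StringIO.new

name   = ask_name
output = $stdout.string

# always clean up after yourself!
$stdin  = STDIN
$stdout = STDOUT

name   # => "Pat"
output # => "Name? "
```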

25 Aug 2012, tagged with ruby, stringio, tdd

Maybe In Ruby

Sometimes it’s fun to do something completely useless.

Recently, I wrote a post about how awesome the Maybe type is in Haskell. In the post, I talked about Functors and Monads and how Maybe can help us understand them.

Shortly thereafter, I was bored on the train one day and decided to implement Maybe and its functor instance in ruby.

In this post I’ll be relying on the fact that obj.(args) is translated to obj.call(args) in newer rubies. I find it makes the example read better.


So we need an object that can represent “Just something” or “Nothing”. Ruby already has the concept of nil, so we’ll piggy back on that and just wrap it all in some sugar.

class Maybe
  def initialize(value)
    @value = value
  end

  def nothing?
    @value.nil?
  end

  def just?
    !nothing?
  end

  def value
    if just?
      @value
    else
      raise "Can't get something from nothing."
    end
  end

  # we'll need this to prove the laws
  def ==(other)
    if just? && other.just?
      return value == other.value
    end

    nothing? && other.nothing?
  end
end

def Just(x)
  raise "Can't make something from nothing." if x.nil?

  Maybe.new(x)
end

Nothing = Maybe.new(nil)
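A quick sanity check that the sugar behaves as expected (the class is repeated here in condensed form so the snippet stands alone):

```ruby
# Condensed repeat of the Maybe class above:
class Maybe
  def initialize(value); @value = value; end
  def nothing?; @value.nil?; end
  def just?; !nothing?; end

  def value
    raise "Can't get something from nothing." if nothing?
    @value
  end

  def ==(other)
    return value == other.value if just? && other.just?
    nothing? && other.nothing?
  end
end

def Just(x); Maybe.new(x); end
Nothing = Maybe.new(nil)

Just(5).just?       # => true
Just(5).value       # => 5
Nothing.nothing?    # => true
Just(5) == Just(5)  # => true
Just(5) == Nothing  # => false
```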


We can’t map functions to methods because methods need targets; they can’t stand on their own. As an example, take id (which we’ll be using later on). One might be tempted to define it like this:

def id(x)
  x
end

This won’t work for our purposes since that method (defined on the global object Object) can’t be passed around, partially applied or composed.

It’s more convenient to do it like this:

# ruby 1.9
id = ->(x) { x }

# ruby 1.8
id = lambda { |x| x }

Now you’ve got an isolated, callable id object which you can pass around.

Partial Application

Functions need to be partially applied. That means you can give a function a few of the arguments it expects and get back another function which you can then pass around and eventually call with the additional arguments given at that later point:

class Partial
  def initialize(f, *args)
    @f, @args = f, args
  end

  def call(*args)
    new_args = @args + args

    @f.(*new_args)
  end
end

def partial(f, *args)
  Partial.new(f, *args)
end

max = ->(x,y) { x >= y ? x : y }

max.(4, 5) # => 5

max5 = partial(max, 5)

max5.(6) # => 6
max5.(4) # => 5

[4, 5, 6].map { |i| max5.(i) } # => [5, 5, 6]


Two functions, when composed together, return a new function which represents the first being applied to the result of the second being applied to the argument given.

class Compose
  def initialize(f, g)
    @f, @g = f, g
  end

  def call(x)
    @f.( @g.( x ) )
  end
end

def compose(f, g)
  Compose.new(f, g)
end

get_len = ->(s) { s.length   }
add_bar = ->(s) { s + "_bar" }

get_len_with_bar = compose(get_len, add_bar)

get_len_with_bar.("foo") # => 7

This is all so much easier in Haskell…


Now that we can define functions, partially apply them and compose them together, we can finally prove the Functor laws for our new Maybe class.

Let’s start by defining fmap, just as it is in Haskell:

# fmap f (Just x) = Just (f x)
# fmap _ Nothing  = Nothing
fmap = ->(f, x) do
  if x.just?
    Just(f.(x.value))
  else
    Nothing
  end
end

Strictly speaking, fmap’s behavior is type-dependent. So a real implementation (for some definition of “real”) would probably make a method on Object which needs to be overridden by any classes that are proper Functors. We won’t worry about that here…
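For the curious, that method-on-Object approach might look something like this sketch (hypothetical, not the post’s code):

```ruby
# fmap as an Object method that proper Functors override.
class Object
  def fmap(&f)
    raise NotImplementedError, "#{self.class} is not a Functor"
  end
end

class Array
  def fmap(&f)
    map(&f)
  end
end

[1, 2, 3].fmap { |i| i * 2 } # => [2, 4, 6]
```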

First law, the identity operation must behave the same when it’s fmapped.

id = ->(x) { x }

fmap_id = partial(fmap, id)

# fmap id = id
fmap_id.(Nothing)     == id.(Nothing)     # => true
fmap_id.(Just("foo")) == id.(Just("foo")) # => true

So far so good.

Second law, fmapping a composed function is no different than composing the result of each function fmapped separately.

f = ->(s) { s + "_bar" }
g = ->(s) { s.length   }

f_g = compose(f, g)

fmap_f_g = partial(fmap, f_g)

fmap_f = partial(fmap, f)
fmap_g = partial(fmap, g)

fmap_f_fmap_g = compose(fmap_f, fmap_g)

# fmap (f . g) == fmap f . fmap g
fmap_f_g.(Nothing)     == fmap_f_fmap_g.(Nothing)    # => true
fmap_f_g.(Just("foo")) == fmap_f_fmap_g.(Just("foo")) # => true

As suspected, our new Ruby-Maybe is a proper Functor.


Is our class a Monad?

# >>=
f = ->(ma, f) do
  if ma.just?
    f.(ma.value)
  else
    Nothing
  end
end

# >>
f = ->(ma, mb) do
  if ma.just?
    mb
  else
    Nothing
  end
end

# return
f = ->(x) do
  Just(x)
end

# fail
f = -> do
  Nothing
end
Proving the laws is left as an exercise to the reader…

01 May 2012, tagged with ruby, haskell

Beware Never Expectations

Mocha expectations are incredibly useful for ruby unit testing. You can stub out all kinds of functionality you depend on, specify exactly what values those dependencies return, and validate that the object under test behaves exactly as you want it to, right down to the methods it does or doesn’t call.

Unfortunately, I’ve bumped up against one glaring case where this can get you into trouble. To make matters worse, the symptom of this situation is that your tests just always pass.

Expects never

Take a simple class like this:

class Foo
  def a_method
    another_method

  rescue => ex
    # errors are swallowed and logged, by design
    logger.error(ex.message)
  end

  # ...
end

Say we want to update it so that another_method is only called if some condition is met.

Let’s play hardcore TDD here; we’ll write and run the test first:

class FooTest < Test::Unit::TestCase
  def test_a_method
    foo = Foo.new

    # condition met
    foo.stubs(:some_condition?).returns(true)

    # should call it
    foo.expects(:another_method)
    foo.a_method

    # condition not met
    foo.stubs(:some_condition?).returns(false)

    # should not call it
    foo.expects(:another_method).never
    foo.a_method
  end
end
Simple, easy to follow – you should absolutely fail with “Unexpected invocation” on that second call due to the never expectation you’ve set. There’s no reason to run this test now right? You know it’s going to fail, right?

You run the test anyway, and it… Passes. Um, wat?

I know what you might be saying here, it’s as simple as a single postfix unless some_condition?. So why am I insisting on figuring this out, wasting time just to see this test fail before I implement?

Well, in actuality I didn’t do things in this order. I wrote the implementation, then ran the test, saw it pass and moved on. It was only later that I regressed, broke the implementation and didn’t find out for a good while because the test never started failing.

Luckily, it hadn’t gone to production, but this scenario makes a strong case for writing and running your tests before your implementations – it’s the only chance you have to ensure your test actually covers what it should.


Let me save you the frustration of debugging this. What’s happening here is that when the method gets (incorrectly) called, Mocha raises an ExpectationError which is (by design) promptly rescued and logged.

I’d personally like to see Mocha not use this approach; rather, count the number of calls and compare that number against what was expected later, outside of your (possibly rescued) logic. This is how not-called-enough is implemented; why not let called-too-much be handled the same way?

There are a couple of ways we can work around this limitation though. One approach could be to re-raise the error when testing:

def a_method
  # ...

rescue => ex
  raise ex if Rails.env.test?

  # ...
end

That’s only moderately smelly and might suit you in most cases. In my case, I couldn’t do this because swallowing all errors was by-design and (of course) backed up with test coverage, so those would start failing if I re-raise in that environment.

That and I hate modifying implementation code specifically to support some testing-related concern.

Another option might be to specifically handle the Mocha exception:

def a_method
  # ...

rescue Mocha::ExpectationError => ex
  raise ex
rescue => ex
  # ...
end

That exception class is not in scope when you’re running in production, so that wouldn’t be fun. And I’d be very against requiring the Mocha gem in non-test environments.

Rewrite never

Anyway, here’s the solution we ended up with: redefine the method to increment a class-level counter, then assert that it was never called by checking that counter afterwards.

class FooTest < Test::Unit::TestCase
  def test_a_method
    foo = Foo.new

    assert_not_called(foo, :another_method) do
      foo.a_method
    end
  end

  # Note: not thread-safe
  def assert_not_called(obj, method, &block)
    # set a class level counter
    @@counter = 0

    # redefine the method so, if called, it increments that counter
    obj.instance_eval %{
      def #{method}(*args)
        #{self.class}.instance_eval "@@counter += 1"
      end
    }

    # run your code
    yield

    # see if it was ever called
    assert_equal 0, @@counter, "#{obj}.#{method}: unexpected invocation."
  end
end

Now, do yourself a favor and run this test before you write the implementation. It’s the only way to be sure the test works and regressions will be caught down the line.

08 Apr 2012, tagged with mocha, rails, ruby

Escalate Your Scripts

Anyone who knows me knows I love the shell. I got my “start” in bash and still have a plethora of scripts lying around doing all sorts of useful and fun things for me. Recently, however, I tackled a task that I had attempted many times in shell script always to be met with frustration. How did I finally figure it out? I made it a rake task and did it in ruby.

Had you asked me last month what the best tool for this job was, 99 times out of 100 I would’ve said “shell script”. But guess what: I couldn’t do it – it just never worked out. Now, after having written quite a nice little Rakefile, I can say confidently that I wish I had thought to do this sooner – and I hope I’ll think to do it again.

I want to write about this exercise mainly because I found the process to be quite enjoyable. When I needed to do imperative flow control, call system commands, and move things about the file system, I felt no resistance. More importantly, I could use all of the higher-level features to keep the code clear and clean.

And this is not just praise for ruby (though it does a good job), I’m more recommending that when presented with a task that makes sense as a shell script – think for a second if it might not be possible to do in a higher-level language, you might be surprised.

The Problem

I’ve got a repo (as a lot of you probably do) that contains my main dotfiles. It’s a collection of files that are usually scattered throughout my home directory which I’ve centralized into one folder and placed under version control. The normal approach with this is to symlink these files from the central location out into the proper places under $HOME.

I wanted to automate this process. I wanted to be able to setup a new box by cloning this repo and running a single script. After that script completes, I want as much of my environment as is generally applicable to be fully configured.

The challenges here were that not all of the files in the repo made sense on every machine, some required parent directories to exist and, of course, I had to be careful not to clobber anything already present.

Nothing about this is insurmountable; the (albeit self-imposed) challenge is to do it as simply and maintainably as possible.


The interesting thing about this script is what parts are higher level and what parts are not. So first, here are all of the higher-level bits with the scriptier parts left out:

require 'fileutils'

module Dotfiles
  def self.each(&block)
    [ # ... the hardcoded list of dotfile paths ...
    ].each do |file|
      yield Dotfile.new(file)
    end
  end

  class Dotfile
    include FileUtils

    attr_reader :dotfile
    attr_accessor :source, :target

    def initialize(dotfile)
      @dotfile = dotfile
      @source  = File.join(pwd, dotfile)
      @target  = File.join(ENV['HOME'], dotfile)
    end

    def install!
      # ...
    end
  end
end

desc "updates all submodules"
task :submodules do
  # ...
end

desc "installs all dotfiles into the proper places"
task :install => [:submodules] do
  # ...
end

task :default => :install

This shows the pattern I most often follow when scripting in ruby (which is very different than programming in ruby): one top-level module to hold any script-wide logic or constants, as well as classes to represent the data you’re working with.

With an overall module and a clean API of classes and methods, you provide yourself a useful set of commands above and beyond the flow control and backtick-interpolation you would normally lean on.

You’ll also notice, in that each method, something I’m calling a Parallel Good Decision. I decided to hardcode the list of dotfile paths relative to the repo. This solved a number of problems that were leading to very smelly code. I could’ve used git ls-files or a normal glob-and-blacklist approach, but simply hardcoding this list allows finer control over what files are linked and if they are treated as files or directories.

Had I made this decision in isolation, it might have been enough for me to get that shell script approach working – but I didn’t. For some reason, only when cleaning up everything else and approaching the problem from a (slightly) higher level did I see that a simple list of relative file paths made the most sense here.

Script It Out

Now that the skeleton-slash-library code is in place, we can fill in the gaps:

require 'fileutils'

module Dotfiles
  def self.each(&block)
    # ...
  end

  class Dotfile
    # ...

    def install!
      puts "--> installing #{dotfile} as #{target}..."

      if File.exists?(target)
        if File.symlink?(target)
          rm target, :verbose => true
        else
          mv target, "#{target}.backup", :verbose => true
        end
      end

      ln_s source, target, :verbose => true
    end
  end
end

desc "updates all submodules"
task :submodules do
  unless system('git submodule update --init --recursive')
    raise 'error initializing submodules'
  end
end

desc "installs all dotfiles into the proper places"
task :install => [:submodules] do
  Dotfiles.each { |dotfile| dotfile.install! }

  # special case: .vimrc's actual source lives under .vim
  vimrc = Dotfiles::Dotfile.new('.vimrc')
  vimrc.source = File.join(ENV['HOME'], '.vim', 'vimrc')
  vimrc.install!
end

task :default => :install

The stuff that’s easy is easy, the stuff that’s hard is easier and overall, the code is very clean and maintainable.

Oh, and I guess it’s nice that it works.

24 Mar 2012, tagged with bash, ruby

Anonymous Classes In Ruby

Often times, I find myself wanting something anonymous. This occurs quite frequently in code when you need to define, pass or call some functionality which is usually very short and only useful in this moment. Many languages provide anonymous functions (usually called lambdas) for this sort of thing: haskell has \x y -> x + y and ruby has lambda {|x,y| x + y}, Proc.new and the new ->(x,y) syntax which I’m actually not very fond of.

Sometimes, in ruby, I find myself wanting an anonymous class for much the same reasons. At first, this seemed like a silly thing to do, so I didn’t expect it to be clean or easy – but in fact, it is. Ruby itself uses anonymous classes for all sorts of things, and the syntax we’ll use to do it is almost comically obvious.


Sometimes if you’re writing a test for a module, you need to include or extend it into something to accurately test it. Here’s one approach to doing that:

# assume MyModule is defined as a module, which you want to test

class ModuleTest < Test::Unit::TestCase
  def test_the_thing
    klass = MyClass.new

    # assert something about klass now that it's included your module
  end

  class MyClass
    include MyModule

    # ...
  end
end


This is fairly contrived, but I think we all agree that sometimes you need a new class to test something (like modules). Putting in some private subclass for the purposes of testing seems fairly appropriate, albeit pretty smelly.

Let’s see how an anonymous class can help:

class ModuleTest < Test::Unit::TestCase
  def test_the_thing
    klass = Class.new do
      include MyModule

      # ...
    end

    # assert something about klass
  end
end

Not only is this a bit shorter, but I’d say it’s clearer too now that the object under test is made more prominent.

Rake tasks

I like to write rake tasks to do useful things. Sometimes one of those tasks wants to move files around. FileUtils is great for this, and it’s best used when mixed into a class.

I won’t bore you with the non-anonymous version, so here’s the one using Class.new, hopefully you can imagine it with more boilerplate:

require 'fileutils'

desc "do the damn thing"
task :run do
  Class.new do
    include FileUtils

    def run!
      mv this, that

      cp here, there

      rm the_thing
    end
  end.new.run!
end
So short!

This really speaks to ruby’s flexibility when it comes to “everything is an object” and hopefully illustrates that if you understand the benefits of anonymous functions, why not start thinking about how to use anonymous classes too?
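To make the idea concrete, here’s one more runnable flavor of it (a toy example, not from the original post): an anonymous class mixing in Comparable, instantiated and used without ever being named.

```ruby
# Build a one-off "point" class on the fly; only the variable
# holding it has a name.
point = Class.new do
  include Comparable

  attr_reader :x

  def initialize(x)
    @x = x
  end

  def <=>(other)
    x <=> other.x
  end
end

a = point.new(1)
b = point.new(2)

a < b        # => true
[b, a].min.x # => 1
```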

24 Mar 2012, tagged with ruby

Implicit Scope

No one can deny that rails likes to do things for you. The term “auto-magically” comes to mind. This can be a blessing and a curse.

For the most part, rails tries to give you “outs” – a few hoops here and there that, if jumped through, will let you do things in different or more manual ways. Sometimes though, it doesn’t.

Find In Batches

One of the many ORM helpers provided by rails is find_in_batches. It will repeatedly query the database with a limit and offset, handing you chunks of records to work through in sequence. Perfect for processing a very large result set in constant memory.

Order.find_in_batches(:batch_size => 10) do |orders|
  orders.length # => 10

  orders.each do |order|
    # yay order!
  end
end

The problem is that any conditions you add to find_in_batches are inherited by any and all sql performed within its block. This is called “implicit scope” and there’s no way around it.

Why is this an issue? I’m glad you asked, here’s a real life example:

# SELECT * from orders
# WHERE orders.status = 'pending'
# LIMIT 0, 10;
# adjusting LIMIT each time round
Order.find_in_batches(:batch_size => 10,
                      :conditions => ['status = ?', 'pending']) do |orders|

  orders.each do |order|
    # UPDATE orders SET orders.status = 'timing_out'
    # WHERE orders.id     = ?
    #   AND orders.status = 'pending'; <-- oh-hey implicit scope
    order.update_attribute(:status, 'timing_out')

    # some long-running logic to actually "time out" the order...

    # UPDATE orders SET orders.status = 'timed_out'
    # WHERE orders.id     = ?
    #   AND orders.status = 'pending';
    order.update_attribute(:status, 'timed_out')
  end
end

Do you see the problem? The second update fails because it can’t find the order due to the implicit scope. The first update was only successful due to coincidence.


I would love to find a simple remove_implicit_scope macro that can get around this issue, but it’s just not there.

I even went so far as to put the update logic in a Proc or lambda hoping to bring in a binding without the implicit scope – no joy.

I had to resort to simply not using find_in_batches.

At the time, I just rewrote that piece of the code to use a while true loop. Thinking about it now, I realize I could’ve factored it out into my own find_in_batches; also, I could put it in a module so you can extend it in your model to have the better (IMO) behavior…

module FindNoScope

  def find_in_batches(options)
    limit = options.delete(:batch_size)
    options.merge!(:limit => limit)

    offset = 0

    while true
      chunk = all(options.merge(:offset => offset))

      break if chunk.empty?

      yield chunk

      offset += limit
    end
  end
end

class Order < ActiveRecord::Base
  extend FindNoScope

  # ...
end

Note that the above was written blind, is completely untested, and will likely not work
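That said, the loop logic itself can be spot-checked without ActiveRecord by faking all over an in-memory array. FakeModel here is a hypothetical stand-in; note that offset must advance inside the while loop:

```ruby
module FindNoScope
  def find_in_batches(options)
    limit = options.delete(:batch_size)
    options.merge!(:limit => limit)

    offset = 0

    while true
      chunk = all(options.merge(:offset => offset))

      break if chunk.empty?

      yield chunk

      offset += limit
    end
  end
end

# Hypothetical model whose `all` pages through an in-memory array.
class FakeModel
  RECORDS = (1..25).to_a

  extend FindNoScope

  def self.all(options)
    RECORDS[options[:offset], options[:limit]] || []
  end
end

batches = []
FakeModel.find_in_batches(:batch_size => 10) { |chunk| batches << chunk }

batches.map(&:length) # => [10, 10, 5]
```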

28 Oct 2011, tagged with ruby, rails, work

Ruby Eval

Ruby’s instance_eval and class_eval are awesome tricks of the language that can really cut down on redundant code or let you do truly dynamic things that you’d have never thought possible.

There’s one piece of confusion around these methods that each book I’ve read goes about explaining in a slightly different way. None of them really clicked for me, so why not write my own?

The two entirely accurate but seemingly paradoxical statements are this:

Use class_eval to define instance methods

Use instance_eval to define class methods

The reason for the backwards-ness is often explained something like this:

x.class_eval treats x as a Class, so any methods you create will be instance methods.

x.instance_eval treats x as an instance, so any methods you create will be class methods.

Well that’s clear as mud…

My take

Here’s how I think about it:

Any methods you define inside of x.instance_eval will be as if you had defined them on the instance x.

Any methods you define inside of x.class_eval will be as if you had written them in the class x.

Examples should help…


Here’s an example of class_eval

class MyClass
  def my_method
    "foo"
  end
end

MyClass.class_eval do
  def my_other_method
    "bar"
  end
end

c = MyClass.new
c.my_other_method
=> "bar"

This is exactly as if you had done the following:

class MyClass
  def my_method
    "foo"
  end

  # oh... the files are /inside/ the computer!
  def my_other_method
    "bar"
  end
end

c = MyClass.new
c.my_other_method
=> "bar"

So we used class_eval to define an instance method. Just like the book said.

Funny thing is, you can easily use class_eval to define class methods too.

class MyClass
end

MyClass.class_eval do
  def self.foo
    "foo"
  end
end

MyClass.foo
=> "foo"

So I think that whole mindset is incorrect. It’s about the context your code is evaluated in, not what you’re intending that matters.


Similarly, here’s how I think when I’m writing something with instance_eval:

c = MyClass.new

# notice we act *on* an instance
c.instance_eval do
  def my_other_other_method
    "baz"
  end
end

c.my_other_other_method
=> "baz"

# we've written that method *on* c, so it only exists for that
# *instance*...
d = MyClass.new
d.my_other_other_method
=> Error...

This code is identical to

c = MyClass.new

# definition on c
def c.my_other_other_method
  "baz"
end

c.my_other_other_method
=> "baz"

In the second form, it’s clearer that the method only exists on that specific instance.

One other way to look at it is this:

Methods defined with class_eval will be available to every instance of that class (making them instance methods).

Methods defined with instance_eval will only be available to that specific instance; when the instance in question happens to be a class, those are exactly what we call “class methods”.

Anyway, hope this helps…

25 Oct 2011, tagged with ruby

Test Driven Development

With my recent job shift, I’ve found myself in a much more sophisticated environment than I’m used to with respect to Software Engineering.

At my last position, there wasn’t much existing work in the X++ realm; we were breaking new ground, and no one cared about elegance – if you got the thing working, more power to you.

Here, it’s slightly different.

People here are working in a sane, documented, open-source world; and they’re good. Everyone is acutely aware of what’s good design and what’s not. There’s a focus on elegant code, industry standards, solid OOP principles, and most importantly, we practice Test Driven Development.

I’m completely new to this method for development, and I gotta say, it’s quite nice.

Now, I’m not going to say that this is the be-all-end-all of development styles (I’m a functional, strictly-typed, compiler-checked code guy at heart), but I do find it quite interesting – and effective.

So why not do a write-up on it?

Test Framework

The prerequisite for doing anything in TDD is a good test framework. Luckily, ruby is pretty strong in this area. The way it works is the following:

You subclass Test::Unit::TestCase and define methods that start with test_, in which you execute system logic and make assertions about certain results; then you run that class.

Ruby looks for those methods named as test_whatever and runs them “as tests”. Running a method as a test means that errors and failures (any of your assert methods returning false) will be logged and displayed at the end as part of the “test report”.
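A minimal sketch of that convention (assuming the test-unit library is available): executing this file discovers and runs both methods and prints the report.

```ruby
require 'test/unit'

# Any method whose name starts with test_ is found and run
# automatically; failed assertions are collected into the report.
class ArithmeticTest < Test::Unit::TestCase
  def test_addition
    assert_equal 4, 2 + 2
  end

  def test_comparison
    assert 3 < 4, "expected 3 to be less than 4"
  end
end
```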

All of these test classes can be run automatically by a build-bot and (depending on your test coverage) you get good visibility into what’s working and what’s not.

This is super convenient and empowering in its own right. In a dynamic language like ruby, tests are the only way you have any level of confidence that your most recent code change doesn’t blow up in production.

So now that you’ve got this ability to write and run tests against your code base, here’s a wacky idea, write the tests first.

Test Driven

It’s amazing what this approach does to the design process.

I’ve always been the type that just starts coding. I’m completely comfortable throwing out 6 hours worth of code and starting over. I know my “first draft” isn’t going to be right (though it will be useful). I whole-heartedly believe in refactorings, etc. But most importantly, I need to code to sketch things out. It’s how I’ve always worked.

TDD is sort of the same thing. You do a “rough sketch” of the functionality you’ll add simply by writing tests that enforce that functionality.

You think of this opaque object – a black box. You don’t know how it does what it does, but you’re trying to test it doing it.

This automatically gives you an end-user perspective. You now focus solely on the interface, the input and the output.

This is a wise position to design from.

You also tend to design small self-contained pieces of functionality. Methods that don’t care about state, return the same output for a given input, and generally do one simple thing. Of course, you do this because these are the easiest kind of methods to test.

So, out of sheer laziness, you design a cohesive, easy to use, and completely simple interface, an API.

Now you just have to “plumb it up”. Hack until the tests pass, and you’re done. That might be an over-simplification, but it’s not off by much…

Come to think of it, this is exactly the type of design Haskell favors. With gratuitous use of undefined, the super-high-level logic of a Haskell program can be written out with named functions to “do the heavy lifting”. If you make these functions simple enough and give them descriptive enough names, they practically write themselves.

So that’s TDD (at least my take on it). So far, I like it.

02 Oct 2011, tagged with linux, ruby, work, tdd