pbrisbindotcom

Deleting Git Tags with Style

Deleting Git tags that have already been pushed to your remote is something I have to google literally every time I do it; why the invocation is so arcane, I don’t know. Finally, I decided to automate it with a custom sub-command:

~/.local/bin/git-delete-tag

#!/bin/sh
for tag; do
  git tag -d "$tag"
  git push origin :refs/tags/"$tag"
done

With this script present on $PATH, I can just invoke git delete-tag TAG [TAG ...]. This is great, but I soon noticed that typing git dele<tab> wouldn’t complete this command (or any custom sub-commands, for that matter). After a little digging in the _git completion file, I found the relevant zstyle needed to get this working:

.zshrc

zstyle ':completion:*:*:hub:*' user-commands ${${(M)${(k)commands}:#git-*}/git-/}

Since I’m actually invoking hub, a git wrapper with added functionality for interacting with GitHub, I had to use :hub: in place of :git:, which is what the documentation shows.

I also wanted git delete-tag <tab> to complete with the current tags for the repository. Again, the extension points in the Zsh tab-completion system shine, and it only took a little _git- completion function to make it happen:

.zshrc

_git-delete-tag() { compadd "$@" $(git tag) }

Hopefully this short post will come in useful for Git and Zsh users who, like myself, can never remember how to delete Git tags. As always, you can find the described configuration “in the wild” by way of my dotfiles repo. These items will be within the scripts or zsh tags.

24 Jun 2016, tagged with git, zsh

tee-io Lessons Learned

A while back, I launched a side project called tee-io. It’s sort of like a live pastebin. You use its API to create a command and then send it buffered output, usually a line at a time. Creating the command gives you a URL where you can watch the output come in in real time. We use it at work to monitor the commands run by our bot instead of waiting for the (potentially long) command to finish and report all the output back to us at once.

[Screenshot: tee-io in action]

While working on this project, which is built with Yesod, I started to settle on some conventions for things I’ve not seen written up in the wild. I’d like to collect my thoughts here, both for myself and in case these conventions are useful to others.

Worker

One thing tee-io does that I think is common but under-served in the tutorial space is background work. In addition to the main warp-based binary, it’s often necessary to run something on a schedule and do periodic tasks. In tee-io’s case, I want to archive older command output to S3 every 10 minutes.

My approach is to define a second executable target:

executable              tee-io-worker
    if flag(library-only)
        buildable:      False

    main-is:            main-worker.hs
    hs-source-dirs:     app
    build-depends:      base
                      , tee-io

    ghc-options:        -Wall -Werror -threaded -O2 -rtsopts -with-rtsopts=-N

This is basically a copy-paste of the existing executable, and the implementation is also similar:

import Prelude (IO)
import Worker (workerMain)

main :: IO ()
main = workerMain

workerMain uses the “unsafe” handler function to run a Handler action as IO:

workerMain :: IO ()
workerMain = handler $ do
    timeout <- appCommandTimeout . appSettings <$> getYesod
    archiveCommands timeout

archiveCommands :: Second -> Handler ()
archiveCommands timeout = runDB $ -- ...

Making the heavy lifting a Handler () means I have access to logging, the database, and any other configuration present in a fully-inflated App value. It’s certainly possible to write this directly in IO, but the only real downside to Handler is that if I accidentally try to do something request or response-related, it won’t work. In my opinion, pragmatism outweighs principle in this case.

Logging

One of the major functional changes I make to a scaffolded Yesod project is around AppSettings, and specifically logging verbosity.

I like to avoid the #define DEVELOPMENT stuff as much as possible. It’s required for template-reloading and similar settings because there’s no way to give the functions that need to know those settings an IO context. For everything else, I prefer environment variables.

In keeping with that spirit, I replace the compile-time, logging-related configuration fields with a single, env-based log-level:

Settings.hs

instance FromJSON AppSettings where
    parseJSON = withObject "AppSettings" $ \o -> do
        let appStaticDir = "static"
        appDatabaseConf <- fromDatabaseUrl
            <$> o .: "database-pool-size"
            <*> o .: "database-url"
        appRoot <- o .: "approot"
        appHost <- fromString <$> o .: "host"
        appPort <- o .: "port"
        appIpFromHeader <- o .: "ip-from-header"
        appCommandTimeout <- fromIntegral
            <$> (o .: "command-timeout" :: Parser Integer)
        S3URL appS3Service appS3Bucket <- o .: "s3-url"
        appMutableStatic <- o .: "mutable-static"

        appLogLevel <- parseLogLevel <$> o .: "log-level"
        -- ^ here

        return AppSettings{..}

      where
        parseLogLevel :: Text -> LogLevel
        parseLogLevel t = case T.toLower t of
            "debug" -> LevelDebug
            "info" -> LevelInfo
            "warn" -> LevelWarn
            "error" -> LevelError
            _ -> LevelOther t

config/settings.yml

approot: "_env:APPROOT:http://localhost:3000"
command-timeout: "_env:COMMAND_TIMEOUT:300"
database-pool-size: "_env:PGPOOLSIZE:10"
database-url: "_env:DATABASE_URL:postgres://teeio:teeio@localhost:5432/teeio"
host: "_env:HOST:*4"
ip-from-header: "_env:IP_FROM_HEADER:false"
log-level: "_env:LOG_LEVEL:info"
mutable-static: "_env:MUTABLE_STATIC:false"
port: "_env:PORT:3000"
s3-url: "_env:S3_URL:https://s3.amazonaws.com/tee.io"

I don’t use config/test-settings.yml and prefer to inject whatever variables are appropriate for the given context directly. To make that easier, I load .env files through my load-env package in the appropriate places.
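
Here’s a rough sketch of how that wiring might look, assuming load-env’s loadEnv and loadEnvFrom together with Yesod’s loadYamlSettings (the exact call sites in tee-io may differ):

import LoadEnv (loadEnv, loadEnvFrom)
import Yesod.Default.Config2 (loadYamlSettings, useEnv)

-- Development/production: read .env (if present) so its variables take
-- precedence over the _env defaults in config/settings.yml
loadAppSettings :: IO AppSettings
loadAppSettings = do
    loadEnv
    loadYamlSettings ["config/settings.yml"] [] useEnv

-- Test suite: read .env.test instead
loadTestSettings :: IO AppSettings
loadTestSettings = do
    loadEnvFrom ".env.test"
    loadYamlSettings ["config/settings.yml"] [] useEnv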

.env (development)

COMMAND_TIMEOUT=5
LOG_LEVEL=debug
MUTABLE_STATIC=true
S3_URL=https://s3.amazonaws.com/tee.io.development

.env.test

DATABASE_URL=postgres://teeio:teeio@localhost:5432/teeio_test
LOG_LEVEL=error
S3_URL=http://localhost:4569/tee.io.test

Now I can adjust my logging verbosity in production with a simple heroku config:set, whereas before I needed a compilation and deployment to do that!

Yesod applications log in a few different ways, so there are a handful of touch-points where we need to check this setting. To make that easier, I put a centralized helper alongside the data type in Settings.hs:

allowsLevel :: AppSettings -> LogLevel -> Bool
AppSettings{..} `allowsLevel` level = level >= appLogLevel

The first place to use it is the shouldLog member of the Yesod instance:

shouldLog App{..} _source level = appSettings `allowsLevel` level

Second is the logging middleware. It’s a little tricky to get the right behavior here because, with the default scaffold, this logging always happens: it has no concept of level and doesn’t attempt to make use of shouldLog in any way.

The approach I landed on was to change the destination to (basically) /dev/null if we’re not logging at INFO or lower. That’s equivalent to these messages being tagged INFO and respecting our configured level, which seems accurate to me. The big win here is that they no longer mess up my test suite output.

makeLogWare foundation = mkRequestLogger def
    { outputFormat = if appSettings foundation `allowsLevel` LevelDebug
        then Detailed True
        else Apache apacheIpSource
    , destination = if appSettings foundation `allowsLevel` LevelInfo
        then Logger $ loggerSet $ appLogger foundation
        else Callback $ \_ -> return ()
    }

One last thing, specific to tee-io, is that I can use this setting to turn on debug logging in the AWS library I use:

logger <- AWS.newLogger (if appSettings foundation `allowsLevel` LevelDebug
    then AWS.Debug
    else AWS.Error) stdout

It’s pretty nice to set LOG_LEVEL=debug and start getting detailed logging for all AWS interactions. Kudos to amazonka for having great logging too.

REPL-Driven-Development

DevelMain.hs has quickly become my preferred way to develop Yesod applications. This file ships with the scaffold and defines a module for starting, stopping, or reloading an instance of your development server directly from the REPL:

stack repl --ghc-options="-DDEVELOPMENT -O0 -fobject-code"
λ> :l DevelMain
λ> DevelMain.update
Devel application launched: http://localhost:3000

The big win here in my opinion is that, in addition to viewing changes in your local browser, you naturally fall into a REPL-based workflow. It’s not something I was actively missing in Yesod projects, but now that I’m doing it, it feels really great.

I happen to have a nice Show instance for my settings, which I can see with handler:

λ> appSettings <$> handler getYesod
log_level=LevelDebug host=HostIPv4 port=3000 root="http://localhost:3000"
  db=[user=teeio password=teeio host=localhost port=5432 dbname=teeio]
  s3_bucket=tee.io.development command_timeout=5s

(Line breaks added for readability, here and below.)

And I can investigate or alter my local data easily with db:

λ> db $ selectFirst [] [Desc CommandCreatedAt]
Just (Entity
      { entityKey = CommandKey
          { unCommandKey = SqlBackendKey {unSqlBackendKey = 1097} }
      , entityVal = Command
          { commandToken = Token {tokenUUID = e79dae2c-020e-48d4-ac0b-6d9c6d79dbf4}
          , commandDescription = Just "That example command"
          , commandCreatedAt = 2016-02-11 14:50:19.786977 UTC
          }
      })
λ>

Finally, this makes it easy to locally test that worker process:

λ> :l Worker
λ> workerMain
16/Apr/2016:14:08:28 -0400 [Debug#SQL]
  SELECT "command"."id", ...
    FROM "command"
  LEFT OUTER JOIN "output"
     ON ("command"."id" = "output"."command")
    AND ("output"."created_at" > ?)
  WHERE (("command"."created_at" < ?)
    AND ("output"."id" IS NULL))
  ; [ PersistUTCTime 2016-04-16 18:08:23.903484 UTC
    , PersistUTCTime 2016-04-16 18:08:23.903484 UTC
    ]
16/Apr/2016:14:08:28 -0400 [Info] archive_commands count=1
  @(main:Worker /home/patrick/code/pbrisbin/tee-io/src/Worker.hs:37:7)
[Client Request] {
  host      = s3.amazonaws.com:443
  secure    = True
  method    = PUT
  target    = Nothing
  timeout   = Just 70000000
  redirects = 0
  path      = /tee.io.development/b9a74a98-0b16-4a23-94f1-5df0a01667d0
  query     = 
  headers   = ...
  body      = ...
}
[Client Response] {
  status  = 200 OK
  headers = ...
}
16/Apr/2016:14:08:28 -0400 [Debug#SQL] SELECT "id", "command", ...
16/Apr/2016:14:08:28 -0400 [Debug#SQL] DELETE FROM "output" WHERE ...
16/Apr/2016:14:08:28 -0400 [Debug#SQL] DELETE FROM "command" WHERE ...
16/Apr/2016:14:08:28 -0400 [Info] archived token=b9a74a98-0b16-4a23-94f1-5df0a01667d0
  @(main:Worker /home/patrick/code/pbrisbin/tee-io/src/Worker.hs:59:7)
λ>

Since I run with LOG_LEVEL=debug in development, and that setting was picked up by the REPL, we can see all the S3 and database interactions the job goes through.

The console was one of the features I felt was lacking when first coming to Yesod from Rails. I got used to not having it, but I’m glad to see there have been huge improvements in this area while I wasn’t paying attention.

Deployment

I’ve been watching the deployment story for Yesod and Heroku change drastically over the past few years. From compiling on a VM, to a GHC build pack, to Halcyon, the experience hasn’t exactly been smooth. Well, it seems I might have been right in the conclusion of that last blog post:

Docker […] could solve these issues in a complete way by accident.

We now have a Heroku plugin for using Docker to build a slug in a container identical to their Cedar infrastructure, then extracting and releasing it via their API.

Everything we ship at work is Docker-based, so I’m very comfortable with the concepts and machine setup required (which isn’t much), and using this release strategy for my Yesod applications has been great. Your mileage may vary though: while I do feel it’s the best approach available today, there may be some bumps and yaks for those not already familiar with Docker – especially if you’re on an unfortunate operating system, like OS X.

Thanks to the good folks at thoughtbot, who are maintaining a base image for releasing a stack-based project using this Heroku plugin, making tee-io deployable to Heroku looked like this:

% cat Procfile
web: ./tee-io

% cat app.json
{
  "name": "tee.io",
  "description": "This is required for heroku docker:release"
}

% cat docker-compose.yml
# This is required for heroku docker:release
web:
  build: .

% cat Dockerfile
FROM thoughtbot/heroku-haskell-stack:lts-5.12
MAINTAINER Pat Brisbin <pbrisbin@gmail.com>

And I just run:

heroku docker:release

And that’s it!

If you’re interested in seeing any of the code examples here in the context of the real project, check out the tee-io source on GitHub.

16 Apr 2016, tagged with haskell, yesod

Regular Expression Evaluation via Finite Automata

What follows is a literate haskell file runnable via ghci. The raw source for this page can be found here.

While reading Understanding Computation again last night, I was going back through the chapter where Tom Stuart describes deterministic and non-deterministic finite automata. These simple state machines seem like little more than a teaching tool, but he eventually uses them as the implementation for a regular expression matcher. I thought seeing this concrete use for such an abstract idea was interesting and wanted to reinforce the ideas by implementing such a system myself – with Haskell, of course.

Before we get started, we’ll just need to import some libraries:

> import Control.Monad.State
> import Data.List (foldl')
> import Data.Maybe

Patterns and NFAs

We’re going to model a subset of regular expression patterns.

> data Pattern
>     = Empty                   -- ""
>     | Literal Char            -- "a"
>     | Concat Pattern Pattern  -- "ab"
>     | Choose Pattern Pattern  -- "a|b"
>     | Repeat Pattern          -- "a*"
>     deriving Show

With this, we can build “pattern ASTs” to represent regular expressions:

ghci> let p = Choose (Literal 'a') (Repeat (Literal 'b')) -- /a|b*/

It’s easy to picture a small parser to build these out of strings, but we won’t do that as part of this post. Instead, we’ll focus on converting these patterns into Nondeterministic Finite Automata or NFAs. We can then use the NFAs to determine if the pattern matches a given string.

To explain NFAs, it’s probably easiest to explain DFAs, their deterministic counterparts, first. Then we can go on to describe how NFAs differ.

A DFA is a simple machine with states and rules. The rules describe how to move between states in response to particular input characters. Certain states are special and flagged as “accept” states. If, after reading a series of characters, the machine is left in an accept state, it’s said that the machine “accepted” that particular input.

An NFA is the same with two notable differences: First, an NFA can have rules to move it into more than one state in response to the same input character. This means the machine can be in more than one state at once. Second, there is the concept of a Free Move which means the machine can jump between certain states without reading any input.

Modeling an NFA requires a type with rules, current states, and accept states:

> type SID = Int -- State Identifier
> 
> data NFA = NFA
>     { rules         :: [Rule]
>     , currentStates :: [SID]
>     , acceptStates  :: [SID]
>     } deriving Show

A rule defines what characters tell the machine to change states and which state to move into.

> data Rule = Rule
>     { fromState  :: SID
>     , inputChar  :: Maybe Char
>     , nextStates :: [SID]
>     } deriving Show

Notice that nextStates and currentStates are lists. This is to represent the machine moving to, and remaining in, more than one state in response to a particular character. Similarly, inputChar is a Maybe value because it will be Nothing in the case of a rule representing a Free Move.

If, after processing some input, any of the machine’s current states (or any states we can reach via a free move) are in its list of “accept” states, the machine has accepted the input.

> accepts :: NFA -> [Char] -> Bool
> accepts nfa = accepted . foldl' process nfa
> 
>   where
>     accepted :: NFA -> Bool
>     accepted nfa = any (`elem` acceptStates nfa) (currentStates nfa ++ freeStates nfa)

Processing a single character means finding any followable rules for the given character and the current machine state, and following them.

> process :: NFA -> Char -> NFA
> process nfa c = case findRules c nfa of
>     -- Invalid input should cause the NFA to go into a failed state. 
>     -- We can do that easily, just remove any acceptStates.
>     [] -> nfa { acceptStates = [] }
>     rs -> nfa { currentStates = followRules rs }
> 
> findRules :: Char -> NFA -> [Rule]
> findRules c nfa = filter (ruleApplies c nfa) $ rules nfa

A rule applies if

  1. The read character is a valid input character for the rule, and
  2. That rule applies to an available state

> ruleApplies :: Char -> NFA -> Rule -> Bool
> ruleApplies c nfa r =
>     maybe False (c ==) (inputChar r) &&
>     fromState r `elem` availableStates nfa

An “available” state is one which we’re currently in, or can reach via Free Moves.

> availableStates :: NFA -> [SID]
> availableStates nfa = currentStates nfa ++ freeStates nfa

The process of finding free states (those reachable via Free Moves) gets a bit hairy. We need to start from our current state(s) and follow any Free Moves recursively. This ensures that Free Moves which lead to other Free Moves are correctly accounted for.

> freeStates :: NFA -> [SID]
> freeStates nfa = go [] (currentStates nfa)
> 
>   where
>     go acc [] = acc
>     go acc ss =
>         let ss' = filter (`notElem` acc) $ followRules $ freeMoves nfa ss
>         in go (acc ++ ss') ss'

(Many thanks go to Christopher Swenson for spotting an infinite loop here and fixing it by filtering out any states already in the accumulator.)

Free Moves from a given set of states are rules for those states which have no input character.

> freeMoves :: NFA -> [SID] -> [Rule]
> freeMoves nfa ss = filter (\r ->
>     (fromState r `elem` ss) && (isNothing $ inputChar r)) $ rules nfa

Of course, the states that result from following rules are simply the concatenation of those rules’ next states.

> followRules :: [Rule] -> [SID]
> followRules = concatMap nextStates

Now we can model an NFA and see if it accepts a string or not. You could test this in ghci by defining an NFA in state 1 with an accept state 2 and a single rule that moves the machine from 1 to 2 if the character “a” is read.

ghci> let nfa = NFA [Rule 1 (Just 'a') [2]] [1] [2]
ghci> nfa `accepts` "a"
True
ghci> nfa `accepts` "b"
False
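
We can also exercise a Free Move: here, state 1 can jump to state 2 without reading any input, and reading “a” from state 2 moves the machine into the accepting state 3.

ghci> let nfa2 = NFA [Rule 1 Nothing [2], Rule 2 (Just 'a') [3]] [1] [3]
ghci> nfa2 `accepts` "a"
True
ghci> nfa2 `accepts` ""
False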

Pretty cool.

What we need to do now is construct an NFA whose rules for moving from state to state are derived from the nature of the pattern it represents. A string matches the pattern only if the NFA we construct ends up in an accept state after reading that string.

> matches :: String -> Pattern -> Bool
> matches s = (`accepts` s) . toNFA

We’ll define toNFA later, but if you’ve loaded this file, you can play with it in ghci now:

ghci> "" `matches` Empty
True
ghci> "abc" `matches` Empty
False

And use it in an example main:

> main :: IO ()
> main = do
>     -- This AST represents the pattern /ab|cd*/:
>     let p = Choose
>             (Concat (Literal 'a') (Literal 'b'))
>             (Concat (Literal 'c') (Repeat (Literal 'd')))
> 
>     print $ "xyz" `matches` p
>     -- => False
> 
>     print $ "cddd" `matches` p
>     -- => True

Before I show toNFA, we need to talk about mutability.

A Bit About Mutable State

Since Pattern is a recursive data type, we’re going to have to recursively create and combine NFAs. For example, in a Concat pattern, we’ll need to turn both sub-patterns into NFAs then combine those in some way. In the Ruby implementation, Mr. Stuart used Object.new to ensure unique state identifiers between all the NFAs he has to create. We can’t do that in Haskell. There’s no global object able to provide some guaranteed-unique value.

What we’re going to do to get around this is conceptually simple, but appears complicated because it makes use of monads. All we’re doing is defining a list of identifiers at the beginning of our program and drawing from that list whenever we need a new identifier. Because we can’t maintain that as a variable we constantly update every time we pull an identifier out, we’ll use the State monad to mimic mutable state through our computations.

I apologize for the naming confusion here. This State type is from the Haskell library and has nothing to do with the states of our NFAs.

First, we take the parameterized State s a type, and fix the s variable as a list of (potential) identifiers:

> type SIDPool a = State [SID] a

This makes it simple to create a nextId action which requests the next identifier from the list and updates the computation’s state, removing that identifier as a future option before presenting it as the result.

> nextId :: SIDPool SID
> nextId = do
>     (x:xs) <- get
>     put xs
>     return x

This function can be called from within any other function in the SIDPool monad. Each time called, it will read the current state (via get), assign the first identifier to x and the rest of the list to xs, set the current state to that remaining list (via put) and finally return the drawn identifier to the caller.
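
For example, a quick ghci session (with initial pools of [1..] and [1..5]):

ghci> evalState nextId [1..]
1
ghci> runState (replicateM 3 nextId) [1..5]
([1,2,3],[4,5])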

Pattern ⇒ NFA

Assuming we have some function buildNFA which handles the actual conversion from Pattern to NFA but is in the SIDPool monad, we can evaluate that action, supplying an infinite list as the potential identifiers, and end up with an NFA with unique identifiers.

> toNFA :: Pattern -> NFA
> toNFA p = evalState (buildNFA p) [1..]

As mentioned, our conversion function lives in the SIDPool monad, allowing it to call nextId at will. This gives it the following type signature:

> buildNFA :: Pattern -> SIDPool NFA

Every pattern is going to need at least one state identifier, so we’ll pull that out first, then begin a case analysis on the type of pattern we’re dealing with:

> buildNFA p = do
>     s1 <- nextId
> 
>     case p of

The empty pattern results in a predictably simple machine. It has one state which is also an accept state. It has no rules. If it gets any characters, they’ll be considered invalid and put the machine into a failed state. Giving it no characters is the only way it can remain in an accept state.

>         Empty -> return $ NFA [] [s1] [s1]

Also simple is the literal character pattern. It has two states and a rule between them. It moves from the first state to the second only if it reads that character. Since the second state is the only accept state, it will only accept that character.

>         Literal c -> do
>             s2 <- nextId
> 
>             return $ NFA [Rule s1 (Just c) [s2]] [s1] [s2]

We can model a concatenated pattern by first turning each sub-pattern into its own NFA, and then connecting the accept state of the first to the start state of the second via a Free Move. This means that as the combined NFA is reading input, it will only accept that input if it moves through the first NFA’s states into what used to be its accept state, hops over to the second NFA, then moves into its accept state. Conceptually, this is exactly how a concatenated pattern should match.

Note that freeMoveTo will be shown later.

>         Concat p1 p2 -> do
>             nfa1 <- buildNFA p1
>             nfa2 <- buildNFA p2
> 
>             let freeMoves = map (freeMoveTo nfa2) $ acceptStates nfa1
> 
>             return $ NFA
>                 (rules nfa1 ++ freeMoves ++ rules nfa2)
>                 (currentStates nfa1)
>                 (acceptStates nfa2)

We can implement choice by creating a new starting state, and connecting it to both sub-patterns’ NFAs via Free Moves. Now the machine will jump into both NFAs at once, and the composed machine will accept the input if either of the paths leads to an accept state.

>         Choose p1 p2 -> do
>             s2 <- nextId
>             nfa1 <- buildNFA p1
>             nfa2 <- buildNFA p2
> 
>             let freeMoves =
>                     [ freeMoveTo nfa1 s2
>                     , freeMoveTo nfa2 s2
>                     ]
> 
>             return $ NFA
>                 (freeMoves ++ rules nfa1 ++ rules nfa2) [s2]
>                 (acceptStates nfa1 ++ acceptStates nfa2)

A repeated pattern is probably hardest to wrap your head around. We need to first convert the sub-pattern to an NFA, then we’ll connect up a new start state via a Free Move (to match 0 occurrences), then we’ll connect the accept state back to the start state (to match repetitions of the pattern).

>         Repeat p -> do
>             s2 <- nextId
>             nfa <- buildNFA p
> 
>             let initMove = freeMoveTo nfa s2
>                 freeMoves = map (freeMoveTo nfa) $ acceptStates nfa
> 
>             return $ NFA
>                 (initMove : rules nfa ++ freeMoves) [s2]
>                 (s2: acceptStates nfa)

And finally, our little helper which connects some state up to an NFA via a Free Move.

>   where
>     freeMoveTo :: NFA -> SID -> Rule
>     freeMoveTo nfa s = Rule s Nothing (currentStates nfa)

That’s It

I want to give a big thanks to Tom Stuart for writing Understanding Computation. That book has opened my eyes in so many ways. I understand why he chose Ruby as the book’s implementation language, but I find Haskell to be better-suited to these sorts of modeling tasks. Hopefully he doesn’t mind me exploring that by rewriting some of his examples.

07 Apr 2014, tagged with haskell

Applicative Functors

Every time I read Learn You a Haskell, I get something new out of it. This most recent time through, I think I’ve finally gained some insight into the Applicative type class.

I’ve been writing Haskell for some time and have developed an intuition and explanation for Monad. This is probably because monads are so prevalent in Haskell code that you can’t help but get used to them. I knew that Applicative was similar but weaker, and that it should be a superclass of Monad but, since it arrived later, it is not. I now think I have a general understanding of how Applicative is different and why it’s useful, and I’d like to bring anyone else who glossed over Applicative on the way to Monad up to speed.

The Applicative type class represents applicative functors, so it makes sense to start with a brief description of functors that are not applicative.

Values in a Box

A functor is any container-like type which offers a way to transform a normal function into one that operates on contained values.

Formally:

fmap :: Functor f    -- for any functor,
     => (  a ->   b) -- take a normal function,
     -> (f a -> f b) -- and make one that works on contained values

Some prefer to think of it like this:

fmap :: Functor f -- for any functor,
     => (a -> b)  -- take a normal function,
     -> f a       -- and a contained value,
     -> f b       -- and return the contained result of applying that 
                  -- function to that value

Because (->) is right-associative, we can reason about and use this function either way – with the former being more useful to the current discussion.

This is the first small step toward the ultimate goal shared by all three of these type classes: allow us to work with values that have some context (in this case, a container of some sort) as if that context weren’t present at all. We give a normal function to fmap and it sorts out how to deal with the container, whatever it may be.
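
For example, with Maybe and lists as the containers:

fmap (+ 1) (Just 2)    -- => Just 3
fmap (+ 1) Nothing     -- => Nothing
fmap (+ 1) [1, 2, 3]   -- => [2, 3, 4]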

Functions in a Box

To say that a functor is “applicative”, we mean that the contained value can be applied. In other words, it’s a function.

An applicative functor is any container-like type which offers a way to transform a contained function into one that can operate on contained values.

(<*>) :: Applicative f -- for any applicative functor,
      => f (a ->   b)  -- take a contained function,
      -> (f a -> f b)  -- and make one that works on contained values

Again, we could also think of it like this:

(<*>) :: Applicative f -- for any applicative functor,
      => f (a -> b)    -- take a contained function,
      -> f a           -- and a contained value,
      -> f b           -- and return a contained result

Applicative functors also have a way to take an un-contained function and put it into a container:

pure :: Applicative f -- for any applicative functor,
     =>   (a -> b)    -- take a normal function,
     -> f (a -> b)    -- and put it in a container

In actuality, the type signature is simpler: a -> f a. Since a literally means “any type”, it can certainly represent the type (a -> b) too.

pure :: Applicative f => a -> f a

Understanding this is very important for understanding the usefulness of Applicative. Even though the type signature for (<*>) starts with f (a -> b), it can also be used with functions taking any number of arguments.

Consider the following:

:: f (a -> b -> c) -> f a -> f (b -> c)

Is this (<*>) or not?

Instead of writing its signature with b, let’s use a question mark:

(<*>) :: f (a -> ?) -> f a -> f ?

Indeed it is: substitute the type (b -> c) for every ?, rather than the simple b in the actual class definition.

One In, One Out

What you just saw was a very concrete example of the benefits of how (->) works. When we say “a function of n arguments”, we’re actually lying. All functions in Haskell take exactly one argument. Multi-argument functions are really single-argument functions that return other single-argument functions that accept the remaining arguments via the same process.

Using the question mark approach, we see that multi-argument functions are actually of the form:

f :: a -> ?
f = -- ...

And it’s entirely legal for that ? to be replaced with (b -> ?), and for that ? to be replaced with (c -> ?) and so on ad infinitum. Thus you have the appearance of multi-argument functions.

As is common with Haskell, this results in what appears to be a happy coincidence, but is actually the product of developing a language on top of such a consistent mathematical foundation. You’ll notice that after using (<*>) on a function of more than one argument, the result is not a wrapped result, but another wrapped function – does that sound familiar? Exactly, it’s an applicative functor.

Let me say that again: if you supply a wrapped function of more than one argument and a single wrapped value to (<*>), you end up with another applicative functor: a wrapped function which can be given to (<*>) yet again with another wrapped value to supply the remaining argument to that original function. This can continue as long as the function needs more arguments. Exactly like normal function application.
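
For instance, with Maybe as the container:

pure (+) <*> Just 2            -- :: Num a => Maybe (a -> a), another wrapped function
pure (+) <*> Just 2 <*> Just 3 -- => Just 5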

A “Concrete” Example

Consider what this might look like if you start with a plain old function that (conceptually) takes more than one argument, but the values that it wants to operate on are wrapped in some container.

-- A normal function
f :: (a -> b -> c)
f = -- ...

-- One contained value, suitable for its first argument
x :: Applicative f => f a
x = -- ...

-- Another contained value, suitable for its second
y :: Applicative f => f b
y = -- ...

How do we pass x and y to f to get some overall result? You wrap the function with pure then use (<*>) repeatedly:

result :: Applicative f => f c
result = pure f <*> x <*> y

The first portion of that expression is very interesting: pure f <*> x. What is this bit doing? It’s taking a normal function and applying it to a contained value. Wait a second, normal functors know how to do that!

Since in Haskell every Applicative is also a Functor, that means it could be rewritten equivalently as fmap f x, turning the whole expression into fmap f x <*> y.

Never satisfied, Haskell introduced a function called (<$>) which is just fmap but infix. With this alias, we can write:

result = f <$> x <*> y

Not only is this epically concise, but it looks exactly like f x y which is how this code would be written if there were no containers involved. Here we have another, more powerful step towards the goal of writing code that has to deal with some context (in our case, still that container) without actually having to care about that context. You write your function like you normally would, then add (<$>) and (<*>) between the arguments.

What’s the Point?

With all of this background knowledge, I came to a simple mental model for applicative functors vs monads: Monad is for series, whereas Applicative is for parallel. This has nothing to do with concurrency or evaluation order; it’s only a concept I use to judge when a particular abstraction is better suited to the problem at hand.

Let’s walk through a real example.

Building a User

In an application I’m working on, I’m doing OAuth based authentication. My domain has the following (simplified) user type:

data User = User
    { userFirstName :: Text
    , userLastName  :: Text
    , userEmail     :: Text
    }

During the process of authentication, an OAuth endpoint provides me with some profile data which ultimately comes back as an association list:

type Profile = [(Text, Text)]

-- Example:
-- [ ("first_name", "Pat"            )
-- , ("last_name" , "Brisbin"        )
-- , ("email"     , "me@pbrisbin.com")
-- ]

Within this list, I can find user data via the lookup function which takes a key and returns a Maybe value. I had to write the function that builds a User out of this list of profile values. I also had to propagate any Maybe values by returning Maybe User.

First, let’s write this without exploiting the fact that Maybe is a monad or an applicative:

buildUser :: Profile -> Maybe User
buildUser p =
    case lookup "first_name" p of
        Nothing -> Nothing
        Just fn -> case lookup "last_name" p of
            Nothing -> Nothing
            Just ln -> case lookup "email" p of
                Nothing -> Nothing
                Just e  -> Just $ User fn ln e

Oof.

Treating Maybe as a Monad makes this much, much cleaner:

buildUser :: Profile -> Maybe User
buildUser p = do
    fn <- lookup "first_name" p
    ln <- lookup "last_name" p
    e  <- lookup "email" p

    return $ User fn ln e

Up until a few weeks ago, I would’ve stopped there and been extremely proud of myself and Haskell. Haskell for supplying such a great abstraction for potential failed lookups, and myself for knowing how to use it.

Hopefully, the content of this blog post has made it clear that we can do better.

Series vs Parallel

Using Monad means we have the ability to access the values returned by earlier lookup expressions in later ones. That ability is often critical, but not always. In many cases (like here), we do nothing but pass them all as-is to the User constructor “at once” as a last step.

This is Applicative, I know this.

-- f :: a    -> b    -> c    -> d
User :: Text -> Text -> Text -> User

-- x                  :: f     a
lookup "first_name" p :: Maybe Text

-- y                 :: f     b
lookup "last_name" p :: Maybe Text

-- z             :: f     c
lookup "email" p :: Maybe Text

-- result :: f d
-- result = f <$> x <*> y <*> z
buildUser :: Profile -> Maybe User
buildUser p = User
    <$> lookup "first_name" p
    <*> lookup "last_name" p
    <*> lookup "email" p

And now, I understand when to reach for Applicative over Monad. Perhaps you do too?

30 Mar 2014, tagged with haskell, applicative

Writing JSON APIs with Yesod

Lately at work, I’ve been fortunate enough to work on a JSON API which I was given the freedom to write in Yesod. I was a bit hesitant at first since my only Yesod experience has been richer html-based sites and I wasn’t sure what support (if any) there was for strictly JSON APIs. Rails has a number of conveniences for writing concise controllers and standing up APIs quickly – I was afraid Yesod may be lacking.

I quickly realized my hesitation was unfounded. The process was incredibly smooth and Yesod comes with just as many niceties that allow for rapid development and concise code when it comes to JSON-only API applications. Couple this with all of the benefits inherent in using Haskell, and it becomes clear that Yesod is well-suited to sites of this nature.

In this post, I’ll outline the process of building such a site, explain some conventions I’ve landed on, and discuss one possible pitfall when dealing with model relations.

Note: The code in this tutorial was extracted from a current project and is in fact working there. However, I haven’t test-compiled the examples exactly as they appear in the post. It’s entirely possible there are typos and the like. Please reach out on Twitter or via email if you run into any trouble with the examples.

What We Won’t Cover

This post assumes you’re familiar with Haskell and Yesod. It also won’t cover some important but un-interesting aspects of API design. We’ll give ourselves arbitrary requirements and I’ll show only the code required to meet those.

Specifically, the following will not be discussed:

  • Haskell basics
  • Yesod basics
  • Authentication
  • Embedding relations or side-loading
  • Dealing with created-at or updated-at fields

Getting Started

To begin, let’s get a basic Yesod site scaffolded out. How you do this is up to you, but here’s my preferred steps:

$ mkdir ./mysite && cd ./mysite
$ cabal sandbox init
$ cabal install alex happy yesod-bin
$ yesod init --bare
$ cabal install --dependencies-only
$ yesod devel

The scaffold comes with a number of features we won’t need. You don’t have to remove them, but if you’d like to, here they are:

  • Any existing models
  • Any existing routes/templates
  • Authentication
  • Static file serving

Models

For our API example, we’ll consider a site with posts and comments. We’ll keep things simple: additional models or attributes would just mean more lines in our JSON instances or more handlers of the same basic form. This would result in larger examples, but not add any value to the tutorial.

Let’s go ahead and define the models:

config/models

Post
  title Text
  content Text

Comment
  post PostId
  content Text

JSON

It’s true that we can add a json keyword in our model definition and get derived ToJSON/FromJSON instances for free on all of our models; we won’t do that though. I find these JSON instances, well, ugly. You’ll probably want your JSON to conform to some conventional format, be it jsonapi or Active Model Serializers. Client side frameworks like Ember or Angular will have better built-in support if your API conforms to something conventional. Writing the instances by hand is also more transparent and easily customized later.

Since what we do doesn’t much matter, only that we do it, I’m going to write JSON instances and endpoints to appear as they would in a Rails project using Active Model Serializers.

Model.hs

share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    $(persistFileWith lowerCaseSettings "config/models")

-- { "id": 1, "title": "A title", "content": "The content" }
instance ToJSON (Entity Post) where
    toJSON (Entity pid p) = object
        [ "id"      .= (String $ toPathPiece pid)
        , "title"   .= postTitle p
        , "content" .= postContent p
        ]

instance FromJSON Post where
    parseJSON (Object o) = Post
        <$> o .: "title"
        <*> o .: "content"

    parseJSON _ = mzero

-- { "id": 1, "post_id": 1, "content": "The comment content" }
instance ToJSON (Entity Comment) where
    toJSON (Entity cid c) = object
        [ "id"      .= (String $ toPathPiece cid)
        , "post_id" .= (String $ toPathPiece $ commentPost c)
        , "content" .= commentContent c
        ]

-- We'll talk about this later
--instance FromJSON Comment where

Routes and Handlers

Let’s start with a RESTful endpoint for posts:

config/routes

/posts         PostsR GET POST
/posts/#PostId PostR  GET PUT DELETE

Since our API should return proper status codes, let’s add the required functions to Import.hs, making them available everywhere:

Import.hs

import Network.HTTP.Types as Import
    ( status200
    , status201
    , status400
    , status403
    , status404
    )

Next we write some handlers:

Handlers/Posts.hs

getPostsR :: Handler Value
getPostsR = do
    posts <- runDB $ selectList [] [] :: Handler [Entity Post]

    return $ object ["posts" .= posts]

postPostsR :: Handler ()
postPostsR = do
    post <- requireJsonBody :: Handler Post
    _    <- runDB $ insert post

    sendResponseStatus status201 ("CREATED" :: Text)

You’ll notice we need to add a few explicit type annotations. Normally, Haskell can infer everything for us, but in this case the reason for the annotations is actually pretty interesting. The selectList function will return any type that’s persistable. Normally we would simply treat the returned records as a particular type and Haskell would say, “Aha! You wanted a Post” and then, as if by time travel, selectList would give us appropriate results.

In this case, all we do with the returned posts is pass them to object. Since object can work with any type that can be represented as JSON, Haskell doesn’t know which type we mean. We must remove the ambiguity with a type annotation somewhere.
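
Exactly where the annotation goes is a matter of taste; for example, the same handler works with the annotation at the use site instead:

getPostsR :: Handler Value
getPostsR = do
    posts <- runDB $ selectList [] []

    return $ object ["posts" .= (posts :: [Entity Post])]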

Handlers/Post.hs

getPostR :: PostId -> Handler Value
getPostR pid = do
    post <- runDB $ get404 pid

    return $ object ["post" .= (Entity pid post)]

putPostR :: PostId -> Handler Value
putPostR pid = do
    post <- requireJsonBody :: Handler Post

    runDB $ replace pid post

    sendResponseStatus status200 ("UPDATED" :: Text)

deletePostR :: PostId -> Handler Value
deletePostR pid = do
    runDB $ delete pid

    sendResponseStatus status200 ("DELETED" :: Text)

I love how functions like get404 and requireJsonBody allow these handlers to be completely free of any error-handling concerns, but still be safe and well-behaved.

Comment Handlers

There’s going to be a small annoyance in our comment handlers which I alluded to earlier by omitting the FromJSON instance on Comment. Before we get to that, let’s take care of the easy stuff:

config/routes

/posts/#PostId/comments            CommentsR GET POST
/posts/#PostId/comments/#CommentId CommentR  GET PUT DELETE

Handlers/Comments.hs

getCommentsR :: PostId -> Handler Value
getCommentsR pid = do
    comments <- runDB $ selectList [CommentPost ==. pid] []

    return $ object ["comments" .= comments]

-- We'll talk about this later
--postCommentsR :: PostId -> Handler ()

For the single-resource handlers, we’re going to assume that a CommentId is unique across posts, so we can ignore the PostId in these handlers.

Handlers/Comment.hs

getCommentR :: PostId -> CommentId -> Handler Value
getCommentR _ cid = do
    comment <- runDB $ get404 cid

    return $ object ["comment" .= (Entity cid comment)]

-- We'll talk about this later
--putCommentR :: PostId -> CommentId -> Handler ()

deleteCommentR :: PostId -> CommentId -> Handler ()
deleteCommentR _ cid = do
    runDB $ delete cid

    sendResponseStatus status200 ("DELETED" :: Text)

Handling Relations

Up until now, we’ve been able to define JSON instances for our model, use requireJsonBody, and insert the result. In this case however, the request body will be lacking the Post ID (since it’s in the URL). This means we need to parse a different but similar data type from the JSON, then use that and the URL parameter to build a Comment.

Helpers/Comment.hs

-- This datatype would be richer if Comment had more attributes. For now 
-- we only have to deal with content, so I can use a simple newtype.
newtype CommentAttrs = CommentAttrs Text

instance FromJSON CommentAttrs where
    parseJSON (Object o) = CommentAttrs <$> o .: "content"
    parseJSON _          = mzero

toComment :: PostId -> CommentAttrs -> Comment
toComment pid (CommentAttrs content) = Comment
    { commentPost    = pid
    , commentContent = content
    }

This may seem a bit verbose and even redundant, and there’s probably a more elegant way to get around this situation. Lacking that, I think the additional safety (vs the obvious solution of making commentPost a Maybe) and separation of concerns (vs putting this in the model layer) is worth the extra typing. It’s also very easy to use:

Handlers/Comments.hs

import Helpers.Comment

postCommentsR :: PostId -> Handler ()
postCommentsR pid = do
    _ <- runDB . insert . toComment pid =<< requireJsonBody

    sendResponseStatus status201 ("CREATED" :: Text)

Handlers/Comment.hs

import Helpers.Comment

putCommentR :: PostId -> CommentId -> Handler ()
putCommentR pid cid = do
    runDB . replace cid . toComment pid =<< requireJsonBody

    sendResponseStatus status200 ("UPDATED" :: Text)

We don’t need a type annotation on requireJsonBody in this case. Since the result is being passed to toComment pid, Haskell knows we want a CommentAttrs and uses its parseJSON function within requireJsonBody.

Conclusion

With a relatively small amount of time and code, we’ve written a fully-featured JSON API using Yesod. I think the JSON instances and API handlers are more concise and readable than the analogous Rails serializers and controllers. Our system is also far safer thanks to the type system and framework-provided functions like get404 and requireJsonBody without us needing to explicitly deal with any of that.

I hope this post has shown that Yesod is indeed a viable option for projects of this nature.

22 Feb 2014, tagged with haskell, yesod

Random Numbers without Mutation

In lecture 5A of Structure & Interpretation of Computer Programs, Gerald Sussman introduces the idea of assignments, side effects and state. Before that, they had been working entirely in purely functional Lisp which could be completely evaluated and reasoned about using the substitution model. He states repeatedly that this is a horrible thing as it requires a far more complex view of programs. At the end of the lecture, he shows a compelling example of why we must introduce this horrible thing anyway; without it, we cannot decouple parts of our algorithms cleanly and would be reduced to huge single-function programs in some critical cases.

The example chosen in SICP is estimating π using Cesaro’s method. The method states that the probability that any two random numbers’ greatest common divisor equals 1 is itself equal to 6/π².

Since I know Ruby better than Lisp (and I’d venture my readers do too), here’s a ported version:

def estimate_pi(trials)
  p = monte_carlo(trials) { cesaro }

  Math.sqrt(6 / p)
end

def cesaro
  rand.gcd(rand) == 1
end

def monte_carlo(trials, &block)
  iter = ->(trials, passed) do
    if trials == 0
      passed
    else
      if block.call
        iter.call(trials - 1, passed + 1)
      else
        iter.call(trials - 1, passed)
      end
    end
  end

  iter.call(trials, 0) / trials.to_f
end

I’ve written this code to closely match the Lisp version which used a recursive iterator. Unfortunately, this means that any reasonable number of trials will exhaust Ruby’s stack limit.

The code above also assumes a rand function which will return different random integers on each call. To do so, it must employ mutation and hold internal state:

def rand
  @x ||= random_init
  @x   = random_update(@x)

  @x
end

Here I assume the same primitives as Sussman does, though it wouldn’t be difficult to wrap Ruby’s built-in rand to return integers instead of floats. The important thing is that this function needs to hold onto the previously returned random value in order to provide the next.

Sussman states that without this impure rand function, it would be very difficult to decouple the cesaro function from the monte_carlo one. Without utilizing (re)assignment and mutation, we would have to write our estimation function as one giant blob:

def estimate_pi(trials)
  iter = ->(trials, passed, x1, x2) do
    if trials == 0
      passed
    else
      x1_ = random_update(x2)
      x2_ = random_update(x1_)

      if x1.gcd(x2) == 1
        iter.call(trials - 1, passed + 1, x1_, x2_)
      else
        iter.call(trials - 1, passed, x1_, x2_)
      end
    end
  end

  x1 = random_init
  x2 = random_update(x1)

  p = iter.call(trials, 0, x1, x2) / trials.to_f

  Math.sqrt(6 / p)
end

Ouch.

It’s at this point Sussman stops, content with his justification for adding mutability to Lisp. I’d like to explore a bit further: what if remaining pure were non-negotiable? Are there other ways to make decoupled systems and elegant code without sacrificing purity?

RGen

Let’s start with a non-mutating random number generator:

class RGen
  def initialize(seed = nil)
    @seed = seed || random_init
  end

  def next
    x = random_update(@seed)

    [x, RGen.new(x)]
  end
end

def rand(g)
  g.next
end

This allows for the following implementation:

def estimate_pi(trials)
  p = monte_carlo(trials) { |g| cesaro(g) }

  Math.sqrt(6 / p)
end

def cesaro(g)
  x1, g1 = rand(g)
  x2, g2 = rand(g1)

  [x1.gcd(x2) == 1, g2]
end

def monte_carlo(trials, &block)
  iter = ->(trials, passed, g) do
    if trials == 0
      passed
    else
      ret, g_ = block.call(g)

      if ret
        iter.call(trials - 1, passed + 1, g_)
      else
        iter.call(trials - 1, passed, g_)
      end
    end
  end

  iter.call(trials, 0, RGen.new) / trials.to_f
end

We’ve moved out of the single monolithic function, which is a step in the right direction. The additional generator arguments being passed all over the place makes for some readability problems though. The reason for that is a missing abstraction; one that’s difficult to model in Ruby. To clean this up further, we’ll need to move to a language where purity was in fact non-negotiable: Haskell.

In Haskell, the type signature of our current monte_carlo function would be:

monteCarlo :: Int                    -- number of trials
           -> (RGen -> (Bool, RGen)) -- the experiment
           -> Double                 -- result

Within monte_carlo, we need to repeatedly call the block with a fresh random number generator. Calling RGen#next gives us an updated generator along with the next random value, but that must happen within the iterator block. In order to get it out again and pass it into the next iteration, we need to return it. This is why cesaro has the type that it does:

cesaro :: RGen -> (Bool, RGen)

cesaro depends on some external state, so it accepts it as an argument. It also affects that state, so it must return it as part of its return value. monteCarlo is responsible for creating an initial state and “threading” it through repeated calls to the experiment given. Mutable state is “faked” by passing a return value as an argument to each computation in turn.

You’ll also notice this is a similar type signature as our rand function:

rand :: RGen -> (Int, RGen)

This similarity and process is a generic concern which has nothing to do with Cesaro’s method or performing Monte Carlo tests. We should be able to leverage the similarities and separate this concern out of our main algorithm. Monadic state allows us to do exactly that.

RGenState

For the Haskell examples, I’ll be using System.Random.StdGen in place of the RGen class we’ve been working with so far. It is exactly like our RGen class above in that it can be initialized with some seed, and there is a random function with the type StdGen -> (Int, StdGen).

The abstract thing we’re lacking is a way to call those functions successively, passing the StdGen returned from one invocation as the argument to the next invocation, all the while being able to access that a (the random integer or experiment outcome) whenever needed. Haskell has just such an abstraction; it’s in Control.Monad.State.

First we’ll need some imports.

import System.Random
import Control.Monad.State

Notice that we have a handful of functions with similar form.

(StdGen -> (a, StdGen))

What Control.Monad.State provides is a type that looks awfully similar.

data State s a = State { runState :: (s -> (a, s)) }

Let’s declare a type synonym which fixes that s type variable to the state we care about: a random number generator.

type RGenState a = State StdGen a

By replacing the s in State with our StdGen type, we end up with a more concrete type that looks as if we had written this:

data RGenState a = RGenState
    { runState :: (StdGen -> (a, StdGen)) }

And then went on to write all the various instances that make this type useful. By using such a type synonym, we get all those instances and functions for free.

Our first example:

rand :: RGenState Int
rand = state random

We can “evaluate” this action with one of a number of functions provided by the library, all of which require some initial state. runState will literally just execute the function and return the result and the updated state (in case you missed it, it’s just the record accessor for the State type). evalState will execute the function, discard the updated state, and give us only the result. execState will do the inverse: execute the function, discard the result, and give us only the updated state.
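
For instance (the exact values depend on the generator, so the results in the comments are only illustrative):

runState  rand (mkStdGen 1)  -- => (someInt, someUpdatedGen)
evalState rand (mkStdGen 1)  -- => someInt
execState rand (mkStdGen 1)  -- => someUpdatedGen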

We’ll be using evalState exclusively since we don’t care about how the random number generator ends up after these actions, only that it gets updated and passed along the way. Let’s wrap that up in a function that both provides the initial state and evaluates the action.

runRandom :: RGenState a -> a
runRandom f = evalState f (mkStdGen 1)

-- runRandom rand
-- => 7917908265643496962

Unfortunately, the result will be the same every time since we’re using a constant seed. You’ll see soon that this is an easy limitation to address after the fact.

With this bit of glue code in hand, we can re-write our program in a nice modular way without any actual mutable state or re-assignment.

estimatePi :: Int -> Double
estimatePi n = sqrt $ 6 / (monteCarlo n cesaro)

cesaro :: RGenState Bool
cesaro = do
    x1 <- rand
    x2 <- rand

    return $ gcd x1 x2 == 1

monteCarlo :: Int -> RGenState Bool -> Double
monteCarlo trials experiment = runRandom $ do
    outcomes <- replicateM trials experiment

    return $ (length $ filter id outcomes) `divide` trials

  where
    divide :: Int -> Int -> Double
    divide a b = fromIntegral a / fromIntegral b

Even with a constant seed, it works pretty well:

main = print $ estimatePi 1000
-- => 3.149183286488868

And For My Last Trick

It’s easy to fall into the trap of thinking that Haskell’s type system is limiting in some way. The monteCarlo function above can only work with random-number-based experiments? Pretty weak.

Consider the following refactoring:

estimatePi :: Int -> RGenState Double
estimatePi n = do
  p <- monteCarlo n cesaro

  return $ sqrt (6 / p)

cesaro :: RGenState Bool
cesaro = do
  x1 <- rand
  x2 <- rand

  return $ gcd x1 x2 == 1

monteCarlo :: Monad m => Int -> m Bool -> m Double
monteCarlo trials experiment = do
  outcomes <- replicateM trials experiment

  return $ (length $ filter id outcomes) `divide` trials

  where
    divide :: Int -> Int -> Double
    divide a b = fromIntegral a / fromIntegral b

main :: IO ()
main = print $ runRandom $ estimatePi 1000

The minor change made was moving the call to runRandom all the way up to main. This allows us to pass stateful computations throughout our application without ever caring about that state except at this highest level.

This would make it simple to add true randomness (which requires IO) by replacing the call to runRandom with something that pulls entropy in via IO rather than using mkStdGen.

runTrueRandom :: RGenState a -> IO a
runTrueRandom f = do
    s <- newStdGen

    return $ evalState f s

main = print =<< runTrueRandom (estimatePi 1000)

One could even do this conditionally so that your random-based computations became deterministic during tests.

Another important point here is that monteCarlo can now work with any Monad! This makes perfect sense: The purpose of this function is to run experiments and tally outcomes. The idea of an experiment only makes sense if there’s some outside force which might change the results from run to run, but who cares what that outside force is? Haskell don’t care. Haskell requires we only specify it as far as we need to: it’s some Monad m, nothing more.

This means we can run IO-based experiments via the Monte Carlo method with the same monteCarlo function just by swapping out the monad:

What if Cesaro claimed the probability that the current second is an even number is equal to 6/π²? Seems reasonable, let’s model it:

-- same code, different name / type
estimatePiIO :: Int -> IO Double
estimatePiIO n = do
  p <- monteCarlo n cesaroIO

  return $ sqrt (6 / p)

cesaroIO :: IO Bool
cesaroIO = do
  t <- getCurrentTime

  -- utctDayTime (from Data.Time) is a DiffTime, so truncate it to a whole
  -- number of seconds before testing evenness
  return $ even (floor (utctDayTime t) :: Integer)

monteCarlo :: Monad m => Int -> m Bool -> m Double
monteCarlo trials experiment = -- doesn't change at all!

main :: IO ()
main = print =<< estimatePiIO 1000

I find it fascinating that this expressiveness, generality, and polymorphism can share the same space as the strictness and incredible safety of this type system.

09 Feb 2014, tagged with haskell

Automated Unit Testing in Haskell

Hspec is a BDD library for writing RSpec-style tests in Haskell. In this post, I’m going to describe setting up a Haskell project using this test framework. What we’ll end up with is a series of tests which can be run individually (at the module level), or all together (as part of packaging). Then I’ll briefly mention Guard (a Ruby tool) and how we can use it to automatically run relevant tests as we change code.

Project Layout

For any of this to work, our implementation and test modules must follow a particular layout:

Code/liquid/
├── src
│   └── Text
│       ├── Liquid
│       │   ├── Context.hs
│       │   ├── Parse.hs
│       │   └── Render.hs
│       └── Liquid.hs
└── test
    ├── SpecHelper.hs
    ├── Spec.hs
    └── Text
        └── Liquid
            ├── ParseSpec.hs
            └── RenderSpec.hs

Notice that for each implementation module (under ./src) there is a corresponding spec file at the same relative path (under ./test) with a consistent, conventional name (<ModuleName>Spec.hs). For this post, I’m going to outline the first few steps of building the Parse module of the above source tree which happens to be my liquid library, a Haskell implementation of Shopify’s template system.

Hspec Discover

Hspec provides a useful preprocessor called hspec-discover. If your project follows the conventional layout above, you can simply create a file like so:

test/Spec.hs

{-# OPTIONS_GHC -F -pgmF hspec-discover #-}

And when that file is executed, all of your specs will be found and run together as a single suite.
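Conceptually, the result is equivalent to a Spec.hs you could have written by hand, something like the following (a rough sketch for illustration, not the literal code hspec-discover generates):

module Main where

import Test.Hspec

import qualified Text.Liquid.ParseSpec
import qualified Text.Liquid.RenderSpec

main :: IO ()
main = hspec $ do
    Text.Liquid.ParseSpec.spec
    Text.Liquid.RenderSpec.spec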

SpecHelper

I like to create a central helper module which gets imported into all specs. It simply exports our test framework and implementation code:

test/SpecHelper.hs

module SpecHelper
    ( module Test.Hspec
    , module Text.Liquid.Parse
    ) where

import Test.Hspec
import Text.Liquid.Parse

This file might not seem worth it now, but as you add more modules, it becomes useful quickly.
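For example, once the Render module from the layout above is written, exporting it is a small change, and every spec that imports SpecHelper picks it up automatically:

module SpecHelper
    ( module Test.Hspec
    , module Text.Liquid.Parse
    , module Text.Liquid.Render
    ) where

import Test.Hspec
import Text.Liquid.Parse
import Text.Liquid.Render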

Baby’s First Spec

test/Text/Liquid/ParseSpec.hs

module Text.Liquid.ParseSpec where

import SpecHelper

spec :: Spec
spec = do
    describe "Text.Liquid.Parse" $ do
        context "Simple text" $ do
            it "parses exactly as-is" $ do
                let content = "Some simple text"

                parseTemplate content `shouldBe` Right [TString content]

main :: IO ()
main = hspec spec

With this first spec, I’ve already made some assumptions and design decisions.

The API into our module will be a single parseTemplate function which returns an Either type (commonly used to represent success or failure). The Right value (conventionally used for success) will be a list of template parts. One such part can be constructed with the TString function and is used to represent literal text with no interpolation or logic. This is the simplest template part possible and is therefore a good place to start.

The spec function is what will be found by hspec-discover and rolled up into a project-wide test. I’ve also added a main function which just runs said spec. This allows me to easily run the spec in isolation, which you should do now:

$ runhaskell -isrc -itest test/Text/Liquid/ParseSpec.hs

The first error you should see is an inability to find Test.Hspec. Go ahead and install it:

$ cabal install hspec

You should then get a similar error for Text.Liquid.Parse, followed by more errors about functions and types that are not yet defined. Let’s go ahead and implement just enough to get past that:

src/Text/Liquid/Parse.hs

module Text.Liquid.Parse where

type Template = [TPart]

data TPart = TString String
    deriving (Eq, Show)

parseTemplate :: String -> Either String Template
parseTemplate = undefined

The test should run now and give you a nice red failure due to the attempted evaluation of undefined.

Since implementing Parse is not the purpose of this post, I won’t be moving forward in that direction. Instead, I’m going to show you how to set this library up as a package which can be cabal installed and/or cabal tested by end-users.

For now, you can pass the test easily like so:

src/Text/Liquid/Parse.hs

parseTemplate :: String -> Either String Template
parseTemplate str = Right [TString str]

For TDD purists, this is actually the correct thing to do here: write the simplest implementation to pass the test (even if you “know” it’s not going to last), then write another failing test to force you to implement a little more. I don’t typically subscribe to that level of TDD purity, but I can see the appeal.
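If you were following that discipline, the next failing test might look something like this, added alongside the existing context (a sketch only; the {{ }} syntax and the TVar constructor are hypothetical here, not implemented anywhere in this post):

        context "Simple interpolation" $ do
            it "parses a variable tag" $ do
                parseTemplate "{{ name }}" `shouldBe` Right [TVar "name"]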

Cabal

We’ve already got Spec.hs which, when executed, will run all our specs together:

$ runhaskell -isrc -itest test/Spec.hs

We just need to wire that into the Cabal packaging system:

liquid.cabal

name:          liquid
version:       0.0.0
license:       MIT
copyright:     (c) 2013 Pat Brisbin
author:        Pat Brisbin <pbrisbin@gmail.com>
maintainer:    Pat Brisbin <pbrisbin@gmail.com>
build-type:    Simple
cabal-version: >= 1.8

library
  hs-source-dirs: src

  exposed-modules: Text.Liquid.Parse

  build-depends: base == 4.*

test-suite spec
  type: exitcode-stdio-1.0

  hs-source-dirs: test

  main-is: Spec.hs

  build-depends: base  == 4.*
               , hspec >= 1.3
               , liquid

With this in place, testing our package is simple:

$ cabal configure --enable-tests
...
$ cabal build
...
$ cabal test
Building liquid-0.0.0...
Preprocessing library liquid-0.0.0...
In-place registering liquid-0.0.0...
Preprocessing test suite 'spec' for liquid-0.0.0...
Linking dist/build/spec/spec ...
Running 1 test suites...
Test suite spec: RUNNING...
Test suite spec: PASS
Test suite logged to: dist/test/liquid-0.0.0-spec.log
1 of 1 test suites (1 of 1 test cases) passed.

Guard

Another thing I like to set up is the automatic running of relevant specs as I change code. To do this, we can use a tool from Ruby-land called Guard. Guard is a great example of a simple tool doing one thing well. All it does is watch files and execute actions based on rules defined in a Guardfile. Through plugins and extensions, there are a number of pre-built solutions for all sorts of common needs: restarting servers, regenerating ctags, or running tests.

We’re going to use guard-shell, a simple extension for running shell commands and spawning notifications.

$ gem install guard-shell

Next, create a Guardfile:

Guardfile

# Runs the command and prints a notification
def execute(cmd)
  if system(cmd)
    n 'Build succeeded', 'hspec', :success
  else
    n 'Build failed', 'hspec', :failed
  end
end

def run_all_tests
  execute %{
    cabal configure --enable-tests &&
    cabal build && cabal test
  }
end

def run_tests(mod)
  specfile = "test/#{mod}Spec.hs"

  if File.exists?(specfile)
    files = [specfile]
  else
    files = Dir['test/**/*.hs']
  end

  execute "ghc -isrc -itest -e main #{files.join(' ')}"
end

guard :shell do
  watch(%r{.*\.cabal$})          { run_all_tests }
  watch(%r{test/SpecHelper.hs$}) { run_all_tests }
  watch(%r{src/(.+)\.hs$})       { |m| run_tests(m[1]) }
  watch(%r{test/(.+)Spec\.hs$})  { |m| run_tests(m[1]) }
end

Much of this Guardfile comes from this blog post by Michael Xavier. His version also includes cabal sandbox support, so be sure to check it out if that interests you.

If you like to bundle all your Ruby gems (and you probably should), that can be done easily; see my main liquid repo, as that’s how I do things there.

In one terminal, start guard:

$ guard

Finally, simulate an edit in your module and watch the test automatically run:

$ touch src/Text/Liquid/Parse.hs

And there you go, fully automated unit testing in Haskell.

01 Dec 2013, tagged with testing, haskell, cabal, hunit, ruby, guard

Using Notify-OSD for XMonad Notifications

In my continuing efforts to strip my computing experience of any non-essential parts, I’ve decided to ditch my statusbars. My desktop is now solely a grid of tiled terminals (and a browser). It’s quite nice. The only thing I slightly missed, however, was notifications when one of my windows set Urgency. This used to trigger a bright yellow color for that workspace in my dzen-based statusbar.

A Brief Tangent:

Windows have these properties called “hints” which they can set on themselves at will. These properties can be read by Window Managers in an effort to do the Right Thing. Hints are how a Window tells the Manager, “Hey, I should be full-screen” or, “I’m a dialog, float me on top of everything”. One such hint is WM_URGENT.

WM_URGENT is how windows get your attention. It’s what makes them flash in your task bar or bounce in your dock. If you’re using a sane terminal, it should set WM_URGENT on itself if the program running within it triggers a “bell”.

By telling applications like mutt or weechat to print a bell when I get new email or someone nick-highlights me, I can easily get notifications of these events even from applications that are running within screen, in an ssh session, on some server far, far away. Pretty neat.

Now that I’m without a status bar, I need to be notified some other way. Enter Notify-OSD.

Notify-OSD

Notify-OSD is part of the desktop notification system of GNOME, but it can be installed standalone and used to send notifications from the command-line very easily:

$ notify-send "A title" "A message"

So how do we get XMonad to send a useful notification via notify-send whenever a window sets the WM_URGENT hint? Enter the UrgencyHook.

UrgencyHook

Setting a custom urgency hook is very easy, but not exactly intuitive. What we’re actually doing is declaring a custom data type, then making it an instance of the UrgencyHook typeclass. The single required function to be a member of this typeclass is an action which will be run whenever a window sets urgency. Conveniently, it’s given the window with urgency as an argument. We can use this to format our notification.

First off, add the module imports we’ll need:

import XMonad.Hooks.UrgencyHook
import XMonad.Util.NamedWindows
import XMonad.Util.Run

import qualified XMonad.StackSet as W

Then make that custom datatype and instance:

data LibNotifyUrgencyHook = LibNotifyUrgencyHook deriving (Read, Show)

instance UrgencyHook LibNotifyUrgencyHook where
    urgencyHook LibNotifyUrgencyHook w = do
        name     <- getName w
        Just idx <- fmap (W.findTag w) $ gets windowset

        safeSpawn "notify-send" [show name, "workspace " ++ idx]

Finally, update main like so:

main :: IO ()
main = xmonad
     $ withUrgencyHook LibNotifyUrgencyHook
     $ defaultConfig
        { -- ...
        , -- ...
        }

To test this, open a terminal in some workspace and type:

$ sleep 3 && printf "\a"

Then immediately focus away from that workspace. In a few seconds, you should see a nice pop-up like:

notify-send 

You can see the title of the notification is the window name and I use the message to tell me the workspace number. In this case, the name is the default “urxvt” for a terminal window, but I also use a few wrapper scripts to open urxvt with the -n option to set its name to something specific which will then come through in any notifications from that window.

If that doesn’t work, it’s likely your terminal doesn’t set Urgency on bells. For urxvt at least, the relevant settings are:

URxvt*urgentOnBell: true
URxvt*visualBell:   false

In Xresources or Xdefaults, whichever you use.

15 Oct 2013, tagged with xmonad, haskell, notify-osd

On Staticness

For almost 7 years now, I’ve had a desktop at home running and serving (among many things) my personal blog. Doing so is how I learned much of what I now know about programming and system administration. It gave me a reason to learn HTML, then PHP, then finally Haskell. It taught me Postgres, Apache, then lighttpd, then nginx. Without maintaining this site myself, on my own desktop, I doubt I would’ve been sucked into these things, and I may not have ended up where I am today.

However, I’m now a happily employed Developer and I do these things all day on other people’s machines and sites. Don’t get me wrong, I enjoy it all very much, but the educational value of maintaining my personal blog as a locally-hosted web-app is just not there any more. With that value gone, things like power outages, hard drive failures, Comcast, etc. which bring my site down become unacceptable. It’s too easy to have something which requires almost no maintenance while still giving me the control and workflow I want.

I realize I could’ve moved my site as-is to a VPS and no longer been at the whim of Comcast and NSTAR, but that wouldn’t decrease the maintenance burden enough. Contrast pretty much any typical web-app ecosystem with…

The services now required to host my blog:

nginx

The configuration required to host my blog:

$ wc -l < /etc/nginx/nginx.conf
19

Adding a new post:

$ cat > posts/2013-09-21-awesome_post.md <<EOF
---
title: Awesome Post
tags: some, tags
---

Pretty *awesome*.

EOF

Deployment:

$ jekyll build && rsync -a -e ssh _site/ pbrisbin.com:/srv/http/site/

Backups:

$ tar czf ~/site.backup _site

Comments

Unfortunately, Comments aren’t easy to do with a static site (at least not without something like Disqus, but meh). To all those that have commented on this site in the past, I apologize. That feature is just not worth maintaining a dynamic blog-as-web-app.

When considering this choice, I discovered that the comments on this site fell into one of three categories:

  1. Hey, nice post!
  2. Hey, here’s a correction
  3. Hey, here’s something additional about this topic

These are all useful things, but there’s never any real discussion going on between commenters; it’s all just notes to me. So I’ve decided to let these come in as emails. My hope is that folks who might’ve commented are OK sending it in an email. The address is in the footer, pretty much where you’d expect a Comments section to be. I’ll make sure that any corrections or additional info sent via email will make it back into the main content of the post.

Pandoc

At some point during this process, I realized that I simply can’t convert my post markdown to html without pandoc. Every single markdown implementation I’ve found gets the following wrong:

<div class="something">
I want this content to **also** be parsed as markdown.
</div>

Pandoc does it right. Everything else puts the literal text inside the div. This breaks all my posts horribly because I’ll frequently do something like:

This is in a div with class="well", and the content inside is still markdown.

I had assumed that to get pandoc support I’d have to use Hakyll, but (at least from the docs) it seemed to be missing tags and next/previous link support. It appears extensible enough that I might code that in custom, but again, I’m trying to decrease overall effort here. Jekyll, on the other hand, had those features already and let me use pandoc easily by dropping a small ruby file in _plugins.

Update: I did eventually move this blog to Hakyll.

I figure if I want to be a Haskell evangelist, I really shouldn’t be using a Ruby site generator when such a good Haskell option exists. Also, tags are now supported and adding next/previous links myself wasn’t very difficult.

With the conversion complete, I was able to shut down a bunch of services on my desktop and even cancel a dynamic DNS account. At $5/month, the Digital Ocean VPS is a steal. The site’s faster, more reliable, easier to deploy, and even got a small facelift.

Hopefully the loss of Comments doesn’t upset any readers. I love email, so please send those comments to me at pbrisbin dot com.

21 Sep 2013, tagged with meta, system, jekyll, pandoc

Mocking Bash

Have you ever wanted to mock a program on your system so you could write fast and reliable tests around a shell script which calls it? Yeah, I didn’t think so.

Well I did, so here’s how I did it.

Cram

Verification testing of shell scripts is surprisingly easy. Thanks to Unix, most shell scripts have limited interfaces with their environment. Assertions against stdout can often be enough to verify a script’s behavior.

One tool that makes these kinds of executions and assertions easy is cram.

Cram’s mechanics are very simple. You write a test file like this:

The ls command should print one column when passed -1

  $ mkdir foo
  > touch foo/bar
  > touch foo/baz

  $ ls -1 foo
  bar
  baz

Any line beginning with an indented $ is executed (with > allowing multi-line commands). The indented text below such commands is compared with the actual output at that point. If it doesn’t match, the test fails and a contextual diff is shown.

With this philosophy, retrofitting tests on an already working script is incredibly easy. You just put in a command, run the test, then insert whatever the actual output was as the assertion. Cram’s --interactive flag is meant for exactly this. Aces.

Not Quite

Suppose your script calls a program internally whose behavior depends on transient things which are outside of your control. Maybe you call curl, which of course depends on the state of the internet between you and the server you’re accessing. With the output changing between runs, these tests become more trouble than they’re worth.

What’d be really great is if I could do the following:

  1. Intercept calls to the program
  2. Run the program normally, but record “the response”
  3. On subsequent invocations, just replay the response and don’t call the program

This means I could run the test suite once, letting it really call the program, but record the stdout, stderr, and exit code of the call. The next time I run the test suite, nothing would actually happen. The recorded response would be replayed instead, my script wouldn’t know the difference and everything would pass reliably and instantly.

In case you didn’t notice, this is VCR.

The only limitation here is that a mock must be completely effective while only mimicking the stdout, stderr, and exit code of what it’s mocking. A command that creates files, for example, which are used by other parts of the script could not be mocked this way.

Mucking with PATH

One way to intercept calls to executables is to prepend a directory we control to $PATH. Files placed in this leading directory will be found first in command lookups, allowing us to handle the calls.

I like to write my cram tests so that the first thing they do is source a test/helper.sh, which makes a nice place to do such a thing:

test/helper.sh

export PATH="$TESTDIR/..:$TESTDIR/bin:$PATH"

This ensures that a) the executable in the source directory is used and b) anything in test/bin will take precedence over system commands.

Now all we have to do to mock foo is add a test/bin/foo which will be executed whenever our Subject Under Test calls foo.

Record/Replay

The logic of what to do in a mock script is straightforward:

  1. Build a unique identifier for the invocation
  2. Look up a stored “response” by that identifier
  3. If not found, run the program and record said response
  4. Reply with the recorded response to satisfy the caller

We can easily abstract this into a short, generic proxy:

test/bin/act-like

#!/usr/bin/env bash
program="$1"; shift
base="${program##*/}"

# 1. Build a unique identifier for this invocation from the arguments
fixtures="${TESTDIR:-test}/fixtures/$base/$(echo "$*" | md5sum | cut -d ' ' -f 1)"

# 2./3. If no recorded response exists yet, run the real program and record one
if [[ ! -d "$fixtures" ]]; then
  mkdir -p "$fixtures"
  "$program" "$@" >"$fixtures/stdout" 2>"$fixtures/stderr"
  echo $? > "$fixtures/exit_code"
fi

# 4. Replay the recorded response to satisfy the caller
cat "$fixtures/stdout"
cat "$fixtures/stderr" >&2

read -r exit_code < "$fixtures/exit_code"

exit "$exit_code"

With this in hand, we can record any invocation of anything we like (so long as we only need to mimic the stdout, stderr, and exit code).

test/bin/curl

#!/usr/bin/env bash
act-like /usr/bin/curl "$@"

test/bin/makepkg

#!/usr/bin/env bash
act-like /usr/bin/makepkg "$@"

test/bin/pacman

#!/usr/bin/env bash
act-like /usr/bin/pacman "$@"

Success!

After my next test run, I find the following:

$ tree test/fixtures
test/fixtures
├── curl
│   ├── 008f2e64f6dd569e9da714ba8847ae7e
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── 2c5906baa66c800b095c2b47173672ba
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── c50061ffc84a6e1976d1e1129a9868bc
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── f38bb573029c69c0cdc96f7435aaeafe
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── fc5a0df540104584df9c40d169e23d4c
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   └── fda35c202edffac302a7b708d2534659
│       ├── exit_code
│       ├── stderr
│       └── stdout
├── makepkg
│   └── 889437f54f390ee62a5d2d0347824756
│       ├── exit_code
│       ├── stderr
│       └── stdout
└── pacman
    └── af8e8c81790da89bc01a0410521030c6
        ├── exit_code
        ├── stderr
        └── stdout

11 directories, 24 files

Each hash-directory, representing one invocation of the given program, contains the full response in the form of stdout, stderr, and exit_code files.

I run my tests again. This time, rather than calling any of the actual programs, the responses are found and replayed. The tests pass instantly.

24 Aug 2013, tagged with bash, testing, mocks, cram, aurget, arch