pbrisbindotcom

Regular Expression Evaluation via Finite Automata

What follows is a literate haskell file runnable via ghci. The raw source for this page can be found here.

While reading Understanding Computation again last night, I was going back through the chapter where Tom Stuart describes deterministic and non-deterministic finite automata. These simple state machines seem like little more than a teaching tool, but he eventually uses them as the implementation for a regular expression matcher. I thought seeing this concrete use for such an abstract idea was interesting and wanted to reinforce the ideas by implementing such a system myself – with Haskell, of course.

Before we get started, we’ll just need to import some libraries:

> import Control.Monad.State
> import Data.List (foldl')
> import Data.Maybe

Patterns and NFAs

We’re going to model a subset of regular expression patterns.

> data Pattern
>     = Empty                   -- ""
>     | Literal Char            -- "a"
>     | Concat Pattern Pattern  -- "ab"
>     | Choose Pattern Pattern  -- "a|b"
>     | Repeat Pattern          -- "a*"
>     deriving Show

With this, we can build “pattern ASTs” to represent regular expressions:

ghci> let p = Choose (Literal 'a') (Repeat (Literal 'b')) -- /a|b*/

It’s easy to picture a small parser to build these out of strings, but we won’t do that as part of this post. Instead, we’ll focus on converting these patterns into Nondeterministic Finite Automata or NFAs. We can then use the NFAs to determine if the pattern matches a given string.

To explain NFAs, it’s probably easiest to explain DFAs, their deterministic counterparts, first. Then we can go on to describe how NFAs differ.

A DFA is a simple machine with states and rules. The rules describe how to move between states in response to particular input characters. Certain states are special and flagged as “accept” states. If, after reading a series of characters, the machine is left in an accept state, it’s said that the machine “accepted” that particular input.

An NFA is the same with two notable differences: First, an NFA can have rules to move it into more than one state in response to the same input character. This means the machine can be in more than one state at once. Second, there is the concept of a Free Move which means the machine can jump between certain states without reading any input.

Modeling an NFA requires a type with rules, current states, and accept states:

> type SID = Int -- State Identifier
> 
> data NFA = NFA
>     { rules         :: [Rule]
>     , currentStates :: [SID]
>     , acceptStates  :: [SID]
>     } deriving Show

A rule defines which character tells the machine to change states and which states to move into.

> data Rule = Rule
>     { fromState  :: SID
>     , inputChar  :: Maybe Char
>     , nextStates :: [SID]
>     } deriving Show

Notice that nextStates and currentStates are lists. This is to represent the machine moving to, and remaining in, more than one state in response to a particular character. Similarly, inputChar is a Maybe value because it will be Nothing in the case of a rule representing a Free Move.
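
For example, a Free Move out of state 1 into states 2 and 3 carries no input character:

ghci> Rule 1 Nothing [2, 3]
Rule {fromState = 1, inputChar = Nothing, nextStates = [2,3]}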

If, after processing some input, any of the machine’s current states (or any states we can reach via a free move) are in its list of “accept” states, the machine has accepted the input.

> accepts :: NFA -> [Char] -> Bool
> accepts nfa = accepted . foldl' process nfa
> 
>   where
>     accepted :: NFA -> Bool
>     accepted nfa = any (`elem` acceptStates nfa) (currentStates nfa ++ freeStates nfa)

Processing a single character means finding any followable rules for the given character and the current machine state, and following them.

> process :: NFA -> Char -> NFA
> process nfa c = case findRules c nfa of
>     -- Invalid input should cause the NFA to go into a failed state. 
>     -- We can do that easily, just remove any acceptStates.
>     [] -> nfa { acceptStates = [] }
>     rs -> nfa { currentStates = followRules rs }
> 
> findRules :: Char -> NFA -> [Rule]
> findRules c nfa = filter (ruleApplies c nfa) $ rules nfa

A rule applies if

  1. The read character is a valid input character for the rule, and
  2. That rule applies to an available state.

> ruleApplies :: Char -> NFA -> Rule -> Bool
> ruleApplies c nfa r =
>     maybe False (c ==) (inputChar r) &&
>     fromState r `elem` availableStates nfa

An “available” state is one which we’re currently in, or can reach via Free Moves.

> availableStates :: NFA -> [SID]
> availableStates nfa = currentStates nfa ++ freeStates nfa

The process of finding free states (those reachable via Free Moves) gets a bit hairy. We need to start from our current state(s) and follow any Free Moves recursively, taking care not to revisit states we’ve already seen. This ensures that Free Moves which lead to other Free Moves are correctly accounted for and that a cycle of Free Moves can’t send us into an infinite loop.

> freeStates :: NFA -> [SID]
> freeStates nfa = go [] (currentStates nfa)
> 
>   where
>     go acc [] = acc
>     go acc ss =
>         -- Ignore states we've already seen, so a cycle of Free Moves
>         -- can't send us looping forever
>         let ss' = filter (`notElem` acc) $ followRules $ freeMoves nfa ss
>         in go (acc ++ ss') ss'
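
For example, with Free Moves from state 1 to 2 and from 2 to 3, starting in state 1 we can reach both:

ghci> let nfa = NFA [Rule 1 Nothing [2], Rule 2 Nothing [3]] [1] []
ghci> freeStates nfa
[2,3]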

Free Moves from a given set of states are rules for those states which have no input character.

> freeMoves :: NFA -> [SID] -> [Rule]
> freeMoves nfa ss = filter (\r ->
>     (fromState r `elem` ss) && (isNothing $ inputChar r)) $ rules nfa

Of course, the states that result from following rules are simply the concatenation of those rules’ next states.

> followRules :: [Rule] -> [SID]
> followRules = concatMap nextStates

Now we can model an NFA and see if it accepts a string or not. You could test this in ghci by defining an NFA in state 1 with an accept state 2 and a single rule that moves the machine from 1 to 2 if the character “a” is read.

ghci> let nfa = NFA [Rule 1 (Just 'a') [2]] [1] [2]
ghci> nfa `accepts` "a"
True
ghci> nfa `accepts` "b"
False

Pretty cool.

What we need to do now is construct an NFA whose rules for moving from state to state are derived from the nature of the pattern it represents. If the NFA we construct ends in an accept state after reading a given string of input, that string matches the pattern.

> matches :: String -> Pattern -> Bool
> matches s = (`accepts` s) . toNFA

We’ll define toNFA later, but if you’ve loaded this file, you can play with it in ghci now:

ghci> "" `matches` Empty
True
ghci> "abc" `matches` Empty
False

And use it in an example main:

> main :: IO ()
> main = do
>     -- This AST represents the pattern /ab|cd*/:
>     let p = Choose
>             (Concat (Literal 'a') (Literal 'b'))
>             (Concat (Literal 'c') (Repeat (Literal 'd')))
> 
>     print $ "xyz" `matches` p
>     -- => False
> 
>     print $ "cddd" `matches` p
>     -- => True

Before I show toNFA, we need to talk about mutability.

A Bit About Mutable State

Since Pattern is a recursive data type, we’re going to have to recursively create and combine NFAs. For example, in a Concat pattern, we’ll need to turn both sub-patterns into NFAs then combine those in some way. In the Ruby implementation, Mr. Stuart used Object.new to ensure unique state identifiers between all the NFAs he has to create. We can’t do that in Haskell. There’s no global object able to provide some guaranteed-unique value.

What we’re going to do to get around this is conceptually simple, but appears complicated because it makes use of monads. All we’re doing is defining a list of identifiers at the beginning of our program and drawing from that list whenever we need a new identifier. Because we can’t maintain that as a variable we constantly update every time we pull an identifier out, we’ll use the State monad to mimic mutable state through our computations.

I apologize for the naming confusion here. This State type is from the Haskell library and has nothing to do with the states of our NFAs.

First, we take the parameterized State s a type, and fix the s variable as a list of (potential) identifiers:

> type SIDPool a = State [SID] a

This makes it simple to create a nextId action which requests the next identifier from the list and updates the computation’s state, removing that identifier as a future option before presenting it as the result.

> nextId :: SIDPool SID
> nextId = do
>     (x:xs) <- get
>     put xs
>     return x

This function can be called from within any other function in the SIDPool monad. Each time called, it will read the current state (via get), assign the first identifier to x and the rest of the list to xs, set the current state to that remaining list (via put) and finally return the drawn identifier to the caller.
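
We can watch the pool shrink in ghci:

ghci> runState (replicateM 3 nextId) [1..5]
([1,2,3],[4,5])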

Pattern ⇒ NFA

Assuming we have some function buildNFA which handles the actual conversion from Pattern to NFA but is in the SIDPool monad, we can evaluate that action, supplying an infinite list as the potential identifiers, and end up with an NFA with unique identifiers.

> toNFA :: Pattern -> NFA
> toNFA p = evalState (buildNFA p) [1..]

As mentioned, our conversion function lives in the SIDPool monad, allowing it to call nextId at will. This gives it the following type signature:

> buildNFA :: Pattern -> SIDPool NFA

Every pattern is going to need at least one state identifier, so we’ll pull that out first, then begin a case analysis on the type of pattern we’re dealing with:

> buildNFA p = do
>     s1 <- nextId
> 
>     case p of

The empty pattern results in a predictably simple machine. It has one state which is also an accept state. It has no rules. If it gets any characters, they’ll be considered invalid and put the machine into a failed state. Giving it no characters is the only way it can remain in an accept state.

>         Empty -> return $ NFA [] [s1] [s1]

Also simple is the literal character pattern. It has two states and a rule between them. It moves from the first state to the second only if it reads that character. Since the second state is the only accept state, it will only accept that character.

>         Literal c -> do
>             s2 <- nextId
> 
>             return $ NFA [Rule s1 (Just c) [s2]] [s1] [s2]

We can model a concatenated pattern by first turning each sub-pattern into its own NFA, and then connecting the accept states of the first to the start state of the second via Free Moves. This means that as the combined NFA reads input, it will only accept that input if it moves through the first NFA’s states into what used to be an accept state, hops over to the second NFA, then moves into its accept state. Conceptually, this is exactly how a concatenated pattern should match.

Note that freeMoveTo will be shown later.

>         Concat p1 p2 -> do
>             nfa1 <- buildNFA p1
>             nfa2 <- buildNFA p2
> 
>             let freeMoves = map (freeMoveTo nfa2) $ acceptStates nfa1
> 
>             return $ NFA
>                 (rules nfa1 ++ freeMoves ++ rules nfa2)
>                 (currentStates nfa1)
>                 (acceptStates nfa2)

We can implement choice by creating a new starting state, and connecting it to both sub-patterns’ NFAs via Free Moves. Now the machine will jump into both NFAs at once, and the composed machine will accept the input if either of the paths leads to an accept state.

>         Choose p1 p2 -> do
>             s2 <- nextId
>             nfa1 <- buildNFA p1
>             nfa2 <- buildNFA p2
> 
>             let freeMoves =
>                     [ freeMoveTo nfa1 s2
>                     , freeMoveTo nfa2 s2
>                     ]
> 
>             return $ NFA
>                 (freeMoves ++ rules nfa1 ++ rules nfa2) [s2]
>                 (acceptStates nfa1 ++ acceptStates nfa2)

A repeated pattern is probably the hardest to wrap your head around. We need to first convert the sub-pattern to an NFA, then connect a new start state to it via a Free Move (to match zero occurrences), then connect each of its accept states back to its start states (to match repetitions of the pattern).

>         Repeat p -> do
>             s2 <- nextId
>             nfa <- buildNFA p
> 
>             let initMove = freeMoveTo nfa s2
>                 freeMoves = map (freeMoveTo nfa) $ acceptStates nfa
> 
>             return $ NFA
>                 (initMove : rules nfa ++ freeMoves) [s2]
>                 (s2: acceptStates nfa)

And finally, our little helper which connects some state up to an NFA via a Free Move.

>   where
>     freeMoveTo :: NFA -> SID -> Rule
>     freeMoveTo nfa s = Rule s Nothing (currentStates nfa)
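
With that in place, the full pipeline works end-to-end in ghci:

ghci> "aaa" `matches` Repeat (Literal 'a')
True
ghci> "" `matches` Repeat (Literal 'a')
True
ghci> "ab" `matches` Repeat (Literal 'a')
False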

That’s It

I want to give a big thanks to Tom Stuart for writing Understanding Computation. That book has opened my eyes in so many ways. I understand why he chose Ruby as the book’s implementation language, but I find Haskell to be better-suited to these sorts of modeling tasks. Hopefully he doesn’t mind me exploring that by rewriting some of his examples.

published on 07 Apr 2014, tagged with haskell

Applicative Functors

Every time I read Learn You a Haskell, I get something new out of it. This most recent time through, I think I’ve finally gained some insight into the Applicative type class.

I’ve been writing Haskell for some time and have developed an intuition and explanation for Monad. This is probably because monads are so prevalent in Haskell code that you can’t help but get used to them. I knew that Applicative was similar but weaker, and that it should be a superclass of Monad (though, since it arrived later, it is not). I now think I have a general understanding of how Applicative is different, why it’s useful, and I would like to bring anyone else who glossed over Applicative on the way to Monad up to speed.

The Applicative type class represents applicative functors, so it makes sense to start with a brief description of functors that are not applicative.

Values in a Box

A functor is any container-like type which offers a way to transform a normal function into one that operates on contained values.

Formally:

fmap :: Functor f    -- for any functor,
     => (  a ->   b) -- take a normal function,
     -> (f a -> f b) -- and make one that works on contained values

Some prefer to think of it like this:

fmap :: Functor f -- for any functor,
     => (a -> b)  -- take a normal function,
     -> f a       -- and a contained value,
     -> f b       -- and return the contained result of applying that 
                  -- function to that value

Because (->) is right-associative, we can reason about and use this function either way – with the former being more useful to the current discussion.

This is the first small step in the ultimate goal shared by all three of these type classes: allow us to work with values with context (in this case, a container of some sort) as if that context weren’t present at all. We give a normal function to fmap and it sorts out how to deal with the container, whatever it may be.

Functions in a Box

To say that a functor is “applicative”, we mean that the contained value can be applied. In other words, it’s a function.

An applicative functor is any container-like type which offers a way to transform a contained function into one that can operate on contained values.

(<*>) :: Applicative f -- for any applicative functor,
      => f (a ->   b)  -- take a contained function,
      -> (f a -> f b)  -- and make one that works on contained values

Again, we could also think of it like this:

(<*>) :: Applicative f -- for any applicative functor,
      => f (a -> b)    -- take a contained function,
      -> f a           -- and a contained value,
      -> f b           -- and return a contained result

Applicative functors also have a way to take an un-contained function and put it into a container:

pure :: Applicative f -- for any applicative functor,
     =>   (a -> b)    -- take a normal function,
     -> f (a -> b)    -- and put it in a container

In actuality, the type signature is simpler: a -> f a. Since a literally means “any type”, it can certainly represent the type (a -> b) too.

pure :: Applicative f => a -> f a

Understanding this is very important for understanding the usefulness of Applicative. Even though the type signature for (<*>) starts with f (a -> b), it can also be used with functions taking any number of arguments.

Consider the following:

:: f (a -> b -> c) -> f a -> f (b -> c)

Is this (<*>) or not?

Instead of writing its signature with b, let’s use a question mark:

(<*>) :: f (a -> ?) -> f a -> f ?

Indeed it is: substitute the type (b -> c) for every ?, rather than the simple b in the actual class definition.

One In, One Out

What you just saw was a very concrete example of the benefits of how (->) works. When we say “a function of n arguments”, we’re actually lying. All functions in Haskell take exactly one argument. Multi-argument functions are really single-argument functions that return other single-argument functions that accept the remaining arguments via the same process.

Using the question mark approach, we see that multi-argument functions are actually of the form:

f :: a -> ?
f = -- ...

And it’s entirely legal for that ? to be replaced with (b -> ?), and for that ? to be replaced with (c -> ?) and so on ad infinitum. Thus you have the appearance of multi-argument functions.

As is common with Haskell, this results in what appears to be a happy coincidence, but is actually the product of developing a language on top of such a consistent mathematical foundation. You’ll notice that after using (<*>) on a function of more than one argument, the result is not a wrapped value, but another wrapped function – does that sound familiar? Exactly, it’s an applicative functor.

Let me say that again: if you supply a function of more than one argument and a single wrapped value to (<*>), you end up with another applicative functor which can be given to (<*>) yet again with another wrapped value to supply the remaining argument to that original function. This can continue as long as the function needs more arguments. Exactly like normal function application.
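
You can watch this happen in ghci using Maybe as the container. The first application yields a wrapped function, which can then be applied again:

ghci> let partial = pure (+) <*> Just 2
ghci> partial <*> Just 3
Just 5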

A “Concrete” Example

Consider what this might look like if you start with a plain old function that (conceptually) takes more than one argument, but the values that it wants to operate on are wrapped in some container.

-- A normal function
f :: (a -> b -> c)
f = -- ...

-- One contained value, suitable for its first argument
x :: Applicative f => f a
x = -- ...

-- Another contained value, suitable for its second
y :: Applicative f => f b
y = -- ...

How do we pass x and y to f to get some overall result? We wrap the function with pure, then use (<*>) repeatedly:

result :: Applicative f => f c
result = pure f <*> x <*> y

The first portion of that expression is very interesting: pure f <*> x. What is this bit doing? It’s taking a normal function and applying it to a contained value. Wait a second, normal functors know how to do that!

Since in Haskell every Applicative is also a Functor, that means it could be rewritten equivalently as fmap f x, turning the whole expression into fmap f x <*> y.

Never satisfied, Haskell introduced a function called (<$>) which is just fmap but infix. With this alias, we can write:

result = f <$> x <*> y

Not only is this epically concise, but it looks exactly like f x y which is how this code would be written if there were no containers involved. Here we have another, more powerful step towards the goal of writing code that has to deal with some context (in our case, still that container) without actually having to care about that context. You write your function like you normally would, then add (<$>) and (<*>) between the arguments.
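
Again using Maybe as the container, the context (a possibly absent value) is handled for us:

ghci> (+) <$> Just 2 <*> Just 3
Just 5
ghci> (+) <$> Just 2 <*> Nothing
Nothing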

What’s the Point?

With all of this background knowledge, I came to a simple mental model for applicative functors vs monads: Monad is for series where Applicative is for parallel. This has nothing to do with concurrency or evaluation order, this is only a concept I use to judge when a particular abstraction is better suited to the problem at hand.

Let’s walk through a real example.

Building a User

In an application I’m working on, I’m doing OAuth based authentication. My domain has the following (simplified) user type:

data User = User
    { userFirstName :: Text
    , userLastName  :: Text
    , userEmail     :: Text
    }

During the process of authentication, an OAuth endpoint provides me with some profile data which ultimately comes back as an association list:

type Profile = [(Text, Text)]

-- Example:
-- [ ("first_name", "Pat"            )
-- , ("last_name" , "Brisbin"        )
-- , ("email"     , "me@pbrisbin.com")
-- ]

Within this list, I can find user data via the lookup function which takes a key and returns a Maybe value. I had to write the function that builds a User out of this list of profile values. I also had to propagate any Maybe values by returning Maybe User.

First, let’s write this without exploiting the fact that Maybe is a monad or an applicative:

buildUser :: Profile -> Maybe User
buildUser p =
    case lookup "first_name" p of
        Nothing -> Nothing
        Just fn -> case lookup "last_name" p of
            Nothing -> Nothing
            Just ln -> case lookup "email" p of
                Nothing -> Nothing
                Just e  -> Just $ User fn ln e

Oof.

Treating Maybe as a Monad makes this much, much cleaner:

buildUser :: Profile -> Maybe User
buildUser p = do
    fn <- lookup "first_name" p
    ln <- lookup "last_name" p
    e  <- lookup "email" p

    return $ User fn ln e

Up until a few weeks ago, I would’ve stopped there and been extremely proud of myself and Haskell. Haskell for supplying such a great abstraction for potential failed lookups, and myself for knowing how to use it.

Hopefully, the content of this blog post has made it clear that we can do better.

Series vs Parallel

Using Monad means we have the ability to access the values returned by earlier lookup expressions in later ones. That ability is often critical, but not always. In many cases (like here), we do nothing but pass them all as-is to the User constructor “at once” as a last step.

This is Applicative, I know this.

-- f :: a    -> b    -> c    -> d
User :: Text -> Text -> Text -> User

-- x                  :: f     a
lookup "first_name" p :: Maybe Text

-- y                 :: f     b
lookup "last_name" p :: Maybe Text

-- z             :: f     c
lookup "email" p :: Maybe Text

-- result :: f d
-- result = f <$> x <*> y <*> z
buildUser :: Profile -> Maybe User
buildUser p = User
    <$> lookup "first_name" p
    <*> lookup "last_name" p
    <*> lookup "email" p

And now, I understand when to reach for Applicative over Monad. Perhaps you do too?

published on 30 Mar 2014, tagged with haskell, applicative

Writing JSON APIs with Yesod

Lately at work, I’ve been fortunate enough to work on a JSON API which I was given the freedom to write in Yesod. I was a bit hesitant at first since my only Yesod experience has been richer html-based sites and I wasn’t sure what support (if any) there was for strictly JSON APIs. Rails has a number of conveniences for writing concise controllers and standing up APIs quickly – I was afraid Yesod may be lacking.

I quickly realized my hesitation was unfounded. The process was incredibly smooth and Yesod comes with just as many niceties that allow for rapid development and concise code when it comes to JSON-only API applications. Couple this with all of the benefits inherent in using Haskell, and it becomes clear that Yesod is well-suited to sites of this nature.

In this post, I’ll outline the process of building such a site, explain some conventions I’ve landed on, and discuss one possible pitfall when dealing with model relations.

Note: The code in this tutorial was extracted from a current project and is in fact working there. However, I haven’t test-compiled the examples exactly as they appear in the post. It’s entirely possible there are typos and the like. Please reach out on Twitter or via email if you run into any trouble with the examples.

What We Won’t Cover

This post assumes you’re familiar with Haskell and Yesod. It also won’t cover some important but un-interesting aspects of API design. We’ll give ourselves arbitrary requirements and I’ll show only the code required to meet those.

Specifically, the following will not be discussed:

Getting Started

To begin, let’s get a basic Yesod site scaffolded out. How you do this is up to you, but here are my preferred steps:

$ mkdir ./mysite && cd ./mysite
$ cabal sandbox init
$ cabal install alex happy yesod-bin
$ yesod init --bare
$ cabal install --dependencies-only
$ yesod devel

The scaffold comes with a number of features we won’t need. You don’t have to remove them, but if you’d like to, here they are:

Models

For our API example, we’ll consider a site with posts and comments. We’ll keep things simple; additional models or attributes would just mean more lines in our JSON instances or more handlers of the same basic form. This would result in larger examples, but not add any value to the tutorial.

Let’s go ahead and define the models:

config/models

Post
  title Text
  content Text

Comment
  post PostId
  content Text

JSON

It’s true that we can add a json keyword in our model definition and get derived ToJSON/FromJSON instances for free on all of our models; we won’t do that though. I find these JSON instances, well, ugly. You’ll probably want your JSON to conform to some conventional format, be it jsonapi or Active Model Serializers. Client-side frameworks like Ember or Angular will have better built-in support if your API conforms to something conventional. Writing the instances by hand is also more transparent and easily customized later.

Since what we do doesn’t much matter, only that we do it, I’m going to write JSON instances and endpoints to appear as they would in a Rails project using Active Model Serializers.

Model.hs

share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    $(persistFileWith lowerCaseSettings "config/models")

-- { "id": 1, "title": "A title", "content": "The content" }
instance ToJSON (Entity Post) where
    toJSON (Entity pid p) = object
        [ "id"      .= (String $ toPathPiece pid)
        , "title"   .= postTitle p
        , "content" .= postContent p
        ]

instance FromJSON Post where
    parseJSON (Object o) = Post
        <$> o .: "title"
        <*> o .: "content"

    parseJSON _ = mzero

-- { "id": 1, "post_id": 1, "content": "The comment content" }
instance ToJSON (Entity Comment) where
    toJSON (Entity cid c) = object
        [ "id"      .= (String $ toPathPiece cid)
        , "post_id" .= (String $ toPathPiece $ commentPost c)
        , "content" .= commentContent c
        ]

-- We'll talk about this later
--instance FromJSON Comment where

Routes and Handlers

Let’s start with a RESTful endpoint for posts:

config/routes

/posts         PostsR GET POST
/posts/#PostId PostR  GET PUT DELETE

Since our API should return proper status codes, let’s add the required functions to Import.hs, making them available everywhere:

Import.hs

import Network.HTTP.Types as Import
    ( status200
    , status201
    , status400
    , status403
    , status404
    )

Next we write some handlers:

Handlers/Posts.hs

getPostsR :: Handler Value
getPostsR = do
    posts <- runDB $ selectList [] [] :: Handler [Entity Post]

    return $ object ["posts" .= posts]

postPostsR :: Handler ()
postPostsR = do
    post <- requireJsonBody :: Handler Post
    _    <- runDB $ insert post

    sendResponseStatus status201 ("CREATED" :: Text)

You’ll notice we need to add a few explicit type annotations. Normally, Haskell can infer everything for us, but in this case the reason for the annotations is actually pretty interesting. The selectList function will return any type that’s persistable. Normally we would simply treat the returned records as a particular type and Haskell would say, “Aha! You wanted a Post” and then, as if by time travel, selectList would give us appropriate results.

In this case, all we do with the returned posts is pass them to object. Since object can work with any type that can be represented as JSON, Haskell doesn’t know which type we mean. We must remove the ambiguity with a type annotation somewhere.
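
If you’d rather not annotate the binding as a whole, the ambiguity can also be resolved inline on the filter list (a stylistic alternative, assuming Persistent’s Filter type is in scope):

getPostsR :: Handler Value
getPostsR = do
    posts <- runDB $ selectList ([] :: [Filter Post]) []

    return $ object ["posts" .= posts]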

Handlers/Post.hs

getPostR :: PostId -> Handler Value
getPostR pid = do
    post <- runDB $ get404 pid

    return $ object ["post" .= (Entity pid post)]

putPostR :: PostId -> Handler Value
putPostR pid = do
    post <- requireJsonBody :: Handler Post

    runDB $ replace pid post

    sendResponseStatus status200 ("UPDATED" :: Text)

deletePostR :: PostId -> Handler Value
deletePostR pid = do
    runDB $ delete pid

    sendResponseStatus status200 ("DELETED" :: Text)

I love how functions like get404 and requireJsonBody allow these handlers to be completely free of any error-handling concerns, but still be safe and well-behaved.

Comment Handlers

There’s going to be a small annoyance in our comment handlers which I alluded to earlier by omitting the FromJSON instance on Comment. Before we get to that, let’s take care of the easy stuff:

config/routes

/posts/#PostId/comments            CommentsR GET POST
/posts/#PostId/comments/#CommentId CommentR  GET PUT DELETE

Handlers/Comments.hs

getCommentsR :: PostId -> Handler Value
getCommentsR pid = do
    comments <- runDB $ selectList [CommentPost ==. pid] []

    return $ object ["comments" .= comments]

-- We'll talk about this later
--postCommentsR :: PostId -> Handler ()

For the single-resource handlers, we’re going to assume that a CommentId is unique across posts, so we can ignore the PostId in these handlers.

Handlers/Comment.hs

getCommentR :: PostId -> CommentId -> Handler Value
getCommentR _ cid = do
    comment <- runDB $ get404 cid

    return $ object ["comment" .= (Entity cid comment)]

-- We'll talk about this later
--putCommentR :: PostId -> CommentId -> Handler ()

deleteCommentR :: PostId -> CommentId -> Handler ()
deleteCommentR _ cid = do
    runDB $ delete cid

    sendResponseStatus status200 ("DELETED" :: Text)

Handling Relations

Up until now, we’ve been able to define JSON instances for our model, use requireJsonBody, and insert the result. In this case however, the request body will be lacking the Post ID (since it’s in the URL). This means we need to parse a different but similar data type from the JSON, then use that and the URL parameter to build a Comment.

Helpers/Comment.hs

-- This datatype would be richer if Comment had more attributes. For now 
-- we only have to deal with content, so I can use a simple newtype.
newtype CommentAttrs = CommentAttrs Text

instance FromJSON CommentAttrs where
    parseJSON (Object o) = CommentAttrs <$> o .: "content"
    parseJSON _          = mzero

toComment :: PostId -> CommentAttrs -> Comment
toComment pid (CommentAttrs content) = Comment
    { commentPost    = pid
    , commentContent = content
    }

This may seem a bit verbose and even redundant, and there’s probably a more elegant way to get around this situation. Lacking that, I think the additional safety (vs the obvious solution of making commentPost a Maybe) and separation of concerns (vs putting this in the model layer) is worth the extra typing. It’s also very easy to use:

Handlers/Comments.hs

import Helpers.Comment

postCommentsR :: PostId -> Handler ()
postCommentsR pid = do
    _ <- runDB . insert . toComment pid =<< requireJsonBody

    sendResponseStatus status201 ("CREATED" :: Text)

Handlers/Comment.hs

import Helpers.Comment

putCommentR :: PostId -> CommentId -> Handler ()
putCommentR pid cid = do
    runDB . replace cid . toComment pid =<< requireJsonBody

    sendResponseStatus status200 ("UPDATED" :: Text)

We don’t need a type annotation on requireJsonBody in this case. Since the result is being passed to toComment pid, Haskell knows we want a CommentAttrs and uses its parseJSON function within requireJsonBody.

Conclusion

With a relatively small amount of time and code, we’ve written a fully-featured JSON API using Yesod. I think the JSON instances and API handlers are more concise and readable than the analogous Rails serializers and controllers. Our system is also far safer thanks to the type system and framework-provided functions like get404 and requireJsonBody without us needing to explicitly deal with any of that.

I hope this post has shown that Yesod is indeed a viable option for projects of this nature.

published on 22 Feb 2014, tagged with haskell, yesod

Random Numbers without Mutation

In lecture 5A of Structure & Interpretation of Computer Programs, Gerald Sussman introduces the idea of assignments, side effects and state. Before that, they had been working entirely in purely functional Lisp which could be completely evaluated and reasoned about using the substitution model. He states repeatedly that this is a horrible thing as it requires a far more complex view of programs. At the end of the lecture, he shows a compelling example of why we must introduce this horrible thing anyway; without it, we cannot decouple parts of our algorithms cleanly and would be reduced to huge single-function programs in some critical cases.

The example chosen in SICP is estimating π using Cesaro’s method. The method states that the probability that any two random numbers’ greatest common divisor equals 1 is itself equal to 6/π².

Since I know Ruby better than Lisp (and I’d venture my readers do too), here’s a ported version:

def estimate_pi(trials)
  p = monte_carlo(trials) { cesaro }

  Math.sqrt(6 / p)
end

def cesaro
  rand.gcd(rand) == 1
end

def monte_carlo(trials, &block)
  iter = ->(trials, passed) do
    if trials == 0
      passed
    else
      if block.call
        iter.call(trials - 1, passed + 1)
      else
        iter.call(trials - 1, passed)
      end
    end
  end

  iter.call(trials, 0) / trials.to_f
end

I’ve written this code to closely match the Lisp version which used a recursive iterator. Unfortunately, this means that any reasonable number of trials will exhaust Ruby’s stack limit.

The code above also assumes a rand function which will return different random integers on each call. To do so, it must employ mutation and hold internal state:

def rand
  @x ||= random_init
  @x   = random_update(@x)

  @x
end

Here I assume the same primitives as Sussman does, though it wouldn’t be difficult to wrap Ruby’s built-in rand to return integers instead of floats. The important thing is that this function needs to hold onto the previously returned random value in order to provide the next.

Sussman states that without this impure rand function, it would be very difficult to decouple the cesaro function from the monte_carlo one. Without utilizing (re)assignment and mutation, we would have to write our estimation function as one giant blob:

def estimate_pi(trials)
  iter = ->(trials, passed, x1, x2) do
    if trials == 0
      passed
    else
      x1_ = random_update(x2)
      x2_ = random_update(x1_)

      if x1.gcd(x2) == 1
        iter.call(trials - 1, passed + 1, x1_, x2_)
      else
        iter.call(trials - 1, passed, x1_, x2_)
      end
    end
  end

  x1 = random_init
  x2 = random_update(x1)

  p = iter.call(trials, 0, x1, x2) / trials.to_f

  Math.sqrt(6 / p)
end

Ouch.

It’s at this point Sussman stops, content with his justification for adding mutability to Lisp. I’d like to explore a bit further: what if remaining pure were non-negotiable? Are there other ways to make decoupled systems and elegant code without sacrificing purity?

RGen

Let’s start with a non-mutating random number generator:

class RGen
  def initialize(seed = nil)
    @seed = seed || random_init
  end

  def next
    x = random_update(@seed)

    [x, RGen.new(x)]
  end
end

def rand(g)
  g.next
end

This allows for the following implementation:

def estimate_pi(trials)
  p = monte_carlo(trials) { |g| cesaro(g) }

  Math.sqrt(6 / p)
end

def cesaro(g)
  x1, g1 = rand(g)
  x2, g2 = rand(g1)

  [x1.gcd(x2) == 1, g2]
end

def monte_carlo(trials, &block)
  iter = ->(trials, passed, g) do
    if trials == 0
      passed
    else
      ret, g_ = block.call(g)

      if ret
        iter.call(trials - 1, passed + 1, g_)
      else
        iter.call(trials - 1, passed, g_)
      end
    end
  end

  iter.call(trials, 0, RGen.new) / trials.to_f
end

We’ve moved out of the single monolithic function, which is a step in the right direction. The additional generator arguments being passed all over the place makes for some readability problems though. The reason for that is a missing abstraction; one that’s difficult to model in Ruby. To clean this up further, we’ll need to move to a language where purity was in fact non-negotiable: Haskell.

In Haskell, the type signature of our current monte_carlo function would be:

monteCarlo :: Int                    -- number of trials
           -> (RGen -> (Bool, RGen)) -- the experiment
           -> Double                 -- result

Within monte_carlo, we need to repeatedly call the block with a fresh random number generator. Calling RGen#next gives us an updated generator along with the next random value, but that must happen within the iterator block. In order to get it out again and pass it into the next iteration, we need to return it. This is why cesaro has the type that it does:

cesaro :: RGen -> (Bool, RGen)

cesaro depends on some external state so it accepts it as an argument. It also affects that state so it must return it as part of its return value. monteCarlo is responsible for creating an initial state and “threading” it through repeated calls to the experiment given. Mutable state is “faked” by passing a return value as argument to each computation in turn.
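
To make that threading concrete, here’s cesaro written directly against rand, passing the generator along by hand (a sketch built only from the signatures above):

cesaro :: RGen -> (Bool, RGen)
cesaro g0 =
    let (x1, g1) = rand g0
        (x2, g2) = rand g1
    in (gcd x1 x2 == 1, g2)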

You’ll also notice this is a similar type signature as our rand function:

rand :: RGen -> (Int, RGen)

This similarity and process is a generic concern which has nothing to do with Cesaro’s method or performing Monte Carlo tests. We should be able to leverage the similarities and separate this concern out of our main algorithm. Monadic state allows us to do exactly that.

RGenState

For the Haskell examples, I’ll be using System.Random.StdGen in place of the RGen class we’ve been working with so far. It is exactly like our RGen class above in that it can be initialized with some seed, and there is a random function with the type StdGen -> (Int, StdGen).

The abstract thing we’re lacking is a way to call those functions successively, passing the StdGen returned from one invocation as the argument to the next invocation, all the while being able to access that a (the random integer or experiment outcome) whenever needed. Haskell has just such an abstraction; it’s in Control.Monad.State.

First we’ll need some imports.

import System.Random
import Control.Monad.State

Notice that we have a handful of functions with similar form.

(StdGen -> (a, StdGen))

What Control.Monad.State provides is a type that looks awfully similar.

data State s a = State { runState :: (s -> (a, s)) }

Let’s declare a type synonym which fixes that s type variable to the state we care about: a random number generator.

type RGenState a = State StdGen a

By replacing the s in State with our StdGen type, we end up with a more concrete type that looks as if we had written this:

data RGenState a = RGenState
    { runState :: (StdGen -> (a, StdGen)) }

And then went on to write all the various instances that make this type useful. By using such a type synonym, we get all those instances and functions for free.

Our first example:

rand :: RGenState Int
rand = state random

We can “evaluate” this action with one of a number of functions provided by the library, all of which require some initial state. runState will literally just execute the function and return the result and the updated state (in case you missed it, it’s just the record accessor for the State type). evalState will execute the function, discard the updated state, and give us only the result. execState will do the inverse: execute the function, discard the result, and give us only the updated state.

We’ll be using evalState exclusively since we don’t care about how the random number generator ends up after these actions, only that it gets updated and passed along the way. Let’s wrap that up in a function that both provides the initial state and evaluates the action.

runRandom :: RGenState a -> a
runRandom f = evalState f (mkStdGen 1)

-- runRandom rand
-- => 7917908265643496962

Unfortunately, the result will be the same every time since we’re using a constant seed. You’ll see soon that this is an easy limitation to address after the fact.

With this bit of glue code in hand, we can re-write our program in a nice modular way without any actual mutable state or re-assignment.

estimatePi :: Int -> Double
estimatePi n = sqrt $ 6 / (monteCarlo n cesaro)

cesaro :: RGenState Bool
cesaro = do
    x1 <- rand
    x2 <- rand

    return $ gcd x1 x2 == 1

monteCarlo :: Int -> RGenState Bool -> Double
monteCarlo trials experiment = runRandom $ do
    outcomes <- replicateM trials experiment

    return $ (length $ filter id outcomes) `divide` trials

  where
    divide :: Int -> Int -> Double
    divide a b = fromIntegral a / fromIntegral b

Even with a constant seed, it works pretty well:

main = print $ estimatePi 1000
-- => 3.149183286488868

And For My Last Trick

It’s easy to fall into the trap of thinking that Haskell’s type system is limiting in some way. The monteCarlo function above can only work with random-number-based experiments? Pretty weak.

Consider the following refactoring:

estimatePi :: Int -> RGenState Double
estimatePi n = do
  p <- monteCarlo n cesaro

  return $ sqrt (6 / p)

cesaro :: RGenState Bool
cesaro = do
  x1 <- rand
  x2 <- rand

  return $ gcd x1 x2 == 1

monteCarlo :: Monad m => Int -> m Bool -> m Double
monteCarlo trials experiment = do
  outcomes <- replicateM trials experiment

  return $ (length $ filter id outcomes) `divide` trials

  where
    divide :: Int -> Int -> Double
    divide a b = fromIntegral a / fromIntegral b

main :: IO ()
main = print $ runRandom $ estimatePi 1000

The minor change made was moving the call to runRandom all the way up to main. This allows us to pass stateful computations throughout our application without ever caring about that state except at this highest level.

This would make it simple to add true randomness (which requires IO) by replacing the call to runRandom with something that pulls entropy in via IO rather than using mkStdGen.

runTrueRandom :: RGenState a -> IO a
runTrueRandom f = do
    s <- newStdGen

    return $ evalState f s

main = print =<< runTrueRandom (estimatePi 1000)

One could even do this conditionally so that your random-based computations became deterministic during tests.

Another important point here is that monteCarlo can now work with any Monad! This makes perfect sense: The purpose of this function is to run experiments and tally outcomes. The idea of an experiment only makes sense if there’s some outside force which might change the results from run to run, but who cares what that outside force is? Haskell don’t care. Haskell requires we only specify it as far as we need to: it’s some Monad m, nothing more.

This means we can run IO-based experiments via the Monte Carlo method with the same monteCarlo function just by swapping out the monad:

What if Cesaro claimed the probability that the current second is an even number is equal to 6/π²? Seems reasonable, let’s model it:

-- same code, different name / type
estimatePiIO :: Int -> IO Double
estimatePiIO n = do
  p <- monteCarlo n cesaroIO

  return $ sqrt (6 / p)

cesaroIO :: IO Bool
cesaroIO = do
  t <- getCurrentTime

  -- getCurrentTime and utctDayTime come from Data.Time. utctDayTime
  -- gives a DiffTime, which we truncate before testing evenness.
  return $ even (floor (utctDayTime t) :: Int)

monteCarlo :: Monad m => Int -> m Bool -> m Double
monteCarlo trials experiment = -- doesn't change at all!

main :: IO ()
main = print =<< estimatePiIO 1000

I find the fact that this expressiveness, generality, and polymorphism can share the same space as the strictness and incredible safety of this type system fascinating.

published on 09 Feb 2014, tagged with haskell

Automated Unit Testing in Haskell

Hspec is a BDD library for writing RSpec-style tests in Haskell. In this post, I’m going to describe setting up a Haskell project using this test framework. What we’ll end up with is a series of tests which can be run individually (at the module level), or all together (as part of packaging). Then I’ll briefly mention Guard (a Ruby tool) and how we can use that to automatically run relevant tests as we change code.

Project Layout

For any of this to work, our implementation and test modules must follow a particular layout:

Code/liquid/
├── src
│   └── Text
│       ├── Liquid
│       │   ├── Context.hs
│       │   ├── Parse.hs
│       │   └── Render.hs
│       └── Liquid.hs
└── test
    ├── SpecHelper.hs
    ├── Spec.hs
    └── Text
        └── Liquid
            ├── ParseSpec.hs
            └── RenderSpec.hs

Notice that for each implementation module (under ./src) there is a corresponding spec file at the same relative path (under ./test) with a consistent, conventional name (<ModuleName>Spec.hs). For this post, I’m going to outline the first few steps of building the Parse module of the above source tree which happens to be my liquid library, a Haskell implementation of Shopify’s template system.

Hspec Discover

Hspec provides a useful preprocessor called hspec-discover. If your project follows the conventional layout above, you can simply create a file like so:

test/Spec.hs

{-# OPTIONS_GHC -F -pgmF hspec-discover #-}

And when that file is executed, all of your specs will be found and run together as a single suite.
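
Conceptually, that single line expands into a Main module which imports each discovered spec and runs them together. For our layout, the generated code would look roughly like this (a sketch, not the preprocessor’s exact output):

module Main where

import Test.Hspec

import qualified Text.Liquid.ParseSpec
import qualified Text.Liquid.RenderSpec

main :: IO ()
main = hspec $ do
    Text.Liquid.ParseSpec.spec
    Text.Liquid.RenderSpec.spec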

SpecHelper

I like to create a central helper module which gets imported into all specs. It simply exports our test framework and implementation code:

test/SpecHelper.hs

module SpecHelper
    ( module Test.Hspec
    , module Text.Liquid.Parse
    ) where

import Test.Hspec
import Text.Liquid.Parse

This file might not seem worth it now, but as you add more modules, it becomes useful quickly.

Baby’s First Spec

test/Text/Liquid/ParseSpec.hs

module Text.Liquid.ParseSpec where

import SpecHelper

spec :: Spec
spec = do
    describe "Text.Liquid.Parse" $ do
        context "Simple text" $ do
            it "parses exactly as-is" $ do
                let content = "Some simple text"

                parseTemplate content `shouldBe` Right [TString content]

main :: IO ()
main = hspec spec

With this first spec, I’ve already made some assumptions and design decisions.

The API into our module will be a single parseTemplate function which returns an Either type (commonly used to represent success or failure). The Right value (conventionally used for success) will be a list of template parts. One such part can be constructed with the TString function and is used to represent literal text with no interpolation or logic. This is the simplest template part possible and is therefore a good place to start.

The spec function is what will be found by hspec-discover and rolled up into a project-wide test. I’ve also added a main function which just runs said spec. This allows me to easily run the spec in isolation, which you should do now:

$ runhaskell -isrc -itest test/Text/Liquid/ParseSpec.hs

The first error you should see is an inability to find Test.Hspec. Go ahead and install it:

$ cabal install hspec

You should then get a similar error for Text.Liquid.Parse, then some more about functions and types that are not yet defined. Let’s go ahead and implement just enough to get past that:

src/Text/Liquid/Parse.hs

module Text.Liquid.Parse where

type Template = [TPart]

data TPart = TString String deriving (Eq, Show)

parseTemplate :: String -> Either String Template
parseTemplate = undefined

The test should run now and give you a nice red failure due to the attempted evaluation of undefined.

Since implementing Parse is not the purpose of this post, I won’t be moving forward in that direction. Instead, I’m going to show you how to set this library up as a package which can be cabal installed and/or cabal tested by end-users.

For now, you can pass the test easily like so:

src/Text/Liquid/Parse.hs

parseTemplate :: String -> Either String Template
parseTemplate str = Right [TString str]

For TDD purists, this is actually the correct thing to do here: write the simplest implementation to pass the test (even if you “know” it’s not going to last), then write another failing test to force you to implement a little more. I don’t typically subscribe to that level of TDD purity, but I can see the appeal.

Cabal

We’ve already got Spec.hs which, when executed, will run all our specs together:

$ runhaskell -isrc -itest test/Spec.hs

We just need to wire that into the Cabal packaging system:

liquid.cabal

name:          liquid
version:       0.0.0
license:       MIT
copyright:     (c) 2013 Pat Brisbin
author:        Pat Brisbin <pbrisbin@gmail.com>
maintainer:    Pat Brisbin <pbrisbin@gmail.com>
build-type:    Simple
cabal-version: >= 1.8

library
  hs-source-dirs: src

  exposed-modules: Text.Liquid.Parse

  build-depends: base == 4.*

test-suite spec
  type: exitcode-stdio-1.0

  hs-source-dirs: test

  main-is: Spec.hs

  build-depends: base  == 4.*
               , hspec >= 1.3
               , liquid

With this in place, testing our package is simple:

$ cabal configure --enable-tests
...
$ cabal build
...
$ cabal test
Building liquid-0.0.0...
Preprocessing library liquid-0.0.0...
In-place registering liquid-0.0.0...
Preprocessing test suite 'spec' for liquid-0.0.0...
Linking dist/build/spec/spec ...
Running 1 test suites...
Test suite spec: RUNNING...
Test suite spec: PASS
Test suite logged to: dist/test/liquid-0.0.0-spec.log
1 of 1 test suites (1 of 1 test cases) passed.

Guard

Another thing I like to setup is the automatic running of relevant specs as I change code. To do this, we can use a tool from Ruby-land called Guard. Guard is a great example of a simple tool doing one thing well. All it does is watch files and execute actions based on rules defined in a Guardfile. Through plugins and extensions, there are a number of pre-built solutions for all sorts of common needs: restarting servers, regenerating ctags, or running tests.

We’re going to use guard-shell which is a simple extension allowing for running shell commands and spawning notifications.

$ gem install guard-shell

Next, create a Guardfile:

Guardfile

# Runs the command and prints a notification
def execute(cmd)
  if system(cmd)
    n 'Build succeeded', 'hspec', :success
  else
    n 'Build failed', 'hspec', :failed
  end
end

def run_all_tests
  execute %{
    cabal configure --enable-tests &&
    cabal build && cabal test
  }
end

def run_tests(mod)
  specfile = "test/#{mod}Spec.hs"

  if File.exists?(specfile)
    files = [specfile]
  else
    files = Dir['test/**/*.hs']
  end

  execute "ghc -isrc -itest -e main #{files.join(' ')}"
end

guard :shell do
  watch(%r{.*\.cabal$})          { run_all_tests }
  watch(%r{test/SpecHelper.hs$}) { run_all_tests }
  watch(%r{src/(.+)\.hs$})       { |m| run_tests(m[1]) }
  watch(%r{test/(.+)Spec\.hs$})  { |m| run_tests(m[1]) }
end

Much of this Guardfile comes from this blog post by Michael Xavier. His version also includes cabal sandbox support, so be sure to check it out if that interests you.

If you like to bundle all your Ruby gems (and you probably should) that can be done easily, just see my main liquid repo as that’s how I do things there.

In one terminal, start guard:

$ guard

Finally, simulate an edit in your module and watch the test automatically run:

$ touch src/Text/Liquid/Parse.hs

And there you go, fully automated unit testing in Haskell.

published on 01 Dec 2013, tagged with testing, haskell, cabal, hunit, ruby, guard

Using Notify-OSD for XMonad Notifications

In my continuing efforts to strip my computing experience of any non-essential parts, I’ve decided to ditch my statusbars. My desktop is now solely a grid of tiled terminals (and a browser). It’s quite nice. The only thing I slightly missed, however, was notifications when one of my windows set Urgency. This used to trigger a bright yellow color for that workspace in my dzen-based statusbar.

A Brief Tangent:

Windows have these properties called “hints” which they can set on themselves at will. These properties can be read by Window Managers in an effort to do the Right Thing. Hints are how a Window tells the Manager, “Hey, I should be full-screen” or, “I’m a dialog, float me on top of everything”. One such hint is WM_URGENT.

WM_URGENT is how windows get your attention. It’s what makes them flash in your task bar or bounce in your dock. If you’re using a sane terminal, it should set WM_URGENT on itself if the program running within it triggers a “bell”.

By telling applications like mutt or weechat to print a bell when I get new email or someone nick-highlights me, I can easily get notifications of these events even from applications that are running within screen, in an ssh session, on some server far, far away. Pretty neat.

Now that I’m without a status bar, I need to be notified some other way. Enter Notify-OSD.

Notify-OSD

Notify-OSD is part of the desktop notification system of GNOME, but it can be installed standalone and used to send notifications from the command-line very easily:

$ notify-send "A title" "A message"

So how do we get XMonad to send a useful notification via notify-send whenever a window sets the WM_URGENT hint? Enter the UrgencyHook.

UrgencyHook

Setting a custom urgency hook is very easy, but not exactly intuitive. What we’re actually doing is declaring a custom data type, then making it an instance of the UrgencyHook typeclass. The single required function to be a member of this typeclass is an action which will be run whenever a window sets urgency. Conveniently, it’s given the window with urgency as an argument. We can use this to format our notification.

First off, add the module imports we’ll need:

import XMonad.Hooks.UrgencyHook
import XMonad.Util.NamedWindows
import XMonad.Util.Run

import qualified XMonad.StackSet as W

Then make that custom datatype and instance:

data LibNotifyUrgencyHook = LibNotifyUrgencyHook deriving (Read, Show)

instance UrgencyHook LibNotifyUrgencyHook where
    urgencyHook LibNotifyUrgencyHook w = do
        name     <- getName w
        Just idx <- fmap (W.findTag w) $ gets windowset

        safeSpawn "notify-send" [show name, "workspace " ++ idx]

Finally, update main like so:

main :: IO ()
main = xmonad
     $ withUrgencyHook LibNotifyUrgencyHook
     $ defaultConfig
        { -- ...
        , -- ...
        }

To test this, open a terminal in some workspace and type:

$ sleep 3 && printf "\a"

Then immediately focus away from that workspace. In a few seconds, you should see a nice pop-up like:

(screenshot: a notify-send pop-up titled with the window name, showing the workspace in its message)

You can see the title of the notification is the window name and I use the message to tell me the workspace number. In this case, the name is the default “urxvt” for a terminal window, but I also use a few wrapper scripts to open urxvt with the -n option to set its name to something specific which will then come through in any notifications from that window.
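
For example, a hypothetical wrapper might launch mutt in a terminal named after it:

$ urxvt -n mutt -e mutt

Any bell in that window then produces a notification titled “mutt”.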

If that doesn’t work, it’s likely your terminal doesn’t set Urgency on bells. For rxvt at least, the setting is:

URxvt*urgentOnBell: true
URxvt*visualBell:   false

In Xresources or Xdefaults, whichever you use.

published on 15 Oct 2013, tagged with xmonad, haskell, notify-osd

On Staticness

For almost 7 years now, I’ve had a desktop at home running, serving (among many things) my personal blog. Doing so is how I learned much of what I now know about programming and system administration. It gave me a reason to learn HTML, then PHP, then finally Haskell. It taught me Postgres, Apache, then lighttpd, then nginx. Without maintaining this site myself, on my own desktop, I doubt I would’ve been sucked into these things and I may not have ended up where I am today.

However, I’m now a happily employed Developer and I do these things all day on other people’s machines and sites. Don’t get me wrong, I enjoy it all very much, but the educational value of maintaining my personal blog as a locally-hosted web-app is just not there any more. With that value gone, things like power outages, hard drive failures, Comcast, etc. which bring my site down become unacceptable. It’s too easy to have something which requires almost no maintenance while still giving me the control and work-flow I want.

I realize I could’ve moved my site as-is to a VPS and no longer been at the whim of Comcast and NSTAR, but that wouldn’t decrease the maintenance burden enough. Contrast pretty much any typical web-app ecosystem with…

The services now required to host my blog:

nginx

The configuration required to host my blog:

$ wc -l < /etc/nginx/nginx.conf
19

Adding a new post:

$ cat > posts/2013-09-21-awesome_post.md <<EOF
---
title: Awesome Post
tags: some, tags
---

Pretty *awesome*.

EOF

Deployment:

$ jekyll build && rsync -a -e ssh _site/ pbrisbin.com:/srv/http/site/

Backups:

$ tar czf ~/site.backup _site

Comments

Unfortunately, comments aren’t easy to do with a static site (at least not without something like Disqus, but meh). To all those that have commented on this site in the past, I apologize. That one feature is just not worth maintaining a dynamic blog-as-web-app.

When considering this choice, I discovered that the comments on this site fell into one of three categories:

  1. Hey, nice post!
  2. Hey, here’s a correction
  3. Hey, here’s something additional about this topic

These are all useful things, but there’s never any real discussion going on between commenters; it’s all just notes to me. So I’ve decided to let these come in as emails. My hope is that folks who might’ve commented are OK sending those thoughts in an email. The address is in the footer, pretty much where you’d expect a Comments section to be. I’ll make sure that any corrections or additional info sent via email make it back into the main content of the post.

Pandoc

At some point during this process, I realized that I simply can’t convert my post markdown to html without pandoc. Every single markdown implementation I’ve found gets the following wrong:

<div class="something">
I want this content to **also** be parsed as markdown.
</div>

Pandoc does it right. Everything else puts the literal text inside the div. This breaks all my posts horribly because I’ll frequently do something like:

This is in a div with class="well", and the content inside is still markdown.

I had assumed that to get pandoc support I’d have to use Hakyll, but (at least from the docs) it seemed to be missing tags and next/previous link support. It appears extensible enough that I might code that in custom, but again, I’m trying to decrease overall effort here. Jekyll, on the other hand, had those features already and let me use pandoc easily by dropping a small ruby file in _plugins.

Update: I did eventually move this blog to Hakyll.

I figure if I want to be a Haskell evangelist, I really shouldn’t be using a Ruby site generator when such a good Haskell option exists. Also, tags are now supported and adding next/previous links myself wasn’t very difficult.

With the conversion complete, I was able to shut down a bunch of services on my desktop and even cancel a dynamic DNS account. At $5/month, the Digital Ocean VPS is a steal. The site’s faster, more reliable, easier to deploy, and even got a small facelift.

Hopefully the loss of Comments doesn’t upset any readers. I love email, so please send those comments to me at pbrisbin dot com.

published on 21 Sep 2013, tagged with meta, system, jekyll, pandoc

Mocking Bash

Have you ever wanted to mock a program on your system so you could write fast and reliable tests around a shell script which calls it? Yeah, I didn’t think so.

Well I did, so here’s how I did it.

Cram

Verification testing of shell scripts is surprisingly easy. Thanks to Unix, most shell scripts have limited interfaces with their environment. Assertions against stdout can often be enough to verify a script’s behavior.

One tool that makes these kinds of executions and assertions easy is cram.

Cram’s mechanics are very simple. You write a test file like this:

The ls command should print one column when passed -1

  $ mkdir foo
  > touch foo/bar
  > touch foo/baz

  $ ls -1 foo
  bar
  baz

Any line beginning with an indented $ is executed (with > allowing multi-line commands). The indented text below such commands is compared with the actual output at that point. If it doesn’t match, the test fails and a contextual diff is shown.

With this philosophy, retrofitting tests onto an already working script is incredibly easy. You just put in a command, run the test, then insert whatever the actual output was as the assertion. Cram’s --interactive flag is meant for exactly this. Aces.
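That workflow looks something like this (the test file name here is just an example):

$ cram --interactive test/main.t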

Not Quite

Suppose your script calls a program internally whose behavior depends on transient things which are outside of your control. Maybe you call curl, which of course depends on the state of the internet between you and the server you’re accessing. With the output changing between runs, these tests become more trouble than they’re worth.

What’d be really great is if I could do the following:

  1. Intercept calls to the program
  2. Run the program normally, but record “the response”
  3. On subsequent invocations, just replay the response and don’t call the program

This means I could run the test suite once, letting it really call the program, but record the stdout, stderr, and exit code of the call. The next time I run the test suite, nothing would actually happen. The recorded response would be replayed instead; my script wouldn’t know the difference, and everything would pass reliably and instantly.

In case you didn’t notice, this is VCR.

The only limitation here is that a mock must be completely effective while only mimicking the stdout, stderr, and exit code of what it’s mocking. A command that, for example, creates files which are used by other parts of the script could not be mocked this way.

Mucking with PATH

One way to intercept calls to executables is to prepend a directory we control to $PATH. Files placed in this leading directory will be found first in command lookups, allowing us to handle the calls.

I like to write my cram tests so that the first thing they do is source a test/helper.sh, so this makes a nice place to do such a thing:

test/helper.sh

export PATH="$TESTDIR/..:$TESTDIR/bin:$PATH"

This ensures that a) the executable in the source directory is used and b) anything in test/bin will take precedence over system commands.

Now all we have to do to mock foo is add a test/bin/foo which will be executed whenever our Subject Under Test calls foo.

Record/Replay

The logic of what to do in a mock script is straightforward:

  1. Build a unique identifier for the invocation
  2. Look up a stored “response” by that identifier
  3. If not found, run the program and record said response
  4. Reply with the recorded response to satisfy the caller

We can easily abstract this in a short, generic proxy:

test/bin/act-like

#!/usr/bin/env bash
program="$1"; shift
base="${program##*/}"

# one fixture directory per unique invocation, keyed by a hash of the arguments
fixtures="${TESTDIR:-test}/fixtures/$base/$(echo "$*" | md5sum | cut -d ' ' -f 1)"

if [[ ! -d "$fixtures" ]]; then
  # no recording yet: call the real program and capture its response
  mkdir -p "$fixtures"
  "$program" "$@" >"$fixtures/stdout" 2>"$fixtures/stderr"
  echo $? > "$fixtures/exit_code"
fi

# replay the recorded response
cat "$fixtures/stdout"
cat "$fixtures/stderr" >&2

read -r exit_code < "$fixtures/exit_code"

exit "$exit_code"

With this in hand, we can record any invocation of anything we like (so long as we only need to mimic the stdout, stderr, and exit code).

test/bin/curl

#!/usr/bin/env bash
act-like /usr/bin/curl "$@"

test/bin/makepkg

#!/usr/bin/env bash
act-like /usr/bin/makepkg "$@"

test/bin/pacman

#!/usr/bin/env bash
act-like /usr/bin/pacman "$@"

Success!

After my next test run, I find the following:

$ tree test/fixtures
test/fixtures
├── curl
│   ├── 008f2e64f6dd569e9da714ba8847ae7e
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── 2c5906baa66c800b095c2b47173672ba
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── c50061ffc84a6e1976d1e1129a9868bc
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── f38bb573029c69c0cdc96f7435aaeafe
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   ├── fc5a0df540104584df9c40d169e23d4c
│   │   ├── exit_code
│   │   ├── stderr
│   │   └── stdout
│   └── fda35c202edffac302a7b708d2534659
│       ├── exit_code
│       ├── stderr
│       └── stdout
├── makepkg
│   └── 889437f54f390ee62a5d2d0347824756
│       ├── exit_code
│       ├── stderr
│       └── stdout
└── pacman
    └── af8e8c81790da89bc01a0410521030c6
        ├── exit_code
        ├── stderr
        └── stdout

11 directories, 24 files

Each hash-directory, representing one invocation of the given program, contains the full response in the form of stdout, stderr, and exit_code files.

I run my tests again. This time, rather than calling any of the actual programs, the mocks find and replay the recorded responses. The tests pass instantly.

published on 24 Aug 2013, tagged with bash, testing, mocks, cram, aurget, arch

Email Encryption

The recent hullabaloo with Snowden and the NSA is very scary. I agree with most Americans that The Government is doing some pretty evil things these days. That said, I also believe that we as cloud users are primarily responsible for the privacy of our own data. Thankfully, the problem of transmitting or storing data via a 3rd party without granting that party access to said data has long been solved.

What follows is a high-level walk-through of one such example of securing your own privacy when it comes to cloud-based communications: encrypted email using GnuPG and Mutt.

This is mainly a regurgitation of this and this, so I recommend you check out those resources as well.

Signing vs Encrypting

We’ll be adding two features to our email repertoire: Signing, which we can do all the time, and Encrypting, which we can only do if the person with whom we’re communicating also supports it.

Signing a message is a way to prove that the message actually came from you. The process works by including an attachment which has been cryptographically signed using your private key. The recipient can then use your public key to verify that signature. Successful verification doesn’t prove the message came from you per se, but it does prove that it came from someone who has access to your private key.

Encryption, on the other hand, is a way to send a message which only the intended recipient can read. To accomplish this, the sender encrypts the message using the recipient’s public key. This means that only someone in possession of the corresponding private key (i.e. the recipient themselves) can decrypt and read the message.

How Do I Encryption?

The first step is generating your Key Pair:

$ gpg --gen-key

The prompts are fairly self-explanatory. I suggest choosing a one-year expiration, and be sure to give it a strong pass-phrase. After this has finished, take note of your Key ID, which is the value after the slash in the following output:

$ gpg --list-keys
/home/patrick/.gnupg/pubring.gpg
--------------------------------
pub   2048R/CEC8925D 2013-08-16 [expires: 2014-08-16]
uid                  Patrick Brisbin <pbrisbin@gmail.com>
sub   2048R/33868FEC 2013-08-16 [expires: 2014-08-16]

For example, my Key ID is CEC8925D.

The next step is to put your public key on a key server so anyone can find it when they wish to verify your signatures or send you encrypted messages:

$ gpg --keyserver hkp://subkeys.pgp.net --send-keys <Key ID>

At this point we have all we would need to manually use the gpg command to encrypt or decrypt documents, but that makes for a clumsy emailing process. Instead, we’re going to tell Mutt how to execute these commands for us as they’re needed.
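For the curious, the manual process we’re avoiding looks something like this (the address and file name here are made up):

$ gpg --encrypt --armor --recipient friend@example.com secret.txt
$ gpg --decrypt secret.txt.asc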

Mutt ships with a sample configuration file which specifies the various crypto-related commands for using GnuPG. Since I have no need to tweak these settings, I just source this sample file as-is, then go on to set only the options I care about:

source /usr/share/doc/mutt/samples/gpg.rc

set pgp_timeout = 3600       # how long to cache the pass-phrase

set crypt_autosign = yes     # automatically sign all outgoing mail

set crypt_replyencrypt = yes # automatically encrypt replies to 
                             # encrypted messages

set pgp_sign_as = CEC8925D   # my Key ID

That’s it – you’re all set to start having fully encrypted conversations.

Try It Out

To confirm everything is working, restart Mutt and compose a test message to yourself. When you get to the compose view (after quitting vim), you should see something like the following:

Security: Sign (PGP/MIME)
 sign as: CEC8925D

This confirms that auto-signing is working and it’s using the correct key.

Press p to enter the (p)gp menu. This menu allows you to remove or modify the security-related things you’re planning on doing with this email. We’ll choose b to (b)oth sign and encrypt this message.

Upon receiving the test message, the body should look like this:

[-- PGP output follows (current time: Tue 20 Aug 2013 04:14:20 PM EDT) --]
gpg: Signature made Fri 16 Aug 2013 11:02:51 AM EDT using RSA key ID CEC8925D
gpg: Good signature from "Patrick Brisbin <pbrisbin@gmail.com>"
[-- End of PGP output --]

[-- The following data is PGP/MIME encrypted --]

Test

--
patrick brisbin

[-- End of PGP/MIME encrypted data --]

You can see here the message signature was verified and the body came in as encrypted and was successfully decrypted and presented to us by Mutt. This means just about everything’s working. To test the final piece, go ahead and reply to this message. Back in the compose view, you should see this:

Security: Sign, Encrypt (PGP/MIME)
 sign as: CEC8925D

This confirms the last piece of the puzzle: replies to encrypted messages are automatically encrypted as well.

Hopefully, this post has shown just how easy it is to have secure, private communication. And you don’t even have to ditch Gmail! All you need is a decent client and a little bit of setup. Now send me some encrypted secrets!
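If you’d like to send one, you can fetch my public key from the same key server first:

$ gpg --keyserver hkp://subkeys.pgp.net --recv-keys CEC8925D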

published on 20 Aug 2013, tagged with mutt, encryption, gpg

The Advent of IO

What if we wanted to write a Haskell program to behave something like this:

$ runhaskell hello.hs
Hello who?

$ runhaskell hello.hs Pat
Hello Pat

$ runhaskell hello.hs -u Pat
Hello PAT

One implementation may look like this:

import Data.Char (toUpper)
import System.Environment (getArgs)

main :: IO ()
main = do
    args <- getArgs

    let name = case args of
                ("-u":n:_) -> map toUpper n
                (     n:_) -> n
                _          -> "who?"

    putStrLn $ "Hello " ++ name

And almost immediately, the budding Haskell programmer is met with a number of confusing concepts: What the heck is IO ()? What does <- mean? When questions like these are raised, the answer is “well, because Monad.” Not very enlightening.

Haskell’s IO monad is an amazingly elegant solution to a very thorny problem, but why is it so hard to wrap one’s head around? I think the reason it can be so confusing is that we come at it backwards: we see this elegant result but know not the problem it solves.

In the Beginning

In the very early days of Haskell, there was no IO monad. Instead, programs used a somewhat confusing [Response] -> [Request] model (some details can be found here).

It was clear that if Haskell were to become generally useful, there had to be something better, something that allowed more intuitive interactions with the outside world. The problem was extending this idea of a globally accessible Outside World without sacrificing the purity of the program.

Recently, while pondering the State monad, I had an epiphany about how the problem was solved: Every function is still pure.

How is this possible? Well, first we have to look at IO actions as we would any other form of stateful computation. Then we just have to prove to ourselves that stateful computations can be done in a pure way.

Take a program like this:

main :: IO ()
main = doTheThing

doTheThing :: IO ()
doTheThing = do
    putStrLn "one"
    putStrLn "two"

It’s common to refer to these functions as impure and having side effects. We look at an imperative-looking line like putStrLn "one" and assume that the function is “reaching out” and affecting the outside world by printing text to some terminal it has not received as a direct input, and is therefore impure.

This mis-characterization isn’t itself bad; we do need a way to differentiate Haskell functions which “live in IO” from those that don’t. Pure vs impure seems like a good enough distinction, but it’s not entirely correct and can lead folks astray when more complex concepts are introduced.

Imagine if we instead wrote the program like this:

main :: World -> (World, ())
main world = doTheThing world

putStrLn :: String -> World -> (World, ())
putStrLn str world = appendText (str ++ "\n") (terminal world)

doTheThing :: World -> (World, ())
doTheThing world =
    let (world1, _) = (putStrLn "one") world
        (world2, _) = (putStrLn "two") world1

    in (world2, ())

I’ve purposely left appendText undefined and not told you what World is, but you can still confirm that these functions act only on their direct inputs, thus remaining completely pure. If we accept that there is some notion of a World to which we can appendText provided by the Haskell language, then the above is a completely accurate de-sugaring of the original program.

To further explore this idea, I went through the mental exercise of building the IO monad myself by substituting my own World into the confines of a very simple alternate main syntax.

I hope you’ll find it as illustrative as I did.

Limiting Main.main

Let’s pretend that Haskell is in its infancy and the designers have punted on the idea of IO. They’ve chosen instead to flesh out the rest of the language with vastly simpler semantics for a program’s main.

In this hypothetical language, a program’s main function is of the type [String] -> String. When executed, the Haskell runtime will provide the program’s commandline arguments to your main function as a list of Strings. Whatever String your main function returns will then be printed on stdout.

Let’s try out this language on our sample problem:

import Data.Char (toUpper)

main1 :: [String] -> String
main1 args = sayHello1 args

sayHello1 :: [String] -> String
sayHello1 args = "Hello " ++ (nameFromArgs1 args)

nameFromArgs1 :: [String] -> String
nameFromArgs1 ("-u":name:_) = map toUpper name
nameFromArgs1 (     name:_) = name
nameFromArgs1            _  = "who?"

Obviously things could be done simpler, but I’ve purposely written it using two functions: one which requires access to program input and one which affects program output. This will make our exercise much more interesting as we move toward monadic IO.

Our current method of passing everything that’s needed as direct arguments and getting back anything that’s needed as direct results works well for simple cases, but it doesn’t scale. When we consider that the input to and output of main might eventually be a rich object representing the entire outside world (file handles, TCP sockets, environment variables, etc), it becomes clear that passing these resources down into and back out of any functions we wish to use is simply not workable.

However, passing the data directly in and getting the result directly out is the only way to keep functions pure. It’s also the only way to keep them honest. If any one function needs access to some piece of the outside world, any functions which use it also need that same access. This required access propagates all the way up to main, which is the only place that data is available a priori.

What if there were a way to continue to do this but simply make it easier on the eyes (and fingers) through syntax or abstraction?

Worldly Actions

The solution to our problem begins by defining two new types: World and Action.

A World is just something that represents the commandline arguments given to main and the String which must be returned by main for our program to have any output. At this point in time, there are no other aspects of the world that we have access to or could hope to affect.

data World = World
    { input  :: [String]
    , output :: String
    }

An Action is a function which takes one World and returns a different one along with some result. The differences between the given World and the returned one are known as the function’s side-effects. Often, we don’t care about the result itself and only want the side-effects; in these cases we’ll use Haskell’s () (known as Unit) as the result.

sayHello2 :: World -> (World, ())
sayHello2 w =
    let (w', n) = nameFromArgs2 w

    in (w' { output = output w' ++ "Hello " ++ n }, ())

nameFromArgs2 :: World -> (World, String)
nameFromArgs2 w =
    case input w of
        ("-u":name:_) -> (w, map toUpper name)
        (     name:_) -> (w, name)
        _             -> (w, "who?")

Now we can rewrite main to just convert its input and output into a World which gets passed through our world-changing functions.

main2 :: [String] -> String
main2 args =
    let firstWorld    = World args ""
        (newWorld, _) = sayHello2 firstWorld

    in output newWorld

In the above, we’ve just accepted that World -> (World, a) is this thing we call an Action. There’s no reason to be implicit about these things in Haskell, so let’s give it a name.

newtype Action w a = Action { runAction :: (w -> (w, a)) }

In order to create a value of this type, we simply need to give a world-changing function to its constructor. The runAction accessor allows us to pull the actual world-changing function back out again. Once we have the function itself, we can execute it on any value of type w and we’ll get a new value of type w along with a result of type a.
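As a quick sanity check, here’s a toy Action over a world that’s just an Int (the tick name is ours, purely for illustration):

ghci> let tick = Action (\w -> (w + 1, w))
ghci> runAction tick 5
(6,5)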

As mentioned, we often don’t care about the result and want to run an Action only for its side-effects. This next function makes running an action and discarding its result easy:

execAction :: Action w a -> w -> w
execAction a w = let (w', _) = (runAction a) w in w'

This becomes immediately useful in our newest main:

main3 :: [String] -> String
main3 args = output $ execAction (Action sayHello2) (World args "")

You’ll notice we need to pass sayHello2 to the Action constructor before giving it to execAction. This is because sayHello2 is just the world-changing function itself. For reasons that should become clear soon, we don’t want to do this; it would be better for our world-changing functions to be actual Actions themselves.

Before we address that, let’s define a few helper Actions:

-- | Access a world's input without changing it
getArgs :: Action World [String]
getArgs = Action (\w -> (w, input w))

-- | Change a world by appending str to its output buffer
putStrLn :: String -> (Action World ())
putStrLn str = Action (\w ->
    (w { output = (output w) ++ str ++ "\n"}, ()))
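A quick check that these behave as described (assuming our getArgs and putStrLn shadow the Prelude’s):

ghci> let (w, args) = runAction getArgs (World ["Pat"] "")
ghci> args
["Pat"]
ghci> output (execAction (putStrLn "Hello") w)
"Hello\n"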

Now let’s fix our program:

sayHello3 :: Action World ()
sayHello3 = Action (\w ->
    let (w', n) = (runAction nameFromArgs3) w

    in (runAction (putStrLn $ "Hello " ++ n)) w')

nameFromArgs3 :: Action World String
nameFromArgs3 = Action (\w ->
    let (w', args) = (runAction getArgs) w

    in case args of
        ("-u":name:_) -> (w', map toUpper name)
        (     name:_) -> (w', name)
        _             -> (w', "who?"))

This allows us to use sayHello3 directly in main:

main4 :: [String] -> String
main4 args = output $ execAction sayHello3 (World args "")

Things are still pretty clunky, but one thing to notice is that now all of the world-changing things are of the same type, specifically Action World a. Getting things to all be the same type has exposed the underlying duplication involved with sequencing lists of actions over some world.

A Monad is Born

One obvious duplication is taking two Actions and combining them into one Action which represents passing a World through them, one after another.

combine :: Action w a -> Action w b -> Action w b
combine f g = Action (\w ->
    -- call the first action on the world given to produce a new world,
    let (w',  _) = (runAction f) w

        -- then call the second action on that new world
        (w'', b) = (runAction g) w'

    -- to produce the final world and result
    in (w'', b))

f = combine (putStrLn "one") (putStrLn "two")

execAction f $ World [] ""
-- => World [] "one\ntwo\n"

What about functions like putStrLn which aren’t themselves an Action until they’ve been given their first argument? How can we combine those with other Actions?

pipe :: Action w a -> (a -> Action w b) -> Action w b
pipe f g = Action (\w ->
    -- call the first action on the world given to produce a new world 
    -- and a result of type a,
    let (w',  a) = (runAction f) w

        -- then give the result of type a to the second function which 
        -- turns it into an action which can be called on the new world
        (w'', b) = (runAction (g a)) w'

    -- to produce the final world and result
    in (w'', b))

f = pipe getArgs (putStrLn . head)

execAction f $ World ["Pat"] ""
-- => World ["Pat"] "Pat\n"

pipe and combine both require their first argument be an Action, but what if all we have is a non-Action value?

-- turn the value into an Action by returning it as the result along 
-- with the world given
promote :: a -> Action w a
promote x = Action (\w -> (w, x))

f = pipe (promote "Hello world") putStrLn

execAction f $ World [] ""
-- => World [] "Hello world\n"

Finally, we can remove that duplication and make our code much more readable:

sayHello4 :: Action World ()
sayHello4 = pipe nameFromArgs4 (\n -> putStrLn $ "Hello " ++ n)

nameFromArgs4 :: Action World String
nameFromArgs4 =
    pipe getArgs (\args ->
        promote $ case args of
                    ("-u":name:_) -> map toUpper name
                    (     name:_) -> name
                    _             -> "who?")

Turns out, the behaviors we’ve just defined have a name: Monad. And once you’ve made your type a Monad (by defining these three functions), any and all functions which have been written to deal with Monads (which is a lot) will now be able to work with your type.

To show that there are no tricks here, I’ll even use the functions we’ve defined as the implementation in our real Monad instance:

instance Monad (Action w) where
    return = promote
    (>>=)  = pipe

-- As our first free lunch, Haskell already provides "combine" in terms 
-- of >>=. A combination is just a pipe but with the result of the first 
-- action discarded.
(>>) f g = f >>= \_ -> g
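As another free lunch, standard library functions written against any Monad now work with our Actions too. A quick sketch (the greetAll name is ours, not from the original post):

greetAll :: Action World ()
greetAll = mapM_ (\n -> putStrLn ("Hello " ++ n)) ["Pat", "Sam"]

-- execAction greetAll (World [] "")
-- => World [] "Hello Pat\nHello Sam\n"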

Now our functions are looking like real Haskell syntax:

sayHello5 :: Action World ()
sayHello5 = nameFromArgs5 >>= (\n -> putStrLn $ "Hello " ++ n)

nameFromArgs5 :: Action World String
nameFromArgs5 =
    getArgs >>= \args ->
        return $ case args of
                    ("-u":name:_) -> map toUpper name
                    (     name:_) -> name
                    _             -> "who?"

Do It to It

Now that we’ve made our type a real Monad, and now that we understand what functions like return and (>>=) mean, we can make the final leap to the more imperative looking code we started with.

Haskell has something called “do-notation”. It’s just a form of pre-processing which transforms expressions like this:

f = do
  args <- getArgs

  putStrLn $ head args

Into expressions like this:

f = getArgs >>= (\args -> putStrLn $ head args)

Either syntax is valid Haskell, and I use both freely depending on the scenario. Let’s go ahead and rewrite our functions in do-notation:

sayHello6 :: Action World ()
sayHello6 = do
    name <- nameFromArgs6

    putStrLn $ "Hello " ++ name

nameFromArgs6 :: Action World String
nameFromArgs6 = do
    args <- getArgs

    return $ case args of
                ("-u":name:_) -> map toUpper name
                (     name:_) -> name
                _             -> "who?"

It’s hard to believe that, to this point, we have no such thing as IO. These functions simply describe how to make one World from another, and that only actually happens when main puts sayHello together with some initial World via execAction.

What we’ve done is built the system we want for IO all the way up to main. We’ve given any function in our system “direct” access to program input and output, all that’s required is they make themselves Actions. Through the use of the Monad typeclass and do-notation, making functions Actions has become quite pleasant while keeping everything entirely pure.

Final Touches

Let’s say that instead of being a primitive [String] -> String, main is itself an Action World (). Then we can let the Haskell runtime handle constructing a World, calling execAction main on it, then outputting whatever output there is in the new World we get back.

Then, let’s imagine we didn’t have our simplistic World type which only deals with commandline arguments and an output string. Imagine we had a rich World that knew about environment variables, file handles, and memory locations. That type would live in an impure space with access to all the richness of reality, but we could use pure Actions to describe how to read its files or access its volatile memory.

Things might end up like this:

type IO a = Action World a

main :: IO ()
main = do
    args <- getArgs

    let name = case args of
                ("-u":n:_) -> map toUpper n
                (     n:_) -> n
                _          -> "who?"

    putStrLn $ "Hello " ++ name
Which behaves just like we wanted:

$ runhaskell hello.hs -u io
Hello IO

published on 28 Jul 2013, tagged with haskell, monad, io

More posts...