On Staticness

For almost 7 years now, I’ve had a desktop at home running and serving (among many other things) my personal blog. Doing so is how I learned much of what I now know about programming and system administration. It gave me a reason to learn HTML, then PHP, then finally Haskell. It taught me Postgres, Apache, then lighttpd, then nginx. Without maintaining this site myself, on my own desktop, I doubt I would’ve been sucked into these things, and I may not have ended up where I am today.

However, I’m now a happily employed developer, and I do these things all day on other people’s machines and sites. Don’t get me wrong, I enjoy it all very much, but the educational value of maintaining my personal blog as a locally-hosted web-app just isn’t there any more. With that value gone, things like power outages, hard-drive failures, Comcast, etc. which bring my site down become unacceptable. It’s now too easy to have something that requires almost no maintenance while still giving me the control and work-flow I want.

I realize I could’ve moved my site as-is to a VPS and no longer been at the whim of Comcast and NSTAR, but that wouldn’t decrease the maintenance burden enough. Contrast pretty much any typical web-app ecosystem with…

The services now required to host my blog:

nginx


The configuration required to host my blog:

$ wc -l < /etc/nginx/nginx.conf
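For scale, a static-site server block needs little more than the following sketch (paths and names are illustrative, not my literal config):

```nginx
server {
    listen 80;
    server_name pbrisbin.com;

    # everything is just files on disk, rsync'd up at deploy time
    root /srv/http/site;
    index index.html;
}
```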

Adding a new post:

$ cat > posts/2013-09-21-awesome_post.md <<EOF
title: Awesome Post
tags: some, tags

Pretty *awesome*.
EOF

Deploying:

$ jekyll build && rsync -a -e ssh _site/ pbrisbin.com:/srv/http/site/
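Incidentally, jekyll derives each post’s date and url slug from that filename convention; a quick shell illustration of the split (my own sketch, not jekyll code):

```shell
# jekyll post filenames look like YYYY-MM-DD-slug.md
post="2013-09-21-awesome_post.md"

date=$(echo "$post" | cut -d- -f1-3)         # first three fields form the date
slug=$(basename "$post" .md | cut -d- -f4-)  # everything after them is the slug

echo "$date $slug"
```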


Backing up:

$ tar czf ~/site.backup _site


Unfortunately, comments aren’t easy to do with a static site (at least not without something like Disqus, but meh). To all those who have commented on this site in the past, I apologize. That one feature just isn’t worth maintaining a dynamic blog-as-web-app for.

When considering this choice, I discovered that the comments on this site fell into one of three categories:

  1. Hey, nice post!
  2. Hey, here’s a correction
  3. Hey, here’s something additional about this topic

These are all useful things, but there’s never any real discussion going on between commenters; it’s all just notes to me. So I’ve decided to let these come in as emails. My hope is that folks who might’ve commented are OK sending one in an email instead. The address is in the footer, pretty much where you’d expect a Comments section to be. I’ll make sure that any corrections or additional info sent via email make it back into the main content of the post.


At some point during this process, I realized that I simply can’t convert my post markdown to html without pandoc. Every single markdown implementation I’ve found gets the following wrong:

<div class="something">
I want this content to **also** be parsed as markdown.
</div>

Pandoc does it right. Everything else puts the literal text inside the div. This breaks all my posts horribly because I’ll frequently do something like:

This is in a div with class="well", and the content inside is still markdown.
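To make the difference concrete, here’s roughly what each side produces for the input above (illustrative output, recalled from memory rather than copied from any one implementation):

```html
<!-- pandoc: the inner content is still treated as markdown -->
<div class="something">
<p>I want this content to <strong>also</strong> be parsed as markdown.</p>
</div>

<!-- most other implementations: the inner text passes through literally -->
<div class="something">
I want this content to **also** be parsed as markdown.
</div>
```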

I had assumed that to get pandoc support I’d have to use Hakyll, but (at least from the docs) it seemed to be missing tags and next/previous link support. It appears extensible enough that I might code that in custom, but again, I’m trying to decrease overall effort here. Jekyll, on the other hand, had those features already and let me use pandoc easily by dropping a small ruby file in _plugins.

Update: I did eventually move this blog to Hakyll.

I figure if I want to be a Haskell evangelist, I really shouldn’t be using a Ruby site generator when such a good Haskell option exists. Also, tags are now supported and adding next/previous links myself wasn’t very difficult.

With the conversion complete, I was able to shut down a bunch of services on my desktop and even cancel a dynamic DNS account. At $5/month, the Digital Ocean VPS is a steal. The site’s faster, more reliable, easier to deploy, and even got a small facelift.

Hopefully the loss of Comments doesn’t upset any readers. I love email, so please send those comments to me at pbrisbin dot com.

21 Sep 2013, tagged with self

Table links

Often you might want to present a table of items, each of which links to its own page. Typically you might add an additional cell with a link to go to the item-specific page.

Wouldn’t it be better if the entire row was itself clickable? Well, I did the googling, and here’s one easy way I’ve found to accomplish that.

You’ll need a little jQuery:

$(function() {
  $('tbody.link tr').click(function() {
    window.location = $(this).find('a').attr('href');
  }).hover(function() {
    $(this).toggleClass('pointer');
  });
});

Of course, you can choose to put this only on pages that need it, but it’s not very heavy. If it’s in your site-wide template, you can quickly apply this method to any table you want just by adding a class to the tbody tag (all of your tables do have thead and tbody tags, right?).

For that hover callback to have the desired effect and let your users know they should click on the row, you’ll need a little bit of css as well:

.pointer { cursor: pointer; }

You could put this css change right in the javascript, but I find this pointer class comes in handy throughout my site anyway.

Finally, for any tables which you want to behave this way, just use markup like the following:


<table>
  <thead>
    <tr><th>Name</th></tr>
  </thead>
  <tbody class="link"><!-- 1. add the class -->
    <tr>
      <td><a href="/items/1"></a>The first item</td><!-- 2. add the link(s) -->
    </tr>
    <tr>
      <td><a href="/items/2"></a>The second item</td>
    </tr>
    <!-- ... -->
  </tbody>
</table>


Notice the content of the “Name” field is outside of the link tag and the link itself has no content. This ensures no actual link will be visible to confuse users; all they have to do is click anywhere on the row.

For a real example, check out the archives page.

09 Feb 2012, tagged with self

Live Search (part 2)

In my last post I went over setting up sphinx full-text search using an xml data source from a yesod application as well as hooking into sphinx to return search results for a given query as a JSON data feed.

In this (shorter) post, I’ll go over the front-end javascript that I used to implement a fairly simple search-as-you-type interface.

Object oriented

Now, I could have easily defined some simple functions in the global namespace to execute the search, display the results, and attach an event handler for changes to the input box, but I’d rather not.

Javascript can be used fairly effectively in an object-oriented way. No, I’m not doing any inheritance or method overloads, but I do want to try and group all my logic in an instance of some object. This will let me store some values in instance variables (properties) for use between methods, as well as give me a namespace for all my stuff.

Here’s the structure:

var Search = {
    execute: function(qstring) {
        // actually execute the search and call display as the success 
        // callback
    },

    display: function(results) {
        // update the page with the contents of the search results.
    },

    attach: function() {
        // attach a listener for changes to the input element and fire 
        // off the search when appropriate.
    }
};

Our feed is accessed at /search/j/query-string and returns something like this:

[
    {
        "slug":    "some_post",
        "title":   "Some post",
        "excerpt": "... some excerpt with matches in it ..."
    },
    {
        "slug":    "other_post",
        "title":   "Other title",
        "excerpt": "... other excerpt with matches ..."
    }
]

Given that, our execute and display functions should look like this:

    execute: function(qstring) {
        var search = this;
        var url    = "/search/j/" + encodeURIComponent(qstring);

        $.getJSON(url, function(data) {
            search.display(data);
        });
    },

    display: function(results) {
        var html = "";

        $.each(results, function(id, result) {
            html += '<div class="result">'
                  + '<h3><a href="/posts/' + result['slug'] + '/">' + result['title'] + "</a></h3>"
                  + '<div class="result-excerpt">' + result['excerpt'] + '</div>'
                  + '</div>';
        });

        // assume this property exists for now
        this.results.html(html);
    },

Our attach method will handle a few things:

  1. Store selectors for the input element and the results container as properties on our object.

  2. Attach a listener to the input element that fires every time a character is entered.

  3. Check that the entered search term is non-empty, big enough, and has actually changed – to prevent a “needless” search.

    attach: function() {
        this.search  = $('#search');
        this.results = $('#results');

        var search = this;

        this.search.keyup(function() {
            var $this = $(this);

            var newVal = $this.val();
            var oldVal = $this.data('old-value');

            if (newVal.length >= 3 && newVal != oldVal) {
                search.execute(newVal);
            }

            $this.data('old-value', newVal);
        });
    }

I use jQuery’s data function to store the input’s current value between each event to see if it’s changed since last time.

Note that we also have to store a reference to this outside of the keyup callback, since this inside that closure refers to something else (the element itself).

With all that in place, a search page that uses this object would look something like this:

<input id="search">

<div id="results"></div>

    $(function() {
        Search.attach();
    });

That’s it, simple and effective. Go ahead, try it out.

30 Jan 2012, tagged with self

Live Search

Note: this post describes a system for searching posts which once appeared on this site. It was removed in a fit of simplification. Please see Google’s site: keyword for any searching needs.

I’ve had some fun recently, adding full-text search support to the posts on the site to try and make a simple-but-still-useful archive.

I’d like to post a bit about the feature and how it works. It’s got a few moving parts so I’m going to break it up a bit.

This post will focus on the backend, setting up sphinx, providing content to it from a yesod application, and executing a search from within a handler. The second post will go into the front-end javascript that I implemented for a pretty simple but effective search-as-you-type interface.

For the full context, including required imports and supporting packages, please see this feature in the wild.


Sphinx is a full-text search tool. It assumes you’ve got some concept of “documents” hanging around with lots of content you want to search through by keyword.

What sphinx does is let you define a source – a way to get at all of the content you have in a digestible format. It will then consume all that content and build an index which you can search very efficiently, returning a list of Ids. You can then use those Ids to display the results to your users.

There are other aspects re: weighting and attributes, but I’m not going to go into that here.

The first thing you need to do (after installing sphinx) is get your content into a sphinx index.

If you’ve got the complete text you’ll be searching actually in your database, sphinx can natively pull from mysql or postgresql. In my case, the content is stored on disk in markdown files. For such a scenario, sphinx allows an “xmlpipe” source.

What this means is that you provide sphinx with a command to fetch an xml document containing the content it should index.
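For reference, the document that command should produce is a simple stream of elements, roughly like this (shape per the sphinx xmlpipe docs, trimmed to what’s relevant here):

```xml
<document>
    <id>1</id>
    <title>Some post</title>
    <body>the full post content to index...</body>
</document>
```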

Now, if you’ve got a large amount of content, you’re going to want to use clever conduit/enumerator tricks to stream the xml to the indexer in constant memory. That’s what’s being done in this example. I’m doing something a little bit more naive – for two reasons:

  1. I need to break out into IO to get the content. This is difficult from within a lifted conduit Monad, etc.
  2. I don’t have that much shit – the thing indexes in almost no time and uses almost no memory even with this naive approach.

So, here’s the simple way:

getSearchXmlR :: Handler RepXml
getSearchXmlR = do
    -- select all posts
    posts <- runDB $ selectList [] []

    -- convert each post into an xml block
    blocks <- liftIO $ forM posts $ \post -> do
        docBlock (entityKey post) (entityVal post)

    -- concat those blocks together to one xml document
    fmap RepXml $ htmlToContent $ mconcat blocks

    where
        htmlToContent :: Html -> Handler Content
        htmlToContent = hamletToContent . const

docBlock :: PostId -> Post -> IO Html
docBlock pid post = do
    let file = pandocFile $ postSlug post

    -- content is kept in markdown files on disk, if the file can't be 
    -- found, try to use the in-db description, else just give up.
    exists <- doesFileExist file
    mkd    <- case (exists, postDescr post) of
        (True, _         ) -> markdownFromFile file
        (_   , Just descr) -> return descr
        _                  -> return $ Markdown "nothing?"

    return $
        -- this is the simple document structure expected by sphinx's 
        -- "xmlpipe" source
        [shamlet|
            <document>
                <id>#{toPathPiece pid}
                <title>#{postTitle post}
                <body>#{markdownToText mkd}
        |]

    where
        markdownToText :: Markdown -> Text
        markdownToText (Markdown s) = T.pack s

With this route in place, a sphinx source can be setup like the following:

source pbrisbin-src
{
    type            = xmlpipe
    xmlpipe_command = curl http://localhost:3001/search/xmlpipe
}

index pbrisbin-idx
{
    source       = pbrisbin-src
    path         = /var/lib/sphinx/data/pbrisbin
    docinfo      = extern
    charset_type = utf-8
}

Notice how I actually hit localhost? Since pbrisbin.com is reverse proxied via nginx to 3 warp instances running on 3001 through 3003 there’s no need to go out to the internet, dns, and back through nginx – I can just hit the backend directly.
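That arrangement looks roughly like this on the nginx side (a sketch with made-up names, not my actual config):

```nginx
upstream warp {
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    server_name pbrisbin.com;

    location / {
        proxy_pass http://warp;
        proxy_set_header Host $host;
    }
}
```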

With that setup, we can do a test search to make sure all is well:

$ sphinx-indexer --all # setup the index, ensure no errors
$ sphinx-search mutt
Sphinx 2.1.0-id64-dev (r3051)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc 

using config file '/etc/sphinx/sphinx.conf'...
index 'pbrisbin-idx': query 'mutt ': returned 6 matches of 6 total in 0.000 sec

displaying matches:
1. document=55, weight=2744, gid=1, ts=Wed Dec 31 19:00:01 1969
2. document=62, weight=2728, gid=1, ts=Wed Dec 31 19:00:01 1969
3. document=73, weight=1736, gid=1, ts=Wed Dec 31 19:00:01 1969
4. document=68, weight=1720, gid=1, ts=Wed Dec 31 19:00:01 1969
5. document=56, weight=1691, gid=1, ts=Wed Dec 31 19:00:01 1969
6. document=57, weight=1655, gid=1, ts=Wed Dec 31 19:00:01 1969

1. 'mutt': 6 documents, 103 hits



Now we need to be able to execute these searches from haskell. This part is actually going to be split up into two sub-parts: first, the interface to sphinx which returns a list of SearchResults for a given query, and second, the handler to return JSON search results to some abstract client.

I’ve started to get used to the following “design pattern” with my yesod sites:

Keep Handlers as small as possible.

I mean no bigger than this:

getFooR :: Handler RepHtml
getFooR = do
    things      <- getYourThings

    otherThings <- doRouteSpecificStuffTo things

    defaultLayout $ do
        setTitle "..."
        $(widgetFile "...")

And that’s it. Some of my handlers break this rule, but many of them fell into it accidentally. I’ll be going through and trying to enforce it throughout my codebase soon.

For this reason, I’ve come to love per-handler helpers. Tuck all that business logic into a per-handler or per-model (which often means the same thing) helper and export a few smartly named functions to call from within that skinny handler.

Anyway, I digress – Here’s the sphinx interface implemented as Helpers.Search leveraging gweber’s great sphinx package:

The below helper actually violates my second “design pattern”: Keep Helpers generic. It could be generalized away from anything app-specific by simply passing a few extra arguments around; you can see a more generic example here.

sport :: Int
sport = 9312

index :: String
index = "pbrisbin-idx"

-- here's what I want returned to my Handler
data SearchResult = SearchResult
    { resultSlug    :: Text
    , resultTitle   :: Text
    , resultExcerpt :: Text
    }

-- and here's how I'll get it:
executeSearch :: Text -> Handler [SearchResult]
executeSearch text = do
    res <- liftIO $ query config index (T.unpack text)

    case res of
        Ok sres -> do
            let pids = map (Key . PersistInt64 . documentId) $ matches sres

            posts <- runDB $ selectList [PostId <-. pids] []

            forM posts $ \(Entity _ post) -> do
                excerpt <- liftIO $ do
                    context <- do
                        let file = pandocFile $ postSlug post

                        exists <- doesFileExist file
                        mkd    <- case (exists, postDescr post) of
                            (True, _         ) -> markdownFromFile file
                            (_   , Just descr) -> return descr
                            _                  -> return $ Markdown "nothing?"

                        return $ markdownToString mkd

                    buildExcerpt context (T.unpack text)

                return $ SearchResult
                            { resultSlug    = postSlug post
                            , resultTitle   = postTitle post
                            , resultExcerpt = excerpt
                            }

        _ -> return []

    where
        markdownToString :: Markdown -> String
        markdownToString (Markdown s) = s

        config :: Configuration
        config = defaultConfig
            { port   = sport
            , mode   = Any
            }

-- sphinx can also build excerpts. it doesn't do this as part of the 
-- search itself but once you have your results and some context, you 
-- can ask sphinx to do it after the fact, as I do above.
buildExcerpt :: String -- ^ context
             -> String -- ^ search string
             -> IO Text
buildExcerpt context qstring = do
    excerpt <- buildExcerpts config [concatMap escapeChar context] index qstring
    return $ case excerpt of
        Ok bss -> T.pack $ C8.unpack $ L.concat bss
        _      -> ""

    where
        config :: E.ExcerptConfiguration
        config = E.altConfig { E.port = sport }

        escapeChar :: Char -> String
        escapeChar '<' = "&lt;"
        escapeChar '>' = "&gt;"
        escapeChar '&' = "&amp;"
        escapeChar c   = [c]

OK, so now that I have a nice clean executeSearch which I don’t have to think about, I can implement a JSON route to actually be used by clients:

getSearchR :: Text -> Handler RepJson
getSearchR qstring = do
    results <- executeSearch qstring

    objects <- forM results $ \result -> do
        return $ object [ ("slug"   , resultSlug    result)
                        , ("title"  , resultTitle   result)
                        , ("excerpt", resultExcerpt result)
                        ]

    jsonToRepJson $ array objects

Gotta love that skinny handler, does its structure look familiar?

In the next post, I’ll give you the javascript that consumes this, creating the search-as-you-type interface you see on the Archives page.

29 Jan 2012, tagged with haskell, self

UI Refresh

Astute readers may have noticed, the site looks a little bit different today. I know, it’s tough to discern, but if you look closely you might see… It’s now dark on light!

This is actually a small, tangential change that I made as part of a sweeping upgrade and cleanup effort. In moving to Yesod 0.10 (the 1.0 release candidate), I decided to take an axe to some of the bloatier areas of the site.

After dropping a few pounds in backend logic, I decided to keep going and attack the css as well – and by attack, I mean drop entirely.

Believe it or not just about all styling on the site is now coming from twitter’s awesome bootstrap framework.

Breadcrumbs, notices, login dropdowns, general forms, and sweet tables all without a line of styling by me.

The change does make the site less-than-great on less-than-wide monitors, but I’m not sure how many people are viewing this on mobile devices, etc. We’ll see if I need to bring back my @media queries.

Update: Bootstrap 2.0 brings a “responsive” grid, so now the site looks pretty good on just about any device.

I should be posting more in the coming weeks on some of the specific changes, as well as a new search feature I’m hoping to roll out soon, but I figured such a noticeable visual change should have an accompanying post… So, there it was.

27 Jan 2012, tagged with self

Static Refactor

Just a quick heads-up post about a recent site refactoring.

I decided to switch to nginx from lighttpd, let it do the static file serving, and at the same time drop all the complicated redirects I’d been carrying since going live on yesod. I also cleaned out the /static directory a little bit and streamlined its folder structure.

Below please find info about the deprecated routes that I’ve finally dropped (and some that were dropped a while ago).

Please use pbrisbin.com to view the site.

No longer redirecting /dotfiles and /bin to github

Please see github for all of my configs and other projects.

No longer redirecting *.rss to /feed

Please use pbrisbin.com/feed/ for my rss.

Removed /music

Please email if you really were interested in that stuff.

Rearranged documentation folders

Note: the documentation subdomain has since been removed entirely.

Haskell docs (including xmonad libraries) are at docs.pbrisbin.com/haskell and ruby docs are at docs.pbrisbin.com/ruby.

I think that’s it – let me know if I’ve missed something and I’ll add a note here.

22 Nov 2011, tagged with self

Lighttpd Reverse Proxy

This site was previously served via lighttpd using fastcgi. My haskell source was compiled using Network.Wai.Handler.FastCGI and the binary was placed at /srv/http/app.cgi to be handled by lighttpd’s mod_fastcgi.

I decided to switch it up and let Warp serve the haskell app directly, then proxy certain urls through to it via lighttpd.

This how-to will outline the steps needed to get this setup and comment a little bit on what all the moving parts do.

This guide assumes your code is structured roughly like the 0.9.1 scaffolder, and that your Application.hs exports a withYourApp function which is used by main.hs, compiled into a binary, and executed.

My application is called “DevSite” (I don’t know why), so anywhere you see that in this guide, just assume I mean your foundation type / app name.


Compiling to fastcgi was starting to feel kind of icky. Warp is all grown up now and capable of serving content mighty quickly. Often a problem with my app would result in lighttpd silently failing and leaving troublesome pid files around.

It’s nicer to let that front-facing server sit there running, none-the-wiser that I’m constantly developing and recompiling the app that it’s forwarding to. Installing and starting a compiled Warp binary will give me greater feedback in the event something goes awry.

Fortunately, I already had the backbone of url-rewriting going on to get requests to app.cgi so I just needed to update that to pull http traffic from another port on localhost rather than actually call a CGI process to get the response.

Lighttpd 1.5 built the proxy framework with the intention of superseding mod_fastcgi and providing that feature simply by telling you to proxy to a fastcgi application in the same way you would to another domain. This meant I just had to update my syntax for 1.5, then it was almost as easy as s/fastcgi/http/ing the config.

There’s also the minor benefit that I no longer need duplicated support files (like client-session key and favicon) between development and production.

The Moving Parts

Lighttpd will rewrite / redirect urls in a few stages:

  1. Certain urls will be handled by lighttpd itself. I like lighttpd for static file serving. It’s got a pretty directory listing, it’s fast, and it makes it super easy to setup a /static/private which enforces simple http-auth for access – very handy.

  2. Everything else will be rewritten (once) to /proxy/$1.

  3. Anything coming in for /proxy/.* (presumably via rewrite) will go to another port on localhost where my Warp server will take over.

Lighttpd can also load-balance over multiple instances of Warp (nifty!).

The Setup

First let’s get lighttpd setup. I’m using mod_proxy_core which is only available in lighttpd-1.5+. If you’re on Arch, you can install aur/lighttpd-svn.

Import some modules:

server.modules = ( "mod_rewrite"
                 , "mod_proxy_core"
                 , "mod_proxy_backend_http"
                 )

Setup the “stage-one” redirects:

# notice that by rewriting to /proxy$1 and not /proxy/$1 we get the 
# desired behavior where / becomes /proxy/ and /what/ever becomes 
# /proxy/what/ever.
url.rewrite-once = ( "^/static.*" => "$0"
                   , "(.*)"       => "/proxy$1"
                   )

Finally, setup the actual proxying:

$HTTP["url"] =~ "^/proxy.*" {
  # straight, http pass-through
  proxy-core.protocol        = "http"

  # lighttpd will manage its own queue and send requests to whichever 
  # instance has the shortest queue
  proxy-core.balancer        = "sqf"

  # these are the 5 Warp instances we'll start
  proxy-core.backends        = ( "localhost:3001"
                               , "localhost:3002"
                               , "localhost:3003"
                               , "localhost:3004"
                               , "localhost:3005"
                               )

  # strip the /proxy prefix
  proxy-core.rewrite-request = ( "_uri" => ( "^/proxy(.*)" => "$1" ) )
}

Now that we’ve got that going we need to spin up some Warp instances to serve out anything lighttpd redirects from /proxy.

Luckily the scaffolded main.hs allows us to pass a port on the command line, so we’ll just start up a bunch of instances of our app all listening on a different port.

Script It Out

I like to script this process of starting and stopping the multiple Warp instances. To facilitate this, we need to create some support directories alongside your source code:

mkdir tmp/{pid,log}

With those in place, feel free to take the following functions and incorporate them into some server management script:

instances='1 2 3 4 5'

start_devsite() {
  local n

  echo 'Starting worker processes...'
  for n in $instances; do
    devsite -p=300$n > tmp/log/$n.log 2> tmp/log/${n}_errors.log &
    echo $! > tmp/pid/$n.pid
  done
}

stop_devsite() {
  local pid n

  echo 'Stopping worker processes...'
  for n in $instances; do
    if [[ -f tmp/pid/$n.pid ]]; then
      read -r pid < tmp/pid/$n.pid
      if [[ -n $pid ]]; then
        kill $pid
        rm tmp/pid/$n.pid
      fi
    fi
  done
}

Once you execute the start function, you should see 5 processes running and listening on ports 3001 through 3005. Lighttpd is already setup to forward to those apps in a load-balanced way, so go ahead and see if it worked!

10 Sep 2011, tagged with haskell, self, yesod


Note: this page describes a custom system for commenting once present on this site. As you can see, it no longer is. Please provide any comments via twitter or email. Thanks.

Recently I decided I no longer like disqus. It was a great hands-off commenting system, but it had its downsides. I decided to make a change.

I have my own commenting system which I wrote as a yesod module. The initial reason I didn’t go with this when I first moved to yesod (and lost my existing homebrew php commenting system) was because it didn’t support authentication. I knew that an unauthenticated “enter comments” box had like a 100 to 1 ratio of spam to real comments.

When I put up rentersreality, I took the time to build authentication into my comments system by natively supporting YesodAuth so that I could use my module there. With that in place, the only thing keeping me back was losing my existing comments (more on that later).

This module really speaks to the flexibility of the framework. With about 8 lines of code I (the end user) was able to add tightly integrated commenting to my yesod site. Comments are stored in my database and users are authenticated using my authentication model. Furthermore, I (the developer) was able to put together a system that integrates this way even with my limited haskell-foo thanks to the openness and robustness of existing modules like yesod-auth and yesod-persistent.

I decided to make a few other changes along with this one. First, I moved the site from sqlite to postgresql. I originally went with sqlite because I was intimidated by postgres. After moving rentersreality to postgres, I realized it was no harder to get set up and offered great benefits in speed, reliability and maintenance tools. Secondly, I had to ditch my simple auth setup (one manually added user) in favor of a more typical auth setup. Now, anyone can authenticate with their open id (to be able to comment) and I just maintain an isAdmin flag manually to allow me to do the me-only stuff.

Why did you do it?

For the sake of completeness here are the Pros and Cons of the recent shift:


Pros:

  • I wrote it.

Since I develop the comments module, I know that I’ll get preferential treatment when it comes to bug fixes and features. Also, it’s awesome.

  • Not javascript.

Pages load faster and usage is clean, pure GET and POST html-forms.

  • Markdown

Comments are parsed by Pandoc the same way my post content itself is. This means that you have the full expressive power of this awesome markdown system (any non dangerous html is possible plus syntax highlighting and other nice features).

  • Integration

Both visually and architecturally, the comments are deeply integrated into my site. This is in spite of the code itself being completely modularized and reusable anywhere. Yay haskell.


Cons:

  • I lose all existing comments

I have a plan for this.

  • No Quote or Reply functionality.

I kind of sidestepped the whole javascript edit-in-place usage scenario and instead created a full-blown management sub-site for commenting users. By following the “comments” link in my sidebar you can see all the comments you’ve left on this site and edit/delete them at-will. It’s a really clean method which provides a lot of functionality while being a) javascript-free and b) still completely modularized out of my specific site and reusable anywhere.

Quote could be done with some javascript to just populate the box with a markdown blockquote for you. Reply (and the implied threading) would require a re-engineering that I might not be willing to go through for a while.

  • No notifications or rss services

I’m not sure how much of a use there is for this on a site like mine but with the new administration sub-site it would be almost trivial to add this functionality – maybe I’ll do this soon.

To existing commenters

If you’ve commented on this site before I want to restore your comments, but I need your help.

What I need you to do is go ahead and login once, choose a username, and email it to me along with the disqus account you commented on previously.

If you commented on the old-old system, I could still restore your comments, you’ll just have to decide the best way to let me know what comments they were (the username you used or the thread/nature of the comments, etc).

With this information, I’ll be able to reinstate your comments and link them to your new identifier on the site.

I hope you’ll help me with this; but if not, I understand.

Play around

So go ahead and use this page to try out the commenting system. See what kind of markdown results in what.

If you want any comments (here or on other pages) edited or removed, I can always be reached by email. I don’t mind running a quick sql statement on your behalf.

Let me know of any bugs you find but don’t worry about css-fails, those should get fixed almost immediately (I just need the content present to find them).

08 Jul 2011, tagged with haskell, self, yesod

Anatomy of a Yesod Application

subtitle: how to stay sane when developing for the web in haskell

This post was originally about how I structure my Yesod applications and where it differs from the scaffold tool. I’ve since done a bit of a 180 and started to really like the scaffold tool and its structure.

For that reason, I’ve rewritten the post to outline that structure, its benefits and some strategies I use above what it provides to develop and deploy Yesod applications.

Note that this information is 0.8-specific and with the 0.9 and 1.0 versions of Yesod, this post will be obsolete (until I update it again).

I recently had the chance to do a coding exercise for a job interview. Due to some misunderstandings of the task on my part I had to do it twice. I’m still embarrassed about it, but I did end up getting the job so all’s well that ends well…

Anyway, the cool thing about doing it twice is that the first time I did it in Rails and the second time in Yesod, which gave me a chance to evaluate the two frameworks somewhat side-by-side.

I’m not going to get into a big discussion on the pros and cons of each one in general – though I will say they both excelled at staying out of my way and getting me a base for the exercise very quickly. What I’d rather talk about is structure.

During my brief time hacking in Rails, I quickly grew to like the “convention over configuration” philosophy. If you create a model/foo and a controller/foo and a test/foo, then foo itself just magically works. I liked it.

I hadn’t used the Yesod scaffold tool since about 0.1, but I knew I needed to get a site up quickly so I decided to use it for this exercise. I found that the new structure was very organized and well thought out. It gave me a similar convention-driven feeling: create hamlet/foo and cassius/foo, then widgetFile "foo" would just work.

I think the framework could use a bit more of this approach to make the yesod-scaffolding tool as versatile as the ruby one. Mechanisms like widgetFile could be more plentiful and library provided (rather than scaffolded into your Settings.hs).

The yesod scaffold basically built you an “example” site which you can rewrite to your needs. In contrast, the ruby scaffold tool(s) let you say “I have some data structure like X” and it goes and creates a bunch of code to make X work. You’re obviously still going to rewrite a lot of the generated code, but it’s not a 100% guarantee like with yesod init.

Putting on my yesod-end-user cap (vs the usual yesod-contributor one), I would love to see yesod work more like rails: yesod init should give you a simple status-page site with links to all the documentation (you could still choose tiny, or database-driven and get all that setup at this point too). Then, yesod scaffold --whatever commands could be used to build up a CRUD interface with your actual data types.

Hmm, that turned into a bit of a whine about how rails is better than yesod; that is not my opinion in general. There are tons of reasons I prefer yesod overall, I was just really impressed with rails’ scaffolding abilities.


Enough comparison, let’s talk about yesod as it is now.

The scaffold tool sets up the following basic structure:

|-- config
|   `-- ...
|-- hamlet
|   `-- ...
|-- cassius
|   `-- ...
|-- julius
|   `-- ...
|-- Handlers
|   `-- ...
|-- Model.hs
|-- YourSite.hs
|-- Controller.hs
`-- yoursite.cabal

config is going to hold your Settings.hs module along with some text files where you define your routes and models. I also like to throw the main executable’s source files in there, which I’ll discuss later.

hamlet, cassius, and julius will contain the template files for html, css, and javascript respectively. One awesome new development is the aforementioned function widgetFile which I use 100% of the time regardless of what templates the page actually calls for. If you write, say, addWidget $(widgetFile "foo"), that will splice in the templates hamlet/foo.hamlet, cassius/foo.cassius, and julius/foo.julius and just ignore non-existent files.

Model.hs, YourSite.hs, and Controller.hs are pretty self-explanatory, and are either entirely site-dependent or scaffold-generated so I’m not going to discuss them.

One other cool feature of the scaffold is how it sets up the imports and exports in YourSite.hs. It handles all of the major imports (like Yesod itself, etc) and those that need to be qualified (like Settings) and then reexports them as a single clean interface. This means that all of your Handlers, Helpers, etc can just import YourSite and be done with it. Very nice, very clean.

Handlers usually contains one module per route and only defines the route handling functions. I try to keep any support functions in either per-handler or site-wide Helpers.

One last note: do yourself a favor and keep/maintain the generated cabal file. It’s a nice way to prevent breakage (when dependencies are updated) and keep dev vs prod options straight. It’s also nice to keep all the object and interface files hidden under an ignorable dist directory.


Development

For development, I use an easy simple-server approach. The Haskell is as follows:

import Controller (withServer)
import System.IO (hPutStrLn, stderr)
import Network.Wai.Middleware.Debug (debug)
import Network.Wai.Handler.Warp (run)

main :: IO ()
main = do
    let port = 3000
    hPutStrLn stderr $ "Application launched, listening on port " ++ show port
    withServer $ run port . debug

I then keep a simple shell script that runs it:

#!/bin/bash -e

touch config/Settings.hs
runhaskell -Wall -iconfig config/simple-server.hs

The touch just ensures that anything set by CPP options is up to date every time I ./devel.


Deployment

By using the cabal file, deployments are pretty easy. I use lighttpd as my server-of-choice (I also let it do the static file serving), so I need to compile to fastcgi.

I keep exactly one copy of any static files (including my main css) and it lives only in the production location. To support this, I define a staticLink function in Settings.hs which is conditional on the PRODUCTION flag.

If I’m developing locally, staticLink "foo" would return http://the-real-domain/static/foo so that the file is linked from its live location. When running in production, that function just returns /static/foo which is what I would actually want in the html.

I find this approach is way simpler than any other way I’ve done static file serving.
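As a sketch of the idea (the real function lives in Settings.hs and switches on the CPP PRODUCTION flag at compile time; the runtime Bool here is a stand-in for that, and the domain is the placeholder used above):

```haskell
-- a simplified sketch of staticLink; the real version switches on a CPP
-- PRODUCTION flag rather than a runtime Bool
production :: Bool
production = False

staticLink :: String -> String
staticLink f
    | production = "/static/" ++ f                          -- live: relative link
    | otherwise  = "http://the-real-domain/static/" ++ f    -- dev: link to the live copy
```

In development `staticLink "foo"` points at the live location, so there is never a second copy of any static file to keep in sync.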

My cabal file builds an executable from config/mysite.hs which looks like this:

import Controller (withServer)
import Network.Wai.Handler.FastCGI (run)

main :: IO ()
main = withServer run

Then I’ve got another shell script to make deployments a single-command operation:

#!/bin/bash -e


# prompt for sudo credentials up front; later commands run unattended
sudo true

# this command just adds an auto-incrementing git tag so that if there's 
# some issue, I can just checkout the last tag and redeploy. this 
# completely sidesteps the need to backup the binary itself
deptag

touch config/Settings.hs
cabal install --bindir=./

# cabal will install to ./myapp as defined in the cabal file so we just 
# stop the service and replace the binary
sudo /etc/rc.d/lighttpd stop
sudo mv myapp "$app"
sudo /etc/rc.d/lighttpd start

This approach can be easily extended to a non-local deployment. In the case of rentersreality, the site lives on a slicehost. Its deployment file looks like this:

#!/bin/bash -e


deptag # tag deployments

touch config/Settings.hs
cabal install --bindir=./

scp ./renters "$ip":~/

ssh -t "$ip" '
  sudo /etc/rc.d/lighttpd stop        &&
  sudo mv ./renters /srv/http/app.cgi &&
  sudo /etc/rc.d/lighttpd start       &&
  sleep 3
'
I found that after executing the remote command I had to sleep so that the process could detach correctly. Things would end up in a bad state if I disconnected right away.

I must say, since moving to the more structured approach and utilizing cabal install as the main deployment step, I have had far fewer issues with developing and deploying my apps.

To see two sites that are currently using this structure, just browse the projects on my github.

29 Apr 2011, tagged with haskell, yesod, self

Ajax
I always thought Ajax (and JavaScript for that matter) was some crazy web technology, some new way of programming the web, some big scary thing that would be really difficult to learn or use.

It’s not. It’s actually just a defined, convenient way of using existing tools to accomplish some goal. I won’t say it’s perfect, or the be-all-end-all of web-tech, but it did come in handy for me in one case.

I’d like to share the experience, shed some light on this methodology and perhaps make it easy for someone else to add it to their bag of tricks.

What is Ajax?

Ajax stands for Asynchronous JavaScript and XML. It’s a means of making static web pages dynamic.

The way it works is this:

You’ve got some page that loads up in a user’s browser presenting some information. Once the content is served to the user that’s it. It’s there, static and stale.

You want to periodically update some information on that page, but how?

JavaScript could update tag contents, but the source code for your awesome JavaScript is served out along with the content and it’s just as stale. It has no idea what new information it needs to present on your behalf.

Well, what if we could phone home, tell the JavaScript to call back to the server and ask for updated information, then update the page accordingly?

That is Ajax.

You basically develop two webpages: the first is the normal static html page that the user requests; the second is a dynamic page which serves an XML document (or JSON) providing updated info through server-side logic.

The first page comes complete with JavaScript capable of requesting the second page on an interval and updating the first page with the new information it finds in the XML.

Pretty slick.

An Example

I recently wrote a “subsite” for the Haskell web framework Yesod. That’s a lot of jargon, but just think of it as a plug-in.

It lets you control an instance of mpd running on the same server via a web page.

More information (and the full source) can be found here but I’ll try and distill out the generic Ajax involved.

Let’s say I have a “Now Playing” page which shows the currently playing Artist, Album, and Title. I want to update that information as the track changes without the user having to constantly refresh the page.

Here’s how you do that:

The main page is served as normal html with some handy id’s so we can find specific tags when we want to update them.

This page will also include all the JavaScript to make the request and do the updates, but I’m going to save that information until after the XML part.

<!-- http://server/pages/nowplaying.html -->
<!DOCTYPE html>
<html>
    <head>
        <script>
            // There be lots of JavaScript here, but I'll get to that 
            // later...
        </script>
    </head>
    <body>
        <p id="artist">Medeski Martin &amp; Wood</p>
        <p id="album">Tonic</p>
        <p id="title">Thaw</p>

        <!-- our main ajax entry point will be this function, call it on 
             document load to kick things off -->
        <script>window.onload = timedRefresh;</script>
    </body>
</html>

Behind the scenes, you’ll need that second page which is just XML. It’ll have to be driven by server-side code to serve updated information each request.

<!-- http://server/nowplaying.xml -->
<?xml version="1.0" encoding="utf-8"?>
<nowplaying>
    <status>OK</status>
    <artist>Dave Weckl Band</artist>
    <title>Mixed bag</title>
</nowplaying>

The status tag is an extra fail-safe in case your server runs into trouble coming up with updated info. You’ll see we write our JavaScript to be conditional on this tag’s value.

With the XML page available at any moment to provide updated information, here’s the JavaScript which you would need to put in the user-facing page to accomplish the constant screen updates.

// this object will make the request for the updated xml and provide an 
// easy means for parsing it
var xmlhttp = new XMLHttpRequest();

// when it receives a response from your server this code will run
xmlhttp.onreadystatechange = function() {
    // if the state is 4 and the status is 200, that means we're ready 
    // to parse the response
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        // parse the response into a workable document
        xmlDoc = xmlhttp.responseXML;

        // use a helper function to get the contents of specific tags 
        // and deal with them appropriately
        xStatus = xmlHelper(xmlDoc, "status");

        // make sure our server thinks all's ok
        if (xStatus == "OK") {
            // get the new info from the xml response
            xArtist = xmlHelper(xmlDoc, "artist");
            xAlbum  = xmlHelper(xmlDoc, "album" );
            xTitle  = xmlHelper(xmlDoc, "title" );

            // another helper function updates a specific tag on the 
            // current page by its id value
            docHelper(true, "artist", xArtist);
            docHelper(true, "album" , xAlbum );
            docHelper(true, "title" , xTitle );
        }
    }
};

// the helper to get a specific tag from the xml:
function xmlHelper(_xmlDoc, _tag) {
    return _xmlDoc.getElementsByTagName(_tag)[0].childNodes[0].nodeValue;
}

// the helper to set (or get) the value of a tag in this document by
// its id:
function docHelper(_set, _id, _value) {
    if (_set) {
        document.getElementById(_id).innerHTML = _value;
    }

    return document.getElementById(_id).innerHTML;
}

// this function actually makes the request, phoning home for updated 
// now playing information
function getNowPlaying() {
    xmlhttp.open("GET", "http://server/nowplaying.xml", true);
    xmlhttp.send();

    // loop again
    timedRefresh();
}

// on document load, we call this function which will start the 
// never-ending loop of updates
function timedRefresh() {
    var delay = 1000; // seconds * 1000
    setTimeout("getNowPlaying();", delay);
}

And that’s it my friends. Constantly updating screen content without compulsive refreshing cluttering up your server logs.

Update: JSON and jQuery

So, the above all works fine and dandy. Nowadays though, seems all the cool kids are doing this with JSON and jQuery.

JSON is a structured data response that can better play the role initially delegated to XML. It’s nicer because it can be worked with in JavaScript without parsing.

Using jQuery just removes boilerplate code and makes a lot happen in just a few lines.

Combine this with the awesome JSON support built into Yesod, and I was able to get my MPD controller updating via JSON with much cleaner code than the XML version.

For those that are interested, here is how you set up a page in Yesod to reply with either the normal HTML or a JSON response depending on what the client asks for.

This means I don’t need a separate status.xml to deliver the updated info, just call the current page url but ask for JSON this time.

-- note: nowPlaying returns a 'Maybe NowPlaying' data type which holds 
-- (Just) all the information about the currently playing song or a 
-- 'Nothing' if there's, well, nothing playing.
getStatusR :: YesodMPC m => GHandler MPC m RepHtmlJson
getStatusR = do
    mnp <- nowPlaying
    defaultLayoutJson (htmlReply mnp) (jsonReply mnp)

    where
        -- here's what's returned if the client requests HTML:
        htmlReply mnp = do
            addJulius [$julius|
                // add your jQuery here (see below).
                |]

            case mnp of
                Just np -> [$hamlet| 
                    <div #artist>#{npArtist np}
                    <div #album>#{npAlbum np}
                    <div #title>#{npTitle np}
                    |]

                Nothing -> [$hamlet| Nothing playing? |]

        -- and what's returned if the client requests JSON data:
        jsonReply mnp = case mnp of
            Just np -> jsonMap
                [ ("status"  , jsonScalar $ "OK"       )
                , ("artist"  , jsonScalar $ npArtist np)
                , ("album"   , jsonScalar $ npAlbum  np)
                , ("title"   , jsonScalar $ npTitle  np)
                ]

            Nothing -> jsonMap
                [ ("status", jsonScalar $ "ERR"               )
                , ("error" , jsonScalar $ "MPD threw an error")
                ]

So now my JavaScript can be simplified (don’t forget to source some jquery.js in the head of your page):

function getNowPlaying() {
    var delay = 1000; // seconds * 1000

    if (delay != 0) {
        $.getJSON(window.location.href, {}, function(o) { // <- Ajax jQuery style...
            if (o.status == "OK") {
                // update each div with the info in "o"
            }
        });

        // and, loop again
        setTimeout("getNowPlaying();", delay);
    }
}

$(function() { getNowPlaying(); });

Pretty cool, huh?

29 Jan 2011, tagged with self

Posts Database

This post is crazy out of date. If you’re interested in the updated ways in which I accomplish the same goals on yesod 0.9, feel free to check out the site’s source.

For the longest time since my move to Yesod, I’ve had the listing of posts on this site embedded in its source. This is not only poor practice, but it made it kludgy to add a new post to the site. I’d have to write the post, add the Post info to the source file, then recompile my development sources and deploy them to production.

This marriage of code and content made it impossible to write posts and develop anything new in the framework at the same time because both actions required a “push to PROD”.

The actual content for each post is stored in a markdown file and parsed at run time, so I could at least edit content while the site is live. Ideally, I would’ve put the post meta-data (title, date published, rss description, and tags) as a header in this markdown file and have that parsed at runtime as well. This would’ve been difficult when it came to the rss description. Parsing multi-line tokens is no fun.
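To illustrate, single-line tokens are the easy case; a hypothetical sketch of that kind of header parsing (none of these functions exist in the site’s source) shows why a multi-line description wouldn’t fit the scheme:

```haskell
-- hypothetical sketch: parse "key: value" lines from the top of a
-- markdown file, stopping at the first blank line. A description
-- spanning several lines has no natural end marker in this scheme.
parseHeader :: String -> [(String, String)]
parseHeader = map go . takeWhile (not . null) . lines
    where
        go l = let (k, v) = break (== ':') l
               in (k, dropWhile (== ' ') (drop 1 v))
```

Each header line splits cleanly at its first colon; a value that continued onto the next line would be indistinguishable from a new key.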

So given the requirement that post content and post meta-data would live separately I decided to give Persist a try. I could store post information in a small sqlite database and access it through some management pages to add, remove, and update post meta-data on my site.

This approach required me to work through three aspects of Yesod that aren’t extremely well documented: Persistent, Forms, and Authentication. I figured I’d share what I did in this post and maybe others can benefit (or point out what I’ve done wrong).

I’m going to skip over any required extensions or imports just to keep the code presented here simple and somewhat readable.

If you’re able to use Yesod in some way, you can probably decipher ghc errors enough to figure out what’s needed.


Persistent

First up was making CUD actions possible on Posts (the existing data type).

I had to make my site an instance of YesodPersist which will allow abstract database actions to happen in the GHandler Monad:

-- I added this to Settings.hs:
dataBase :: String
dataBase = "posts.db3"

withConnectionPool :: MonadInvertIO m => (ConnectionPool -> m a) -> m a
withConnectionPool = withSqlitePool dataBase 10

-- And this to the main DevSite.hs file:
instance YesodPersist DevSite where
    type YesodDB DevSite = SqlPersist
    runDB db = fmap connPool getYesod >>= runSqlPool db

Next up, I had to use some template haskell to define the storable entries:

-- I elected to put this code right in Posts.hs which handles all the 
-- Post related stuff already:
share2 mkPersist (mkMigrate "migratePosts") [$persist|
SqlPost
    slug        String
    date        UTCTime Desc
    title       String
    descr       String
    UniqueSqlPost slug
SqlTag
    post SqlPostId Eq
    name String Asc
|]

This creates two tables. One to hold all the post information and a second to just hold the Post-to-Tag relationships.

We also have to add the migration function to the main method which will initialize these tables on first run:

-- This goes in my Controller.hs:
withServer :: (Application -> IO a) -> IO a
withServer f = withConnectionPool $ \p -> do
    runSqlPool (runMigration migratePosts) p -- right here
    let h = DevSite p
    toWaiApp h >>= f

With all the boilerplate in place, we can write some functions to actually do the selects, inserts, and deletes (I cheat and actually do a delete-then-insert for any updates):

-- | The data type of a single post, this is what we actually want to work 
--   with in all our other code
data Post = Post
    { postSlug  :: String
    , postDate  :: UTCTime
    , postTitle :: String
    , postDescr :: String
    , postTags  :: [String]
    }

-- | Select n recent posts from the database and return them
selectPosts :: Int -> Handler [Post]
selectPosts n = mapM go =<< runDB (selectList [] [SqlPostDateDesc] n 0)

    where
        go :: (Key SqlPost, SqlPost) -> Handler Post
        go (sqlPostKey, sqlPost) = do
            -- tags for this post
            sqlTags <- runDB $ selectList [SqlTagPostEq sqlPostKey] [SqlTagNameAsc] 0 0
            return Post
                { postSlug  = sqlPostSlug  sqlPost
                , postDate  = sqlPostDate  sqlPost
                , postTitle = sqlPostTitle sqlPost
                , postDescr = sqlPostDescr sqlPost
                , postTags  = fmap (sqlTagName . snd) sqlTags
                }

-- | Insert a post into the database
insertPost :: Post -> Handler ()
insertPost post = do
    let sqlPost = SqlPost
            { sqlPostSlug  = postSlug post
            , sqlPostDate  = postDate post
            , sqlPostTitle = postTitle post
            , sqlPostDescr = postDescr post
            }

    -- insert the Post record
    sqlPostKey <- runDB $ insert sqlPost

    -- insert each tag record
    mapM_ (go sqlPostKey) $ postTags post

    where
        go :: SqlPostId -> String -> Handler SqlTagId
        go key tag = runDB (insert $ SqlTag key tag)

-- | Delete an existing post by slug
deletePost :: String -> Handler ()
deletePost slug = do
    sqlPost <- runDB $ getBy $ UniqueSqlPost slug
    case sqlPost of
        Just (sqlPostKey, _) -> do
            -- delete the post and the tags
            runDB $ deleteBy $ UniqueSqlPost slug
            runDB $ deleteWhere [SqlTagPostEq sqlPostKey]
        Nothing -> return ()

As an example of the simple case, selecting a post out of the database and displaying it, here is my handler for a GET request on the post route plus some of the immediate support functions:

-- | This was already in place with the hardcoded Posts so I just put it 
--   in the Handler Monad and call from the database instead.
--   I know, it's silly to not make the slug part of a targeted select, 
--   but with such a small database this is fine and much easier to 
--   code.
getPostBySlug :: String -> Handler [Post]
getPostBySlug slug = do
    allPosts <- selectPosts 0
    return $ filter ((== slug) . postSlug) allPosts

-- | Used with posts so that we have post-specific info within scope
--   while still abstracting the overall template/css
postLayout :: Post -> Handler RepHtml
postLayout post = do
    mmesg       <- getMessage
    (t, h)      <- breadcrumbs
    postContent <- loadPostContent post -- parses the markdown file

    pc <- widgetToPageContent $ do
        setTitle $ string $ "pbrisbin - " ++ postTitle post
        addCassius $(S.cassiusFile "root-css")
    hamletToRepHtml $(S.hamletFile "post-layout")

-- | Load a Post
getPostR :: String -> Handler RepHtml
getPostR slug = do
    posts <- getPostBySlug slug
    case posts of
        []       -> notFound
        (post:_) -> postLayout post

So now that that part’s done, we need a page where we can edit and delete the existing posts. That will require a form.


Forms

I probably have a bit more boilerplate here than I need, but oh well. I had a lot of this code already in yesod-comments, where I needed a bit more customization in the form, so I reused it.

The first thing we need is an overall page which has an “Add new post” form at the top and a table of existing posts with links to edit and delete them:

-- | The overall template showing the input box and a list of existing
--   posts
managePostTemplate :: String -> Widget () -> Enctype -> Widget ()
managePostTemplate title form enctype = do
    posts <- liftHandler $ selectPosts 0
    addHamlet [$hamlet|
        <div .post_input>
            <h3>#{string title}

            <form enctype=#{enctype} method="post">
                ^{form}

        <div .posts_existing>
            <h3>Existing posts:

            <table>
                $forall post <- posts
                    <tr>
                        <td>
                            <a href=@{PostR $ postSlug post}> #{shortenShort $ postTitle post}
                        <td>#{shortenLong $ postDescr post}
                        <td>
                            <a href=@{EditPostR $ postSlug post}> edit
                        <td>
                            <a href=@{DelPostR $ postSlug post}> delete
        |]

    where
        shortenLong  = shorten 40 
        shortenShort = shorten 15 
        shorten n s  = if length s > n then take n s ++ "..." else s

Don’t worry about EditPostR or DelPostR yet, we’ll get to those.

Now we need to code the Form itself. The way I do it is I create a data type whose records represent the Form fields. Then, when I run the form, I’ll use a function to convert that datatype into the thing I really want from the Form (a Post) with any required conversions or time stamping happening there.

To make things a little more flexible, I’m going to pass an initial argument to most of these functions. If that argument is Just Post, then I’m editing an existing post and I will pre-populate the “new” form with its information and update rather than insert on submit. If that first argument is Nothing, then it’s a truly new Post and I’ll continue as such.

This is why I pass title to the function above; it might say “Edit …” or “Add …” accordingly.

-- the form data type
data PostForm = PostForm
    { formSlug  :: String
    , formTitle :: String
    , formTags  :: String
    , formDescr :: Textarea
    }

-- | Convert form input into a Post and update the db.
updatePostFromForm :: Maybe Post -> PostForm -> Handler ()
updatePostFromForm p pf = do
    postDate' <- if isJust p 
        -- preserve original publish date
        then return $ postDate $ fromJust p
        else liftIO getCurrentTime
    let post = Post
            { postSlug  = formSlug pf
            , postTitle = formTitle pf
            , postDescr = unTextarea $ formDescr pf
            , postDate  = postDate'
            , postTags  = parseTags $ formTags pf
            }

    if isJust p
        then do
            -- delete the original and insert a new version
            deletePost (postSlug post)
            insertPost post
            setMessage $ [$hamlet| %em post updated! |]
        else do
            insertPost post
            setMessage $ [$hamlet| %em post added! |]

    redirect RedirectTemporary ManagePostsR

-- | some minor changes to 
--   <https://github.com/fortytools/lounge/blob/master/Handler/Entry.hs#L57>
parseTags :: String -> [String]
parseTags [] = []
parseTags s  = let (l,s') = break (==',') $ dropWhile (==',') s
    in trim l : case s' of
        []      -> []
        (_:s'') -> parseTags s''

    where
        trim  = trim' . trim' 
        trim' = reverse . dropWhile isSpace

-- | Display the new post form itself. If the first argument is Just,
--   then use that to prepopulate the form
postForm :: Maybe Post -> FormMonad (FormResult PostForm, Widget ())
postForm post = do
    (slug       , fiSlug       ) <- stringField   "post slug:"   $ fmap postSlug  post
    (title      , fiTitle      ) <- stringField   "title:"       $ fmap postTitle post
    (tags       , fiTags       ) <- stringField   "tags:"        $ fmap (formatTags . postTags) post
    (description, fiDescription) <- textareaField "description:" $ fmap (Textarea . postDescr)  post
    return (PostForm <$> slug <*> title <*> tags <*> description, [$hamlet|
        <table>
            ^{fieldRow fiSlug}
            ^{fieldRow fiTitle}
            ^{fieldRow fiTags}
            ^{fieldRow fiDescription}

            <tr>
                <td colspan="2">
                    <input type="submit" value=#{buttonText}>
        |])

    where
        fieldRow fi = [$hamlet|
            <tr>
                <td>
                    <label for=#{fiIdent fi}> #{fiLabel fi}
                    <div .tooltip> #{fiTooltip fi}
                <td>
                    ^{fiInput fi}
                    $maybe error <- fiErrors fi
                        <div .form_error> #{error}
            |]

        formatTags = intercalate ", "
        buttonText = string $ if isJust post then "Update post" else "Add post"

-- | Run the post form and insert or update based on the entered data
runPostForm :: Maybe Post -> Widget ()
runPostForm post = do
    ((res, form), enctype) <- liftHandler . runFormMonadPost $ postForm post
    case res of
        FormMissing    -> return ()
        FormFailure _  -> return ()
        FormSuccess pf -> liftHandler $ updatePostFromForm post pf

    managePostTemplate title form enctype

    where
        title = if isJust post 
            then "Edit post:" 
            else "Add new post:"

Anyway, with all that boilerplate out of the way, we can define our routes.

We need to add the following to our main parseRoutes function first:

-- | Define all of the routes and handlers
mkYesodData "DevSite" [$parseRoutes|

/manage                ManagePostsR GET POST
/manage/edit/#String   EditPostR    GET POST
/manage/delete/#String DelPostR     GET

/auth AuthR Auth getAuth
|]
We’ll get to that AuthR bit a little later, but with the three routes defined we can create our handler functions for our various actions:

-- | Manage posts
getManagePostsR :: Handler RepHtml
getManagePostsR = pageLayout $ do
    setTitle $ string "pbrisbin - Manage posts"
    addHamlet [$hamlet| %h1 Manage Posts |]
    runPostForm Nothing

postManagePostsR :: Handler RepHtml
postManagePostsR = getManagePostsR

-- | Edit post
getEditPostR :: String -> Handler RepHtml
getEditPostR slug = do
    post <- getPostBySlug slug
    case post of
        []        -> notFound
        (post':_) -> pageLayout $ do
            setTitle $ string "pbrisbin - Edit post"
            addHamlet [$hamlet| %h1 Edit Post |]
            runPostForm $ Just post'

postEditPostR :: String -> Handler RepHtml
postEditPostR = getEditPostR

-- | Delete post
getDelPostR :: String -> Handler RepHtml
getDelPostR slug = do
    deletePost slug
    setMessage $ [$hamlet| %em post deleted! |]
    redirect RedirectTemporary ManagePostsR

At least that part is pretty easy after all the upfront forms work.

Now, you can start up your test server and head to localhost:3000/manage to try it out.

Manage Posts Screenshot 

The problem now is that if you were to push this live, everyone could mess with your database. We need authentication.


Authentication

This was the most difficult part of the whole thing.

There’s very little documentation on yesod-auth and any real world examples I could find were for social-networking type authentication where every login box had “register for a new account” logic tied into it. This was not what I needed.

I wanted to store a username and hashed password in my new Persistent database and validate incoming users (just me) against that.

I ended up writing my own AuthPlugin modeled after Yesod.Helpers.Auth.Email which does just that.

I’m not going to go into the details, if you want to view the code it’s on my github. I’d rather describe briefly how to use it.

This has now been added to the main yesod-auth package.

After copying the source for the module into your app directory, make your site an instance of YesodAuth and put my plugin in your list:

instance YesodAuth DevSite where
    type AuthId DevSite = UserId

    -- these are site specific
    loginDest _  = ManagePostsR
    logoutDest _ = RootR

    getAuthId    = getAuthIdHashDB AuthR 
    showAuthId _ = showIntegral
    readAuthId _ = readIntegral
    authPlugins  = [authHashDB]

That’s where the /auth addition to parseRoutes is used.

Add the migration function to your controller:

withServer :: (Application -> IO a) -> IO a
withServer f = withConnectionPool $ \p -> do
    runSqlPool (runMigration migratePosts) p
    runSqlPool (runMigration migrateUsers) p -- right here
    let h = DevSite p
    toWaiApp h >>= f
And that’s it!

Well kinda. You’ll need to add users to the database by hand, but that’s not too hard, here’s how I did mine:

$ echo -n 'MySuperAwesomePassword' | sha1sum
bf1bfb9af6e50018dacf19e1618e4fb5f981c14e  -
$ sqlite3 posts.db3
SQLite version 3.7.4
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> insert into user (username,password) 
   ...> values ('pbrisbin','bf1bfb9af6e50018dacf19e1618e4fb5f981c14e');
sqlite> .exit

And it’s incredibly easy to add authentication to any page, here’s how I adjusted my management routes:

getManagePostsR :: Handler RepHtml
getManagePostsR = do
    _ <- requireAuth -- that's it, right there!

    postForm <- runPostForm Nothing
    pageLayout $ do
        -- page content as before...

Now when you go to any page with authentication required, you’re hit with a Login box:

Login Screenshot 

Login and manage your posts.

09 Jan 2011, tagged with haskell, self

Site Migration

20:24 rson: if there is anything i could ever suggest that you'd listen to, let
            it be this.  do it.

Wise words from someone who’s been there before. That’s rson telling me that I should move my site to some sort of framework. Make things cleaner, easier to maintain, and get away from that goddamn php I seem to be so fond of.

I had been thinking about doing this myself for quite some time. As silly as it sounds, I was unhappy with my urls. The whole site (from a purely url-appearance standpoint) was inconsistent. I dreamed of /feed/ and /posts/my_post/.

I could also feel my spider web of php and html spiralling away from me. I was spending too much time monitoring comments, tweaking the syntax highlighting, and figuring out the best way to format breadcrumbs based not only on filepath but also on custom translations from content.php to all posts and the like.


Then I found Yesod, a web framework in Haskell. As anyone who’s ever been to this site knows, I love Haskell. It’s just a cool language. So if I were going to move to some sort of framework, this would be it.

So, using the Yesod Docs, the haddock documentation, and even the actual source for the Yesod Docs, I was able to hobble my site over to the framework. It wasn’t easy, but there’s a lot of benefit there.

My breadcrumbs went from 100 lines of php to about 14 lines of Haskell. And those 14 lines are simply defining what Routes are children of what other Routes.
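In Yesod, this sort of thing is done by making the site an instance of YesodBreadcrumbs, where each Route simply names its title and its parent Route. A hedged sketch of the idea — the route constructors are this site’s, but the titles and the exact set of cases here are guesses, not the site’s actual code:

    -- sketch: each route returns its title and (maybe) its parent;
    -- the framework walks the chain to build the crumb trail
    instance YesodBreadcrumbs DevSite where
        breadcrumb RootR        = return ("home", Nothing)
        breadcrumb (PostR slug) = return (slug, Just RootR)
        breadcrumb ManagePostR  = return ("manage posts", Just RootR)
        breadcrumb _            = return ("404", Nothing)

Compare that to hand-formatting crumbs from filepaths: the parent/child relationships live in one place and the rendering is handled for you.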

My posts have tags now. This extra bit of post-metadata was even added later without disrupting any existing code.

My RSS feed is dynamically created whenever it’s loaded.

And probably most important of all, urls used throughout the site are type safe, compile-time-guaranteed to be valid.

What that means is that I don’t type the url directly; I insert a Haskell function that corresponds to that page’s Route. And no, they aren’t built from regular expressions; each Route is generated as a distinct type as defined by me.

Routes can also have arguments. Right now you’re viewing the output of the PostR Route using site_migration as its argument. But the best part is that every link in the site is validated at compile time: the compiler ensures each one is in scope and type checks!
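In a Hamlet template that looks roughly like the following; the @{...} interpolation takes a Route value and resolves it to a url at compile time. This is a sketch, not the site’s actual templates:

    -- hypothetical widget; a typo'd constructor or a wrong argument
    -- to PostR would be a compile-time error, not a dead link
    linkWidget = [hamlet|
        <a href=@{RootR}>home
        <a href=@{PostR "site_migration"}>this post
    |]

Rename a Route and every template that links to it fails to compile until it’s updated.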

Sell Out!

As part of the transition, I’m also giving up some control over code snippets and comments. I enjoyed the DIY approach but it was getting cumbersome (and less and less KISS as things went on).

Instead, I’m stealing two more ideas from the Yesod Docs site. The new site uses GitHub’s gist feature for code snippets and disqus for comments. I know, I originally said I, “didn’t want to farm comments out to 3rd party javascript,” but disqus is really nice and I’m getting sick of all the overhead that comes with my homebrew php setup.

I’m really sorry to anyone who’s left comments so far on the site. I appreciate them greatly. I still have them and I’ll continue to look into ways to port them over to disqus, but so far, it’s not looking too promising.

I’ve changed my approach to posts and am now using pandoc to write them. This means I don’t need gist anymore thanks to pandoc’s great syntax highlighting features. I’m also working on my own Yesod module for Comments to get things back the way they were on the old site. That’s a bit of a work in progress at the moment and will be its own post when it’s done… I’ll be keeping disqus around for a while.


Another change I’m making is from Apache over to Lighttpd (pronounced: lighty). To be honest, I just couldn’t get (Fast)CGI working with apache and I had it running with lighttpd in minutes. Hopefully it’ll be faster and easier to maintain too, we’ll see…

So anyway, enjoy the new site; let me know if anything is broken or missing – I’m still in the process of migrating old posts, so give me some time before reporting that.

The site’s source is also in my git repo if anyone’s interested.

10 Oct 2010, tagged with haskell, self

PHP Authentication

Recently I had the opportunity to write some php pages (some mine, some others) that required simple authentication. Nothing worthy of social security or credit card numbers, but just enough to keep something from being public.

In my case it was an admin script for the comments left on this site. I could view all of the most recent comments and click a link to mark any as spam. Doing this would remove all comments made with that IP address as well as blacklist it from any future additions.

Anyway, the authentication part was simple. It only took a little googling, so I thought I’d share the method I landed on.


First, I wrote a small php script to hold the authentication logic. It would have one function, authenticate(), that accepts an array of (user => password) values. If authentication fails, the page can’t go any further; it just keeps prompting for user/pass until there’s a valid login or the user hits cancel, at which point you’ll see a Not authorized page.

It serves its purpose easily, with the added bonus that it’s hidden behind a simple authenticate() call that I can update as needed.


function do_auth() {
    // prompt for credentials
    header('WWW-Authenticate: Basic realm="pbrisbin dot com"');
    header('HTTP/1.0 401 Unauthorized');

    // if user cancels
    header('Content-type: text/plain');
    echo 'Not authorized.';
    exit;
}

function authenticate($_valid_users) {
    // credentials not supplied
    if (!isset($_SERVER['PHP_AUTH_USER']))
        do_auth();

    $user = $_SERVER['PHP_AUTH_USER'];
    $pass = $_SERVER['PHP_AUTH_PW'];

    // user not known
    if (!isset($_valid_users[$user]))
        do_auth();

    // bad password
    if ($_valid_users[$user] != $pass)
        do_auth();
}

Usage is fairly simple; on any page that needs authentication, use the following:

<?php require_once('path/to/authentication.php');

$valid_users = array( 'user1' => 'password1'
                    , 'user2' => 'password2'
                    );

authenticate($valid_users);

// rest of page logic...


Is it awesome? Is it safe? Is it secure? Probably not. But it serves the purpose I need. And, is it easy? Yes.

The PHP header() function has to be called before your page generates any output. This means you can’t embed this authentication logic in a page with any printed HTML (static or coded) ahead of it.

02 Oct 2010, tagged with self