For as long as I’ve built Docker images on CI, I’ve fought the layer caching problem. Working on Haskell projects with many dependencies, an un-cached multi-stage build can take close to an hour. That’s a deal-breaker for deployments, where ten minutes is a reasonable maximum.
At some point, Circle quietly released a docker_layer_caching flag in their setup_remote_docker workflow step, and I happened to get the main Restyled image (restyled/restyled.io) into the beta. It was hit-or-miss generally, but it’s now being hard-blocked behind a very expensive paywall – hence my renewed interest in alternatives.
For a time, I wired up a Rube Goldberg string of webhooks to get my images building on Quay.io, because it supposedly offered good layer caching. After seeing a 0% hit rate, I emailed support. They said, and I’m only paraphrasing slightly here, “That’s broken right now, and we have no timeline for fixing it.”
Most recently, I went back to Docker Hub (or is it Docker Cloud?), because they have a wonderful Automated Builds interface with a little “Cache Layers?” check box; it seemed to work perfectly! The only issues?
- Just having my Automated Builds trigger reliably at all was rare, and
- There’s no way to just build every push to a tag that is the git sha
I decided it was time to just handle the process myself. The answer was an implementation of the Remote Image Cache section of this blog post, which I’ll now detail.
The Basics
The basics of this solution are simple:
- Pull the last image you built first
- Build using
--cache-from <that>
- Push what you built, for next time
Pretty easy:
# Pull whatever you built last
docker pull restyled/restyled.io:x
# Build new, using prior work as much as possible
docker build \
--tag restyled/restyled.io:x \
--cache-from restyled/restyled.io:x \
.
# Update the cache
docker push restyled/restyled.io:x
# Actual deployment
docker tag restyled/restyled.io:x restyled/restyled.io:prod
docker push restyled/restyled.io:prod
As usual, there is ample incidental complexity:
- We probably want branch-specific tags, and a fallback to master
- Each PR should work with its own cache, and the first push should start with master as its cache
- Doing this with multi-stage builds is… annoying (see below)
Multi-stage 101
For compiled software, it’s common to use a multi-stage Docker build:
# Stage 1
FROM fat-image-with-compiler-toolchain AS builder
RUN mkdir -p /src
WORKDIR /src
COPY src .
RUN make my-exe
# Stage 2
FROM slim-image
RUN mkdir -p /app
WORKDIR /app
COPY --from=builder /src/my-exe /app/
CMD ["/app/my-exe"]
This results in a slim final image, which only contains the executable (and possibly other runtime libraries). And if you’re doing all your builds on a single machine, caching Just Works. But if you’re trying to use --cache-from
, you can’t push and pull just the final image, since it won’t have any of the builder
layers (by design). Somehow, you have to separately build, push, and pull the builder
stage too.
The Not-So-Basics
Accounting for said incidental complexity, here is what I’m actually doing…
Let’s assume some inputs:
# Git branch name, sanitized for use as an image tag
branch=
# The (REGISTRY/)NAME(:TAG) you actually want to deploy
image=
# The (REGISTRY/)NAME image, without any TAG (hence _base)
image_base=
And the following two helper functions:
docker-pull-tag() {
# Given remote and local image names, pull {remote}, then tag it as {local}
}
docker-tag-push() {
# Given remote and local image names, tag {local} as {remote}, then push it
}
(Please forgive the hand-waving here, there are links to full source code at the end.)
1- Load Cache
First, try to pull an image tagged for our branch, then master, then entirely un-prefixed. Stop on the first one to succeed. Each step tags the pulled image to the same thing, so we can use that tag later, and we’ll be working with the most specific cache image we were able to find.
For each of these, we have to pull two images: one with the suffix -builder
, for our first stage, and another (un-suffixed) one for our runtime image. Wrap it up in a function for the convenience of return
and an overall || true
(so as to not anger the set -e
gods).
pull_cached() {
echo ":: Pulling cached images for $branch"
docker-pull-tag "$image_base:$branch-builder" "$image_base:builder" &&
docker-pull-tag "$image_base:$branch" "$image_base" &&
return 0
echo ":: Pulling cached images for master"
docker-pull-tag "$image_base:master-builder" "$image_base:builder" &&
docker-pull-tag "$image_base:master" "$image_base" &&
return 0
echo ":: Pulling unprefixed cache images"
docker-pull-tag "$image_base:builder" &&
docker-pull-tag "$image_base"
}
pull_cached || true
2- Build
Next, build the stages separately, so that we can store a cache image of each stage. The second, runtime build also uses the first image as a cache source, so this is negligibly slower than doing it all at once.
echo ":: Building builder image"
docker build \
--tag "$image_base:builder" \
--cache-from "$image_base:builder" \
--target builder \
"$@" \
.
echo ":: Building image"
docker build \
--tag "$image_base" \
--cache-from "$image_base:builder" \
--cache-from "$image_base" \
"$@" \
.
3- Store Cache
Finally, push the cache images back. We always update our branch tags and the un-prefixed ones. We don’t update master here since we wouldn’t want to do that from a non-master branch. And if we’re actually on master, then branch=master
anyway.
echo ":: Pushing cached images for $branch"
docker-tag-push "$image_base:builder" "$image_base:$branch-builder"
docker-tag-push "$image_base" "$image_base:$branch"
echo ":: Pushing unprefixed cached images"
docker-tag-push "$image_base:builder"
docker-tag-push "$image_base"
NOTE: I actually put this before the build step as a trap,
push_cached() {
echo ":: Pushing cached images for $branch"
docker-tag-push "$image_base:builder" "$image_base:$branch-builder"
docker-tag-push "$image_base" "$image_base:$branch"
echo ":: Pushing unprefixed cached images"
docker-tag-push "$image_base:builder"
docker-tag-push "$image_base"
}
trap 'push_cached || true' EXIT
echo ":: Building builder image"
# ...
If the second build fails, we would still preserve a cache of the first.
4- Deploy
And then, do whatever it is you need to actually deploy the thing…
echo ":: Pushing final image"
docker-tag-push "$image_base" "$image"
Installation & Usage
The scripts involved can be found here.
You’ll just need the 3 docker-
files on $PATH
, then usage is:
docker-build-remote-cache <[REGISTRY/]NAME[:TAG]> [DOCKER BUILD OPTION...]
The script assumes you have two stages, and the first uses AS builder
. I’d gladly welcome a PR that makes that configurable, perhaps by script argument.
Circle
If you’re using Circle, and your current step isn’t doing anything particularly exotic, you should be able to use my actual restyled/ops
image for this and swap out docker build
with docker-build-remote-cache
:
version: 2.0

jobs:
  image:
    docker:
      - image: restyled/ops:v5
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.09.3
      - run:
          name: Build
          command: |
            docker login \
              -u "$DOCKERHUB_USERNAME" \
              -p "$DOCKERHUB_PASSWORD"

            # The goods 👇
            docker-build-remote-cache \
              "your-registry/$CIRCLE_PROJECT_NAME:$CIRCLE_SHA1"
You can also verify your build locally first. Within your repository, run:
docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume "$HOME"/.docker/config.json:/root/.docker/config.json:ro \
--volume "$PWD":/build:ro \
--workdir /build \
restyled/ops:v5 docker-build-remote-cache <registry/image:tag>
The best part is that the caching from this local build is re-usable on CI. If you push to CI from the same branch, you should see a 100% cached build.
Result
For restyled/restyled.io
, where an un-cached build can take up to an hour, most of my PRs have been finishing the image
job in around 5 minutes. So incurring the transfer cost of moving all the images around seems well worth it. And by putting this all inside the restyled/ops
image, which can be used directly as a Circle job’s environment, I can “port” this to all my projects trivially.
17 Oct 2019, tagged with docker
I maintain an Auth plugin for authenticating with Google OAuth2 in a Yesod application. This plugin has always had functionality overlap with the GoogleEmail2
plugin in the yesod-auth
package. Our Google plugin was present, removed (due to said overlap), then returned again, along with some discussion about deprecating GoogleEmail2
in favor of it. What was missing was the documentation for migrating.
Things lived happily in this state for some time, until the deprecation of the Google+ API sparked the discussion again. GoogleEmail2
relies on this API, but OAuth2.Google
does not. Therefore, it makes more sense to push the ecosystem towards our plugin. That means we need to document the migration path, which is what this blog post is.
Caveat
The following describes the fastest way to migrate your codebase, by changing as little about your application as is required to maintain existing functionality under the new plugin. However, I would consider it an introduction of Technical Debt. I encourage you to spend the time to actually alter your application to align better with how the new plugin does things. How to do that would be application-specific, so I don’t offer concrete guidance here – but my hope is that after following the “fast way” below, you will understand enough about the differences between the plugins to know what to best do in your own codebase.
Migration
Actually changing plugins is as simple as you might expect:
Add the yesod-auth-oauth2
package to your cabal file or package.yaml
Update your authPlugins
:
-import Yesod.Auth.GoogleEmail2
+import Yesod.Auth.OAuth2.Google
-authPlugins = [authGoogleEmailSaveToken clientId clientSecret]
+authPlugins = [oauth2GoogleScoped ["email", "profile"] clientId clientSecret]
This will result in:
- The API token no longer being present in the session post-authentication
- The Creds value seen in authenticate differing; most importantly, the credsIdent value will no longer be the user’s email address
If neither of these matter to you (or are trivial to deal with in your application), you are done.
Assuming that’s not the case, the following is an example authenticate
function that masks these differences at that seam. That way, downstream code shouldn’t have to change:
import Yesod.Auth.OAuth2 (getAccessToken, getUserResponseJSON)
data GoogleUser
    = GoogleUser
        { name :: Text
        , email :: Text
        -- And any other fields you need from /userinfo. See below.
        }
    deriving Generic

instance FromJSON GoogleUser
authenticate creds = do
    Just (AccessToken token) <- getAccessToken creds
    setSession "_GOOGLE_ACCESS_TOKEN" token

    Right user <- getUserResponseJSON creds

    let updatedCreds = Creds
            { credsPlugin = "googleemail2"
            , credsIdent = email user
            , credsExtra =
                [ ("name", name user)
                -- And any other fields you were relying on. See below.
                ]
            }

    -- Proceed as before, but using updatedCreds
This approach simplifies, and makes explicit, the values you’ll find in credsExtra
. This may or may not be problematic to your application, but it is unavoidable. GoogleEmail2
was requesting a Person
resource from the deprecated /plus/v1/people/me
endpoint and serializing the entire JSON Value
into [(Text, Text)]
in an ad hoc way. The former will stop working some time in March and the latter is generally discouraged as a way of handling data in Haskell.
For migration purposes, this Person
resource is much richer and so cannot be fully re-created from the simpler /userinfo
response that OAuth2.Google
provides:
{
"id": "999999999999999999999",
"email": "you@gmail.com",
"verified_email": true,
"name": "Your Name",
"given_name": "Your",
"family_name": "Name",
"link": "https://plus.google.com/999999999999999999999",
"picture": "https://lh3.googleusercontent.com/...",
"locale": "en"
}
If you were relying on data not present here, you will need to make additional API calls to retrieve it.
For an example of transitioning a real application, see this commit.
Addendum: it’s likely that after following these instructions, you’ll encounter:
Error: redirect_uri_mismatch
when trying to log into your application.
That would be because your OAuth2 application only allows redirects to the googleemail2
plugin’s callback URL. You’ll just need to update that in the Developer Console to allow .../auth/page/google/callback
too.
08 Feb 2019, tagged with haskell, yesod
The following are all the things I want in place for a Haskell project. This is primarily a copy-paste-able reference for myself, but I’ve also tried to explain or generalize some things to make it useful for anyone first bootstrapping a Haskell project.
NOTE: if you were brought here after googling something like “how to Haskell on Circle 2.0”, you’ll just need the Makefile
and .circleci/config.yml
.
.gitignore
*.cabal
.stack-work
stack.yaml
---
resolver: lts-12.19
ghc-options:
  "$locals": -fhide-source-paths
package.yaml
---
name: {package-name}
version: 0.0.0.0 # EPOCH.MAJOR.MINOR.PATCH
category:
synopsis: Short synopsis
description: >
  Longer, wrapping description.
author:
maintainer:
github: {username}/{package-name}
license: MIT

ghc-options: -Wall

dependencies:
  - base >=4.8.0 && <5 # GHC 7.10+

library:
  source-dirs: src
  dependencies:
    - text # for example

tests:
  # More on this later
2. Run Everything Through make
Makefile
all: setup build test lint

.PHONY: setup
setup:
	stack setup
	stack build --dependencies-only --test --no-run-tests
	stack install hlint weeder

.PHONY: build
build:
	stack build --pedantic --test --no-run-tests

.PHONY: test
test:
	stack test

.PHONY: lint
lint:
	hlint .
	weeder .
3. Use Hspec
package.yaml
tests:
  spec:
    main: Spec.hs
    source-dirs: test
    dependencies:
      - {package-name}
      - hspec
test/Spec.hs
{-# OPTIONS_GHC -F -pgmF hspec-discover #-}
Add modules that export a spec :: Spec
function and match test/**/*Spec.hs
.
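For example, a minimal discoverable module might look like this (the file and module names here are hypothetical, just to illustrate the convention):
test/AdditionSpec.hs
module AdditionSpec (spec) where

import Test.Hspec

spec :: Spec
spec = describe "addition" $
    it "adds two numbers" $
        (1 + 1) `shouldBe` (2 :: Int)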
4. Use Doctest
package.yaml
tests:
  spec:
    # ...

  doctest:
    main: Main.hs
    source-dirs: doctest
    dependencies:
      - doctest
doctest/Main.hs
module Main (main) where
import Test.DocTest
main :: IO ()
main = doctest ["-XOverloadedStrings", "src"]
Fill your Haddocks with executable examples.
-- | Strip whitespace from the end of a string
--
-- >>> stripEnd "foo "
-- "foo"
--
stripEnd :: String -> String
stripEnd = -- ...
See the Doctest documentation for more details.
5. Always Be Linting
As you saw, we have a make lint
target that uses HLint and Weeder. I also have my editor configured to run stylish-haskell
on write.
.hlint.yaml
---
- ignore:
    name: Redundant do
    within: spec
.stylish-haskell.yaml
---
steps:
  - simple_align:
      cases: false
      top_level_patterns: false
      records: false
  - imports:
      align: none
      list_align: after_alias
      pad_module_names: false
      long_list_align: new_line_multiline
      empty_list_align: right_after
      list_padding: 4
      separate_lists: false
      space_surround: false
  - language_pragmas:
      style: vertical
      align: false
      remove_redundant: true
  - trailing_whitespace: {}
columns: 80
newline: native
The defaults for weeder
are usually fine for me.
If you’re interested in having style fixes automatically resolved as part of your Pull Request process, check out Restyled.
6. Use Circle 2.0
When you set up the project, make sure you say it’s Haskell via the Other option in the language select; maybe they’ll add better support in the future.
.circleci/config.yml
---
version: 2.0

jobs:
  build:
    docker:
      - image: fpco/stack-build:lts-9.18
    steps:
      - checkout
      - run:
          name: Digest
          command: |
            # Bust cache on any tracked file changing. We'll still fall back to
            # the most recent cache for this branch, or master though.
            git ls-files | xargs md5sum > digest
      - restore_cache:
          keys:
            - stack-{{ .Branch }}-{{ checksum "digest" }}
            - stack-{{ .Branch }}-
            - stack-master-
            - stack-
      - run:
          name: Dependencies
          command: make setup
      - run:
          name: Build
          command: make build
      - save_cache:
          key: stack-{{ .Branch }}-{{ checksum "digest" }}
          paths:
            - ~/.stack
            - ./.stack-work
      - run:
          name: Test
          command: make test
      - run:
          name: Lint
          command: make lint
Quite nice.
Don’t forget to enable “build forked Pull Requests” in Circle’s settings.
7. Release to Hackage
I wrap this up in my own hackage-release script, but here are the relevant actions:
stack build --pedantic --test
stack upload .
And it’s a good practice to tag releases:
git tag --sign --message "v$version" "v$version"
git push --follow-tags
8. Add to Stackage
Check the documentation here. In short, just open a Pull Request adding yourself and/or your package to build-constraints.yaml
. It can be done without even leaving GitHub.
You should ensure your package builds “on nightly”. I add a target for this to my Makefile
:
.PHONY: check-nightly
check-nightly:
	stack setup --resolver nightly
	stack build --resolver nightly --pedantic --test
Sometimes I have this run on CI, sometimes I don’t.
16 Dec 2017, tagged with haskell
In a recent effort to keep my latest laptop more standard and less customized, I’ve been experimenting with XTerm over my usual choice of rxvt-unicode. XTerm is installed with the xorg
group, expected by the template ~/.xinitrc
, and is the terminal opened by most window managers’ default keybindings.
The only downside so far has been the inability to select and open URLs via the keyboard. This is trivial to configure in urxvt
, but seems impossible in xterm
. Last week, not having this became painful enough that I sat down to address it.
UPDATE: After a few weeks of use, discovering and attempting to fix a number of edge-case issues, I’ve decided to stop playing whack-a-mole and just move back to urxvt
. Your mileage may vary, and if the setup described here works for you that’s great, but I can no longer fully endorse it.
I should’ve listened to 2009 me.
Step 1: charClass
Recent versions of XTerm allow you to set a charClass
value which determines what XTerm thinks are WORDs when doing a triple-click selection. If you do a bit of googling, you’ll find there’s a way to set this charClass
such that it considers URLs as WORDs, so you can triple-click on a URL and it’ll select it and only it.
~/.Xresources:
xterm*charClass: 33:48,37-38:48,45-47:48,64:48,58:48,126:48,61:48,63:48,43:48,35:48
I don’t recommend trying to understand what this means.
Step 2: exec-formatted
Now that we can triple-click-select URLs, we can leverage another feature of modern XTerms, exec-formatted
, to automatically send the selection to our browser, instead of middle-click pasting it ourselves:
~/.Xresources:
*VT100.Translations: #override \n\
Alt <Key>o: exec-formatted("chromium '%t'", PRIMARY) select-start() select-end()
Step 3: select-needle
You might be satisfied there. You can triple-click a URL and hit a keybinding to open it, no patching required. However, I despise the mouse, so we need to avoid that triple-click.
Here’s where select-needle
comes in. It’s a patch I found on the Arch forums that allows you to, with a keybinding, select the first WORD that includes some string, starting from the cursor or any current selection.
What this means is we can look for the first WORD containing “://” and select it. You can hit the keybinding again to search up for the next WORD, or hit our current exec-formatted
keybinding to open it. Just like the functionality present in urxvt
.
I immediately found the patch didn’t work in mutt, which is a deal breaker. It seemed to rely on the URL being above screen->cursorp
and mutt doesn’t really care about a cursor so it often leaves it at (0, 0)
, well above any URLs on screen. So I changed the algorithm to instead start at the bottom of the terminal always, regardless of where the cursor is. So far this has been working reliably.
I put the updated patch, along with a PKGBUILD for installing it, on GitHub. I’ll eventually post it to the AUR to make this easier, but for now:
git clone https://github.com/pbrisbin/xterm-select-needle
(cd ./xterm-select-needle && makepkg -i)
rm -r ./xterm-select-needle
Then update that ~/.Xresources entry to:
*VT100.Translations: #override \n\
Alt <Key>u: select-needle("://") select-set(PRIMARY) \n\
Alt <Key>o: exec-formatted("chromium '%t'", PRIMARY) select-start() select-end()
And that’s it.
17 Dec 2016, tagged with arch
Deleting Git tags that have already been pushed to your remote is something I have to google literally every time I do it; why the invocation is so arcane, I don’t know. Finally, I decided to automate it with a custom sub-command:
~/.local/bin/git-delete-tag
#!/bin/sh
for tag; do
git tag -d "$tag"
git push origin :refs/tags/"$tag"
done
With this script present on $PATH
, I can just invoke git delete-tag TAG, ...
. This is great, but I soon noticed that typing git dele<tab>
wouldn’t complete this command (or any custom sub-commands for that matter). After a little digging in the _git
completion file, I found the relevant zstyle
needed to get this working:
.zshrc
zstyle ':completion:*:*:hub:*' user-commands ${${(M)${(k)commands}:#git-*}/git-/}
Since I’m actually invoking hub
, a git
wrapper with added functionality for interacting with GitHub, I had to use :hub:
in place of :git:
, which is what the documentation shows.
I also wanted git delete-tag <tab>
to complete with the current tags for the repository. Again, the extension points in the Zsh tab-completion system shine, and it only took a little _git-
completion function to make it happen:
.zshrc
_git-delete-tag() { compadd "$@" $(git tag) }
Hopefully this short post will come in useful for Git and Zsh users who, like myself, can never remember how to delete Git tags. As always, you can find the described configuration “in the wild” by way of my dotfiles repo. These items will be within the scripts
or zsh
tags.
24 Jun 2016, tagged with git
A while back, I launched a side project called tee-io. It’s sort of like a live pastebin. You use its API to create a command and then send it buffered output, usually a line at a time. Creating the command gives you a URL where you can watch the output come in in real time. We use it at work to monitor the commands run by our bot instead of waiting for the (potentially long) command to finish and report all the output back to us at once.

While working on this project, which is built with Yesod, I started to settle on some conventions for things I’ve not seen written up in the wild. I’d like to collect my thoughts here, both for myself and in case these conventions are useful to others.
Worker
One thing tee-io does that I think is common but under-served in the tutorial space is background work. In addition to the main warp-based binary, it’s often necessary to run something on a schedule and do periodic tasks. In tee-io’s case, I want to archive older command output to S3 every 10 minutes.
My approach is to define a second executable target:
executable tee-io-worker
  if flag(library-only)
    buildable: False

  main-is: main-worker.hs
  hs-source-dirs: app
  build-depends: base
               , tee-io
  ghc-options: -Wall -Werror -threaded -O2 -rtsopts -with-rtsopts=-N
This is basically a copy-paste of the existing executable, and the implementation is also similar:
import Prelude (IO)
import Worker (workerMain)
main :: IO ()
main = workerMain
workerMain
uses the “unsafe” handler
function to run a Handler
action as IO
:
workerMain :: IO ()
workerMain = handler $ do
    timeout <- appCommandTimeout . appSettings <$> getYesod

    archiveCommands timeout
archiveCommands :: Second -> Handler ()
archiveCommands timeout = runDB $ -- ...
Making the heavy lifting a Handler ()
means I have access to logging, the database, and any other configuration present in a fully-inflated App
value. It’s certainly possible to write this directly in IO
, but the only real downside to Handler
is that if I accidentally try to do something request or response-related, it won’t work. In my opinion, pragmatism outweighs principle in this case.
Logging
One of the major functional changes I make to a scaffolded Yesod project is around AppSettings
, and specifically logging verbosity.
I like to avoid the #define DEVELOPMENT
stuff as much as possible. It’s required for template-reloading and similar settings because there’s no way to give the functions that need to know those settings an IO
context. For everything else, I prefer environment variables.
In keeping with that spirit, I replace the compile-time, logging-related configuration fields with a single, env-based log-level
:
Settings.hs
instance FromJSON AppSettings where
    parseJSON = withObject "AppSettings" $ \o -> do
        let appStaticDir = "static"

        appDatabaseConf <- fromDatabaseUrl
            <$> o .: "database-pool-size"
            <*> o .: "database-url"
        appRoot <- o .: "approot"
        appHost <- fromString <$> o .: "host"
        appPort <- o .: "port"
        appIpFromHeader <- o .: "ip-from-header"
        appCommandTimeout <- fromIntegral
            <$> (o .: "command-timeout" :: Parser Integer)
        S3URL appS3Service appS3Bucket <- o .: "s3-url"
        appMutableStatic <- o .: "mutable-static"
        appLogLevel <- parseLogLevel <$> o .: "log-level" -- ^ here

        return AppSettings{..}

      where
        parseLogLevel :: Text -> LogLevel
        parseLogLevel t = case T.toLower t of
            "debug" -> LevelDebug
            "info" -> LevelInfo
            "warn" -> LevelWarn
            "error" -> LevelError
            _ -> LevelOther t
config/settings.yml
approot: "_env:APPROOT:http://localhost:3000"
command-timeout: "_env:COMMAND_TIMEOUT:300"
database-pool-size: "_env:PGPOOLSIZE:10"
database-url: "_env:DATABASE_URL:postgres://teeio:teeio@localhost:5432/teeio"
host: "_env:HOST:*4"
ip-from-header: "_env:IP_FROM_HEADER:false"
log-level: "_env:LOG_LEVEL:info"
mutable-static: "_env:MUTABLE_STATIC:false"
port: "_env:PORT:3000"
s3-url: "_env:S3_URL:https://s3.amazonaws.com/tee.io"
I don’t use config/test-settings.yml
and prefer to inject whatever variables are appropriate for the given context directly. To make that easier, I load .env
files through my load-env package in the appropriate places.
.env (development)
COMMAND_TIMEOUT=5
LOG_LEVEL=debug
MUTABLE_STATIC=true
S3_URL=https://s3.amazonaws.com/tee.io.development
.env.test
DATABASE_URL=postgres://teeio:teeio@localhost:5432/teeio_test
LOG_LEVEL=error
S3_URL=http://localhost:4569/tee.io.test
Now I can adjust my logging verbosity in production with a simple heroku config:set
, whereas before I needed a compilation and deployment to do that!
Yesod applications log in a few different ways, so there are a handful of touch-points where we need to check this setting. To make that easier, I put a centralized helper alongside the data type in Settings.hs
:
allowsLevel :: AppSettings -> LogLevel -> Bool
AppSettings{..} `allowsLevel` level = level >= appLogLevel
The first place to use it is the shouldLog
member of the Yesod
instance:
shouldLog App{..} _source level = appSettings `allowsLevel` level
Second is the logging middleware. It’s a little tricky to get the right behavior here because, with the default scaffold, this logging always happens. It has no concept of level and wasn’t attempting to make use of shouldLog
in any way.
The approach I landed on was to change the destination to (basically) /dev/null
if we’re not logging at INFO
or lower. That’s equivalent to if these messages were tagged INFO
and respected our configured level, which seems accurate to me. The big win here is they no longer mess up my test suite output.
makeLogWare foundation = mkRequestLogger def
    { outputFormat = if appSettings foundation `allowsLevel` LevelDebug
        then Detailed True
        else Apache apacheIpSource
    , destination = if appSettings foundation `allowsLevel` LevelInfo
        then Logger $ loggerSet $ appLogger foundation
        else Callback $ \_ -> return ()
    }
One last thing, specific to tee-io, is that I can use this setting to turn on debug logging in the AWS library I use:
logger <- AWS.newLogger (if appSettings foundation `allowsLevel` LevelDebug
    then AWS.Debug
    else AWS.Error) stdout
It’s pretty nice to set LOG_LEVEL=debug
and start getting detailed logging for all AWS interactions. Kudos to amazonka for having great logging too.
REPL-Driven-Development
DevelMain.hs
has quickly become my preferred way to develop Yesod applications. This file ships with the scaffold and defines a module for starting, stopping, or reloading an instance of your development server directly from the REPL:
stack repl --ghc-options="-DDEVELOPMENT -O0 -fobject-code"
λ> :l DevelMain
DevelMain.update
Devel application launched: http://localhost:3000
The big win here in my opinion is that, in addition to viewing changes in your local browser, you naturally fall into a REPL-based workflow. It’s not something I was actively missing in Yesod projects, but now that I’m doing it, it feels really great.
I happen to have a nice Show
instance for my settings, which I can see with handler
:
λ> appSettings <$> handler getYesod
log_level=LevelDebug host=HostIPv4 port=3000 root="http://localhost:3000"
db=[user=teeio password=teeio host=localhost port=5432 dbname=teeio]
s3_bucket=tee.io.development command_timeout=5s
(Line breaks added for readability, here and below.)
And I can investigate or alter my local data easily with db
:
λ> db $ selectFirst [] [Desc CommandCreatedAt]
Just (Entity
{ entityKey = CommandKey
{ unCommandKey = SqlBackendKey {unSqlBackendKey = 1097} }
, entityVal = Command
{ commandToken = Token {tokenUUID = e79dae2c-020e-48d4-ac0b-6d9c6d79dbf4}
, commandDescription = Just "That example command"
, commandCreatedAt = 2016-02-11 14:50:19.786977 UTC
}
})
λ>
Finally, this makes it easy to locally test that worker process:
λ> :l Worker
λ> workerMain
16/Apr/2016:14:08:28 -0400 [Debug#SQL]
SELECT "command"."id", ...
FROM "command"
LEFT OUTER JOIN "output"
ON ("command"."id" = "output"."command")
AND ("output"."created_at" > ?)
WHERE (("command"."created_at" < ?)
AND ("output"."id" IS NULL))
; [ PersistUTCTime 2016-04-16 18:08:23.903484 UTC
, PersistUTCTime 2016-04-16 18:08:23.903484 UTC
]
16/Apr/2016:14:08:28 -0400 [Info] archive_commands count=1
@(main:Worker /home/patrick/code/pbrisbin/tee-io/src/Worker.hs:37:7)
[Client Request] {
host = s3.amazonaws.com:443
secure = True
method = PUT
target = Nothing
timeout = Just 70000000
redirects = 0
path = /tee.io.development/b9a74a98-0b16-4a23-94f1-5df0a01667d0
query =
headers = ...
body = ...
}
[Client Response] {
status = 200 OK
headers = ...
}
16/Apr/2016:14:08:28 -0400 [Debug#SQL] SELECT "id", "command", ...
16/Apr/2016:14:08:28 -0400 [Debug#SQL] DELETE FROM "output" WHERE ...
16/Apr/2016:14:08:28 -0400 [Debug#SQL] DELETE FROM "command" WHERE ...
16/Apr/2016:14:08:28 -0400 [Info] archived token=b9a74a98-0b16-4a23-94f1-5df0a01667d0
@(main:Worker /home/patrick/code/pbrisbin/tee-io/src/Worker.hs:59:7)
λ>
Since I run with DEBUG
in development, and that was picked up by the REPL, we can see all the S3 and database interactions the job goes through.
The console was one of the features I felt was lacking when first coming to Yesod from Rails. I got used to not having it, but I’m glad to see there have been huge improvements in this area while I wasn’t paying attention.
Deployment
I’ve been watching the deployment story for Yesod and Heroku change drastically over the past few years. From compiling on a VM, to a GHC build pack, to Halcyon, the experience hasn’t exactly been smooth. Well, it seems I might have been right in the conclusion of that last blog post:
Docker […] could solve these issues in a complete way by accident.
We now have a Heroku plugin for using Docker to build a slug in a container identical to their Cedar infrastructure, then extracting and releasing it via their API.
Everything we ship at work is Docker-based, so I’m very comfortable with the concepts and machine setup required (which isn’t much), so using this release strategy for my Yesod applications has been great. Your mileage may vary though: while I do feel it’s the best approach available today, there may be some bumps and yaks for those not already familiar with Docker – especially if on an unfortunate operating system, like OS X.
Thanks to the good folks at thoughtbot, who are maintaining a base image for releasing a stack-based project using this Heroku plugin, making tee-io deployable to Heroku looked like this:
% cat Procfile
web: ./tee-io
% cat app.json
{
"name": "tee.io",
"description": "This is required for heroku docker:release"
}
% cat docker-compose.yml
# This is required for heroku docker:release
web:
  build: .
% cat Dockerfile
FROM thoughtbot/heroku-haskell-stack:lts-5.12
MAINTAINER Pat Brisbin <pbrisbin@gmail.com>
And I just run:
heroku docker:release
And that’s it!
If you’re interested in seeing any of the code examples here in the context of the real project, check out the tee-io source on GitHub.
16 Apr 2016, tagged with haskell, yesod
What follows is a literate haskell file runnable via ghci
. The raw source for this page can be found here.
While reading Understanding Computation again last night, I was going back through the chapter where Tom Stuart describes deterministic and non-deterministic finite automata. These simple state machines seem like little more than a teaching tool, but he eventually uses them as the implementation for a regular expression matcher. I thought seeing this concrete use for such an abstract idea was interesting and wanted to re-enforce the ideas by implementing such a system myself – with Haskell, of course.
Before we get started, we’ll just need to import some libraries:
import Control.Monad.State
import Data.List (foldl')
import Data.Maybe
Patterns and NFAs
We’re going to model a subset of regular expression patterns.
data Pattern
    = Empty                  -- ""
    | Literal Char           -- "a"
    | Concat Pattern Pattern -- "ab"
    | Choose Pattern Pattern -- "a|b"
    | Repeat Pattern         -- "a*"
    deriving Show
With this, we can build “pattern ASTs” to represent regular expressions:
ghci> let p = Choose (Literal 'a') (Repeat (Literal 'b')) -- /a|b*/
It’s easy to picture a small parser to build these out of strings, but we won’t do that as part of this post. Instead, we’ll focus on converting these patterns into Nondeterministic Finite Automata or NFAs. We can then use the NFAs to determine if the pattern matches a given string.
To explain NFAs, it’s probably easiest to explain DFAs, their deterministic counterparts, first. Then we can go on to describe how NFAs differ.
A DFA is a simple machine with states and rules. The rules describe how to move between states in response to particular input characters. Certain states are special and flagged as “accept” states. If, after reading a series of characters, the machine is left in an accept state it’s said that the machine “accepted” that particular input.
An NFA is the same with two notable differences: First, an NFA can have rules to move it into more than one state in response to the same input character. This means the machine can be in more than one state at once. Second, there is the concept of a Free Move which means the machine can jump between certain states without reading any input.
Modeling an NFA requires a type with rules, current states, and accept states:
type SID = Int -- State Identifier
data NFA = NFA
    { rules :: [Rule]
    , currentStates :: [SID]
    , acceptStates :: [SID]
    } deriving Show
A rule defines what characters tell the machine to change states and which state to move into.
data Rule = Rule
    { fromState :: SID
    , inputChar :: Maybe Char
    , nextStates :: [SID]
    } deriving Show
Notice that nextStates
and currentStates
are lists. This is to represent the machine moving to, and remaining in, more than one state in response to a particular character. Similarly, inputChar
is a Maybe
value because it will be Nothing
in the case of a rule representing a Free Move.
If, after processing some input, any of the machine’s current states (or any states we can reach via a free move) are in its list of “accept” states, the machine has accepted the input.
accepts :: NFA -> [Char] -> Bool
accepts nfa = accepted . foldl' process nfa
  where
    accepted :: NFA -> Bool
    accepted nfa = any (`elem` acceptStates nfa) (currentStates nfa ++ freeStates nfa)
Processing a single character means finding any followable rules for the given character and the current machine state, and following them.
process :: NFA -> Char -> NFA
process nfa c = case findRules c nfa of
    -- Invalid input should cause the NFA to go into a failed state.
    -- We can do that easily, just remove any acceptStates.
    [] -> nfa { acceptStates = [] }
    rs -> nfa { currentStates = followRules rs }
findRules :: Char -> NFA -> [Rule]
findRules c nfa = filter (ruleApplies c nfa) $ rules nfa
A rule applies if
- The read character is a valid input character for the rule, and
- That rule applies to an available state
ruleApplies :: Char -> NFA -> Rule -> Bool
ruleApplies c nfa r =
    maybe False (c ==) (inputChar r) &&
    fromState r `elem` availableStates nfa
An “available” state is one which we’re currently in, or can reach via Free Moves.
availableStates :: NFA -> [SID]
availableStates nfa = currentStates nfa ++ freeStates nfa
The process of finding free states (those reachable via Free Moves) gets a bit hairy. We need to start from our current state(s) and follow any Free Moves recursively. This ensures that Free Moves which lead to other Free Moves are correctly accounted for.
freeStates :: NFA -> [SID]
freeStates nfa = go [] (currentStates nfa)
  where
    go acc [] = acc
    go acc ss =
        let ss' = filter (`notElem` acc) $ followRules $ freeMoves nfa ss
        in go (acc ++ ss') ss'
(Many thanks go to Christopher Swenson for spotting an infinite loop here and fixing it by filtering out any states already in the accumulator)
Free Moves from a given set of states are rules for those states which have no input character.
freeMoves :: NFA -> [SID] -> [Rule]
freeMoves nfa ss = filter (\r ->
    (fromState r `elem` ss) && (isNothing $ inputChar r)) $ rules nfa
Of course, the states that result from following rules are simply the concatenation of those rules’ next states.
followRules :: [Rule] -> [SID]
followRules = concatMap nextStates
Now we can model an NFA and see if it accepts a string or not. You could test this in ghci
by defining an NFA in state 1 with an accept state 2 and a single rule that moves the machine from 1 to 2 if the character “a” is read.
ghci> let nfa = NFA [Rule 1 (Just 'a') [2]] [1] [2]
ghci> nfa `accepts` "a"
True
ghci> nfa `accepts` "b"
False
Pretty cool.
What we need to do now is construct an NFA whose rules for moving from state to state are derived from the nature of the pattern it represents. Only if the NFA we construct moves to an accept state for a given string of input does it mean the string matches that pattern.
matches :: String -> Pattern -> Bool
matches s = (`accepts` s) . toNFA
We’ll define toNFA
later, but if you’ve loaded this file, you can play with it in ghci
now:
ghci> "" `matches` Empty
True
ghci> "abc" `matches` Empty
False
And use it in an example main
:
main :: IO ()
main = do
    -- This AST represents the pattern /ab|cd*/:
    let p = Choose
            (Concat (Literal 'a') (Literal 'b'))
            (Concat (Literal 'c') (Repeat (Literal 'd')))

    print $ "xyz" `matches` p
    -- => False

    print $ "cddd" `matches` p
    -- => True
Before I show toNFA
, we need to talk about mutability.
A Bit About Mutable State
Since Pattern
is a recursive data type, we’re going to have to recursively create and combine NFAs. For example, in a Concat
pattern, we’ll need to turn both sub-patterns into NFAs then combine those in some way. In the Ruby implementation, Mr. Stuart used Object.new
to ensure unique state identifiers between all the NFAs he has to create. We can’t do that in Haskell. There’s no global object able to provide some guaranteed-unique value.
What we’re going to do to get around this is conceptually simple, but appears complicated because it makes use of monads. All we’re doing is defining a list of identifiers at the beginning of our program and drawing from that list whenever we need a new identifier. Because we can’t maintain that as a variable we constantly update every time we pull an identifier out, we’ll use the State
monad to mimic mutable state through our computations.
I apologize for the naming confusion here. This State
type is from the Haskell library and has nothing to with the states of our NFAs.
First, we take the parameterized State s a
type, and fix the s
variable as a list of (potential) identifiers:
type SIDPool a = State [SID] a
This makes it simple to create a nextId
action which requests the next identifier from this list as well as updates the computation’s state, removing it as a future option before presenting that next identifier as its result.
nextId :: SIDPool SID
nextId = do
    (x:xs) <- get
    put xs
    return x
This function can be called from within any other function in the SIDPool
monad. Each time called, it will read the current state (via get
), assign the first identifier to x
and the rest of the list to xs
, set the current state to that remaining list (via put
) and finally return the drawn identifier to the caller.
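As a quick sanity check in ghci (a sketch, drawing two identifiers from an infinite pool):
ghci> evalState ((,) <$> nextId <*> nextId) [1..]
(1,2)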
Pattern ⇒ NFA
Assuming we have some function buildNFA
which handles the actual conversion from Pattern
to NFA
but is in the SIDPool
monad, we can evaluate that action, supplying an infinite list as the potential identifiers, and end up with an NFA with unique identifiers.
toNFA :: Pattern -> NFA
toNFA p = evalState (buildNFA p) [1..]
As mentioned, our conversion function, lives in the SIDPool
monad, allowing it to call nextId
at will. This gives it the following type signature:
buildNFA :: Pattern -> SIDPool NFA
Every pattern is going to need at least one state identifier, so we’ll pull that out first, then begin a case analysis on the type of pattern we’re dealing with:
buildNFA p = do
    s1 <- nextId

    case p of
The empty pattern results in a predictably simple machine. It has one state which is also an accept state. It has no rules. If it gets any characters, they’ll be considered invalid and put the machine into a failed state. Giving it no characters is the only way it can remain in an accept state.
        Empty -> return $ NFA [] [s1] [s1]
Also simple is the literal character pattern. It has two states and a rule between them. It moves from the first state to the second only if it reads that character. Since the second state is the only accept state, it will only accept that character.
        Literal c -> do
            s2 <- nextId

            return $ NFA [Rule s1 (Just c) [s2]] [s1] [s2]
We can model a concatenated pattern by first turning each sub-pattern into their own NFAs, and then connecting the accept state of the first to the start state of the second via a Free Move. This means that as the combined NFA is reading input, it will only accept that input if it moves through the first NFAs states into what used to be its accept state, hop over to the second NFA, then move into its accept state. Conceptually, this is exactly how a concatenated pattern should match.
Note that freeMoveTo
will be shown after.
        Concat p1 p2 -> do
            nfa1 <- buildNFA p1
            nfa2 <- buildNFA p2

            let freeMoves = map (freeMoveTo nfa2) $ acceptStates nfa1

            return $ NFA
                (rules nfa1 ++ freeMoves ++ rules nfa2)
                (currentStates nfa1)
                (acceptStates nfa2)
We can implement choice by creating a new starting state, and connecting it to both sub-patterns’ NFAs via Free Moves. Now the machine will jump into both NFAs at once, and the composed machine will accept the input if either of the paths leads to an accept state.
        Choose p1 p2 -> do
            s2 <- nextId
            nfa1 <- buildNFA p1
            nfa2 <- buildNFA p2

            let freeMoves =
                    [ freeMoveTo nfa1 s2
                    , freeMoveTo nfa2 s2
                    ]

            return $ NFA
                (freeMoves ++ rules nfa1 ++ rules nfa2) [s2]
                (acceptStates nfa1 ++ acceptStates nfa2)
A repeated pattern is probably hardest to wrap your head around. We need to first convert the sub-pattern to an NFA, then we’ll connect up a new start state via a Free Move (to match 0 occurrences), then we’ll connect the accept state back to the start state (to match repetitions of the pattern).
        Repeat p -> do
            s2 <- nextId
            nfa <- buildNFA p

            let initMove = freeMoveTo nfa s2
                freeMoves = map (freeMoveTo nfa) $ acceptStates nfa

            return $ NFA
                (initMove : rules nfa ++ freeMoves) [s2]
                (s2 : acceptStates nfa)
And finally, our little helper which connects some state up to an NFA via a Free Move.
  where
    freeMoveTo :: NFA -> SID -> Rule
    freeMoveTo nfa s = Rule s Nothing (currentStates nfa)
That’s It
I want to give a big thanks to Tom Stuart for writing Understanding Computation. That book has opened my eyes in so many ways. I understand why he chose Ruby as the book’s implementation language, but I find Haskell to be better-suited to these sorts of modeling tasks. Hopefully he doesn’t mind me exploring that by rewriting some of his examples.
07 Apr 2014, tagged with haskell
Every time I read Learn You a Haskell, I get something new out of it. This most recent time through, I think I’ve finally gained some insight into the Applicative
type class.
I’ve been writing Haskell for some time and have developed an intuition and explanation for Monad
. This is probably because monads are so prevalent in Haskell code that you can’t help but get used to them. I knew that Applicative
was similar but weaker, and that it should be a super class of Monad
but since it arrived later it is not. I now think I have a general understanding of how Applicative
is different, why it’s useful, and I would like to bring anyone else who glossed over Applicative
on the way to Monad
up to speed.
The Applicative
type class represents applicative functors, so it makes sense to start with a brief description of functors that are not applicative.
Values in a Box
A functor is any container-like type which offers a way to transform a normal function into one that operates on contained values.
Formally:
fmap :: Functor f      -- for any functor,
     => (  a ->   b)   -- take a normal function,
     -> (f a -> f b)   -- and make one that works on contained values
Some prefer to think of it like this:
fmap :: Functor f -- for any functor,
     => (a -> b)  -- take a normal function,
     -> f a       -- and a contained value,
     -> f b       -- and return the contained result of applying that
                  -- function to that value
Because (->)
is right-associative, we can reason about and use this function either way – with the former being more useful to the current discussion.
This is the first small step in the ultimate goal between all three of these type classes: allow us to work with values with context (in this case, a container of some sort) as if that context weren’t present at all. We give a normal function to fmap
and it sorts out how to deal with the container, whatever it may be.
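For example (a small sketch, using Maybe as the container):

fmap (+1) (Just 2) -- => Just 3
fmap (+1) Nothing  -- => Nothing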
Functions in a Box
To say that a functor is “applicative”, we mean that the contained value can be applied. In other words, it’s a function.
An applicative functor is any container-like type which offers a way to transform a contained function into one that can operate on contained values.
(<*>) :: Applicative f  -- for any applicative functor,
      => f (a -> b)     -- take a contained function,
      -> (f a -> f b)   -- and make one that works on contained values
Again, we could also think of it like this:
(<*>) :: Applicative f -- for any applicative functor,
      => f (a -> b)    -- take a contained function,
      -> f a           -- and a contained value,
      -> f b           -- and return a contained result
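Again as a sketch with Maybe, this time applying a contained function to a contained value:

Just (+3) <*> Just 2 -- => Just 5
Just (+3) <*> Nothing -- => Nothing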
Applicative functors also have a way to take an un-contained function and put it into a container:
pure :: Applicative f -- for any applicative functor,
     => (a -> b)      -- take a normal function,
     -> f (a -> b)    -- and put it in a container
In actuality, the type signature is simpler: a -> f a
. Since a
literally means “any type”, it can certainly represent the type (a -> b)
too.
pure :: Applicative f => a -> f a
Understanding this is very important for understanding the usefulness of Applicative
. Even though the type signature for (<*>)
starts with f (a -> b)
, it can also be used with functions taking any number of arguments.
Consider the following:
:: f (a -> b -> c) -> f a -> f (b -> c)
Is this (<*>)
or not?
Instead of writing its signature with b
, lets use a question mark:
(<*>) :: f (a -> ?) -> f a -> f ?
Indeed it is: substitute the type (b -> c)
for every ?
, rather than the simple b
in the actual class definition.
One In, One Out
What you just saw was a very concrete example of the benefits of how (->)
works. When we say “a function of n arguments”, we’re actually lying. All functions in Haskell take exactly one argument. Multi-argument functions are really single-argument functions that return other single-argument functions that accept the remaining arguments via the same process.
Using the question mark approach, we see that multi-argument functions are actually of the form:
a -> ?
And it’s entirely legal for that ?
to be replaced with (b -> ?)
, and for that ?
to be replaced with (c -> ?)
and so on ad infinitum. Thus you have the appearance of multi-argument functions.
As is common with Haskell, this results in what appears to be a happy coincidence, but is actually the product of developing a language on top of such a consistent mathematical foundation. You’ll notice that after using (<*>)
on a function of more than one argument, the result is not a wrapped result, but another wrapped function – does that sound familiar? Exactly, it’s an applicative functor.
Let me say that again: if you supply a function of more than one argument and a single wrapped value to (<*>)
, you end up with another applicative functor which can be given to (<*>)
yet again with another wrapped value to supply the remaining argument to that original function. This can continue as long as the function needs more arguments. Exactly like normal function application.
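A sketch of that chaining, once more with Maybe:

pure (+) <*> Just 1            -- a wrapped function, Maybe (Integer -> Integer)
pure (+) <*> Just 1 <*> Just 2 -- => Just 3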
A “Concrete” Example
Consider what this might look like if you start with a plain old function that (conceptually) takes more than one argument, but the values that it wants to operate on are wrapped in some container.
-- A normal function
f :: (a -> b -> c)
f = -- ...
-- One contained value, suitable for its first argument
x :: Applicative f => f a
x = -- ...
-- Another contained value, suitable for its second
y :: Applicative f => f b
y = -- ...
How do we pass x
and y
to f
to get some overall result? You wrap the function with pure
then use (<*>)
repeatedly:
result :: Applicative f => f c
result = pure f <*> x <*> y
The first portion of that expression is very interesting: pure f <*> x
. What is this bit doing? It’s taking a normal function and applying it to a contained value. Wait a second, normal functors know how to do that!
Since in Haskell every Applicative
is also a Functor
, that means it could be rewritten equivalently as fmap f x
, turning the whole expression into fmap f x <*> y
.
Never satisfied, Haskell introduced a function called (<$>)
which is just fmap
but infix. With this alias, we can write:
f <$> x <*> y
Not only is this epically concise, but it looks exactly like f x y
which is how this code would be written if there were no containers involved. Here we have another, more powerful step towards the goal of writing code that has to deal with some context (in our case, still that container) without actually having to care about that context. You write your function like you normally would, then add (<$>)
and (<*>)
between the arguments.
What’s the Point?
With all of this background knowledge, I came to a simple mental model for applicative functors vs monads: Monad is for series, whereas Applicative is for parallel. This has nothing to do with concurrency or evaluation order; it’s only a concept I use to judge when a particular abstraction is better suited to the problem at hand.
Let’s walk through a real example.
Building a User
In an application I’m working on, I’m doing OAuth based authentication. My domain has the following (simplified) user type:
data User = User
    { userFirstName :: Text
    , userLastName :: Text
    , userEmail :: Text
    }
During the process of authentication, an OAuth endpoint provides me with some profile data which ultimately comes back as an association list:
type Profile = [(Text, Text)]
-- Example:
-- [ ("first_name", "Pat" )
-- , ("last_name" , "Brisbin" )
-- , ("email" , "me@pbrisbin.com")
-- ]
Within this list, I can find user data via the lookup
function which takes a key and returns a Maybe
value. I had to write the function that builds a User
out of this list of profile values. I also had to propagate any Maybe
values by returning Maybe User
.
First, let’s write this without exploiting the fact that Maybe
is a monad or an applicative:
buildUser :: Profile -> Maybe User
buildUser p =
    case lookup "first_name" p of
        Nothing -> Nothing
        Just fn -> case lookup "last_name" p of
            Nothing -> Nothing
            Just ln -> case lookup "email" p of
                Nothing -> Nothing
                Just e -> Just $ User fn ln e
Oof.
Treating Maybe
as a Monad
makes this much, much cleaner:
buildUser :: Profile -> Maybe User
buildUser p = do
    fn <- lookup "first_name" p
    ln <- lookup "last_name" p
    e <- lookup "email" p

    return $ User fn ln e
Up until a few weeks ago, I would’ve stopped there and been extremely proud of myself and Haskell. Haskell for supplying such a great abstraction for potential failed lookups, and myself for knowing how to use it.
Hopefully, the content of this blog post has made it clear that we can do better.
Series vs Parallel
Using Monad
means we have the ability to access the values returned by earlier lookup
expressions in later ones. That ability is often critical, but not always. In many cases (like here), we do nothing but pass them all as-is to the User
constructor “at once” as a last step.
This is Applicative
, I know this.
-- f :: a -> b -> c -> d
User :: Text -> Text -> Text -> User
-- x :: f a
lookup "first_name" p :: Maybe Text
-- y :: f b
lookup "last_name" p :: Maybe Text
-- z :: f c
lookup "email" p :: Maybe Text
-- result :: f d
-- result = f <$> x <*> y <*> z
buildUser :: Profile -> Maybe User
buildUser p = User
    <$> lookup "first_name" p
    <*> lookup "last_name" p
    <*> lookup "email" p
And now, I understand when to reach for Applicative
over Monad
. Perhaps you do too?
30 Mar 2014, tagged with haskell
Lately at work, I’ve been fortunate enough to work on a JSON API which I was given the freedom to write in Yesod. I was a bit hesitant at first since my only Yesod experience has been richer html-based sites and I wasn’t sure what support (if any) there was for strictly JSON APIs. Rails has a number of conveniences for writing concise controllers and standing up APIs quickly – I was afraid Yesod may be lacking.
I quickly realized my hesitation was unfounded. The process was incredibly smooth and Yesod comes with just as many niceties that allow for rapid development and concise code when it comes to JSON-only API applications. Couple this with all of the benefits inherent in using Haskell, and it becomes clear that Yesod is well-suited to sites of this nature.
In this post, I’ll outline the process of building such a site, explain some conventions I’ve landed on, and discuss one possible pitfall when dealing with model relations.
Note: The code in this tutorial was extracted from a current project and is in fact working there. However, I haven’t test-compiled the examples exactly as they appear in the post. It’s entirely possible there are typos and the like. Please reach out on Twitter or via email if you run into any trouble with the examples.
What We Won’t Cover
This post assumes you’re familiar with Haskell and Yesod. It also won’t cover some important but un-interesting aspects of API design. We’ll give ourselves arbitrary requirements and I’ll show only the code required to meet those.
Specifically, the following will not be discussed:
- Haskell basics
- Yesod basics
- Authentication
- Embedding relations or side-loading
- Dealing with created-at or updated-at fields
Getting Started
To begin, let’s get a basic Yesod site scaffolded out. How you do this is up to you, but here’s my preferred steps:
$ mkdir ./mysite && cd ./mysite
$ cabal sandbox init
$ cabal install alex happy yesod-bin
$ yesod init --bare
$ cabal install --dependencies-only
$ yesod devel
The scaffold comes with a number of features we won’t need. You don’t have to remove them, but if you’d like to, here they are:
- Any existing models
- Any existing routes/templates
- Authentication
- Static file serving
Models
For our API example, we’ll consider a site with posts and comments. We’ll keep things simple, additional models or attributes would just mean more lines in our JSON instances or more handlers of the same basic form. This would result in larger examples, but not add any value to the tutorial.
Let’s go ahead and define the models:
config/models
Post
  title Text
  content Text

Comment
  post PostId
  content Text
JSON
It’s true that we can add a json
keyword in our model definition and get derived ToJSON
/FromJSON
instances for free on all of our models; we won’t do that though. I find these JSON instances, well, ugly. You’ll probably want your JSON to conform to some conventional format, be it jsonapi or Active Model Serializers. Client side frameworks like Ember or Angular will have better built-in support if your API conforms to something conventional. Writing the instances by hand is also more transparent and easily customized later.
Since what we do doesn’t much matter, only that we do it, I’m going to write JSON instances and endpoints to appear as they would in a Rails project using Active Model Serializers.
Model.hs
share [mkPersist sqlSettings, mkMigrate "migrateAll"]
$(persistFileWith lowerCaseSettings "config/models")
-- { "id": 1, "title": "A title", "content": "The content" }
instance ToJSON (Entity Post) where
toJSON (Entity pid p) = object
[ "id" .= (String $ toPathPiece pid)
, "title" .= postTitle p
, "content" .= postContent p
]
instance FromJSON Post where
parseJSON (Object o) = Post
<$> o .: "title"
<*> o .: "content"
parseJSON _ = mzero
-- { "id": 1, "post_id": 1, "content": "The comment content" }
instance ToJSON (Entity Comment) where
toJSON (Entity cid c) = object
[ "id" .= (String $ toPathPiece cid)
, "post_id" .= (String $ toPathPiece $ commentPost c)
, "content" .= commentContent c
]
-- We'll talk about this later
--instance FromJSON Comment where
Routes and Handlers
Let’s start with a RESTful endpoint for posts:
config/routes
/posts PostsR GET POST
/posts/#PostId PostR GET PUT DELETE
Since our API should return proper status codes, let’s add the required functions to Import.hs, making them available everywhere:
Import.hs
import Network.HTTP.Types as Import
( status200
, status201
, status400
, status403
, status404
)
Next we write some handlers:
Handlers/Posts.hs
getPostsR :: Handler Value
getPostsR = do
posts <- runDB $ selectList [] [] :: Handler [Entity Post]
return $ object ["posts" .= posts]
postPostsR :: Handler ()
postPostsR = do
post <- requireJsonBody :: Handler Post
_ <- runDB $ insert post
sendResponseStatus status201 ("CREATED" :: Text)
You’ll notice we need to add a few explicit type annotations. Normally, Haskell can infer everything for us, but in this case the reason for the annotations is actually pretty interesting. The selectList function will return any type that’s persistable. Normally we would simply treat the returned records as a particular type and Haskell would say, “Aha! You wanted a Post” and then, as if by time travel, selectList would give us appropriate results. In this case, all we do with the returned posts is pass them to object. Since object can work with any type that can be represented as JSON, Haskell doesn’t know which type we mean. We must remove the ambiguity with a type annotation somewhere.
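For what it’s worth, the annotation can live anywhere that resolves the ambiguity. Annotating the filter list works just as well; this is only an illustrative alternative, not what I actually use:

getPostsR :: Handler Value
getPostsR = do
    -- Annotating the (empty) filter list pins the record type to Post
    posts <- runDB $ selectList ([] :: [Filter Post]) []
    return $ object ["posts" .= posts]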
Handlers/Post.hs
getPostR :: PostId -> Handler Value
getPostR pid = do
post <- runDB $ get404 pid
return $ object ["post" .= (Entity pid post)]
putPostR :: PostId -> Handler Value
putPostR pid = do
post <- requireJsonBody :: Handler Post
runDB $ replace pid post
sendResponseStatus status200 ("UPDATED" :: Text)
deletePostR :: PostId -> Handler Value
deletePostR pid = do
runDB $ delete pid
sendResponseStatus status200 ("DELETED" :: Text)
I love how functions like get404 and requireJsonBody allow these handlers to be completely free of any error-handling concerns while still being safe and well-behaved.
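For contrast, here’s a rough sketch of what getPostR might look like if we did the lookup ourselves instead of reaching for get404 (illustrative only):

getPostR :: PostId -> Handler Value
getPostR pid = do
    -- get returns Maybe, so we have to handle the missing case by hand
    mpost <- runDB $ get pid
    case mpost of
        Nothing   -> notFound
        Just post -> return $ object ["post" .= Entity pid post]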
There’s going to be a small annoyance in our comment handlers, which I alluded to earlier by omitting the FromJSON instance on Comment. Before we get to that, let’s take care of the easy stuff:
config/routes
/posts/#PostId/comments CommentsR GET POST
/posts/#PostId/comments/#CommentId CommentR GET PUT DELETE
Handlers/Comments.hs
getCommentsR :: PostId -> Handler Value
getCommentsR pid = do
comments <- runDB $ selectList [CommentPost ==. pid] []
return $ object ["comments" .= comments]
-- We'll talk about this later
--postCommentsR :: PostId -> Handler ()
For the single-resource handlers, we’re going to assume that a CommentId is unique across posts, so we can ignore the PostId in these handlers.
Handlers/Comment.hs
getCommentR :: PostId -> CommentId -> Handler Value
getCommentR _ cid = do
comment <- runDB $ get404 cid
return $ object ["comment" .= (Entity cid comment)]
-- We'll talk about this later
--putCommentR :: PostId -> CommentId -> Handler ()
deleteCommentR :: PostId -> CommentId -> Handler ()
deleteCommentR _ cid = do
runDB $ delete cid
sendResponseStatus status200 ("DELETED" :: Text)
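As an aside: if you did want to enforce that the comment actually belongs to the post in the URL, one hedged sketch of an approach would be to 404 on a mismatch (when comes from Control.Monad):

getCommentR :: PostId -> CommentId -> Handler Value
getCommentR pid cid = do
    comment <- runDB $ get404 cid
    -- Reject comments that exist but hang off a different post
    when (commentPost comment /= pid) notFound
    return $ object ["comment" .= Entity cid comment]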
Handling Relations
Up until now, we’ve been able to define JSON instances for our model, use requireJsonBody, and insert the result. In this case however, the request body will be lacking the Post ID (since it’s in the URL). This means we need to parse a different but similar data type from the JSON, then use that and the URL parameter to build a Comment.
Helpers/Comment.hs
-- This datatype would be richer if Comment had more attributes. For now
-- we only have to deal with content, so I can use a simple newtype.
newtype CommentAttrs = CommentAttrs Text
instance FromJSON CommentAttrs where
parseJSON (Object o) = CommentAttrs <$> o .: "content"
parseJSON _ = mzero
toComment :: PostId -> CommentAttrs -> Comment
toComment pid (CommentAttrs content) = Comment
{ commentPost = pid
, commentContent = content
}
This may seem a bit verbose and even redundant, and there’s probably a more elegant way to get around this situation. Lacking that, I think the additional safety (vs the obvious solution of making commentPost a Maybe) and separation of concerns (vs putting this in the model layer) is worth the extra typing. It’s also very easy to use:
Handlers/Comments.hs
import Helpers.Comment
postCommentsR :: PostId -> Handler ()
postCommentsR pid = do
_ <- runDB . insert . toComment pid =<< requireJsonBody
sendResponseStatus status201 ("CREATED" :: Text)
Handlers/Comment.hs
import Helpers.Comment
putCommentR :: PostId -> CommentId -> Handler ()
putCommentR pid cid = do
runDB . replace cid . toComment pid =<< requireJsonBody
sendResponseStatus status200 ("UPDATED" :: Text)
We don’t need a type annotation on requireJsonBody in this case. Since the result is being passed to toComment pid, Haskell knows we want a CommentAttrs and uses its parseJSON function within requireJsonBody.
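If that inference feels too magical, you can of course spell it out. This version is equivalent, shown only for illustration:

postCommentsR :: PostId -> Handler ()
postCommentsR pid = do
    -- The annotation makes the inferred type explicit
    attrs <- requireJsonBody :: Handler CommentAttrs
    _ <- runDB $ insert $ toComment pid attrs
    sendResponseStatus status201 ("CREATED" :: Text)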
Conclusion
With a relatively small amount of time and code, we’ve written a fully-featured JSON API using Yesod. I think the JSON instances and API handlers are more concise and readable than the analogous Rails serializers and controllers. Our system is also far safer thanks to the type system and framework-provided functions like get404 and requireJsonBody, without us needing to explicitly deal with any of that.
I hope this post has shown that Yesod is indeed a viable option for projects of this nature.
22 Feb 2014, tagged with haskell, yesod
In lecture 5A of Structure & Interpretation of Computer Programs, Gerald Sussman introduces the idea of assignments, side effects and state. Before that, they had been working entirely in purely functional Lisp which could be completely evaluated and reasoned about using the substitution model. He states repeatedly that this is a horrible thing as it requires a far more complex view of programs. At the end of the lecture, he shows a compelling example of why we must introduce this horrible thing anyway; without it, we cannot decouple parts of our algorithms cleanly and would be reduced to huge single-function programs in some critical cases.
The example chosen in SICP is estimating π using Cesaro’s method. The method states that the probability that any two random numbers’ greatest common divisor equals 1 is itself equal to 6/π².
Since I know Ruby better than Lisp (and I’d venture my readers do too), here’s a ported version:
def estimate_pi(trials)
p = monte_carlo(trials) { cesaro }
Math.sqrt(6 / p)
end
def cesaro
rand.gcd(rand) == 1
end
def monte_carlo(trials, &block)
iter = ->(trials, passed) do
if trials == 0
passed
else
if block.call
iter.call(trials - 1, passed + 1)
else
iter.call(trials - 1, passed)
end
end
end
iter.call(trials, 0) / trials.to_f
end
I’ve written this code to closely match the Lisp version which used a recursive iterator. Unfortunately, this means that any reasonable number of trials will exhaust Ruby’s stack limit.
The code above also assumes a rand function which will return different random integers on each call. To do so, it must employ mutation and hold internal state:
def rand
@x ||= random_init
@x = random_update(@x)
@x
end
Here I assume the same primitives as Sussman does, though it wouldn’t be difficult to wrap Ruby’s built-in rand to return integers instead of floats. The important thing is that this function needs to hold onto the previously returned random value in order to provide the next.
Sussman states that without this impure rand function, it would be very difficult to decouple the cesaro function from the monte_carlo one. Without utilizing (re)assignment and mutation, we would have to write our estimation function as one giant blob:
def estimate_pi(trials)
iter = ->(trials, passed, x1, x2) do
if trials == 0
passed
else
x1_ = random_update(x2)
x2_ = random_update(x1_)
if x1.gcd(x2) == 1
iter.call(trials - 1, passed + 1, x1_, x2_)
else
iter.call(trials - 1, passed, x1_, x2_)
end
end
end
x1 = random_init
x2 = random_update(x1)
p = iter.call(trials, 0, x1, x2) / trials.to_f
Math.sqrt(6 / p)
end
Ouch.
It’s at this point Sussman stops, content with his justification for adding mutability to Lisp. I’d like to explore a bit further: what if remaining pure were non-negotiable? Are there other ways to make decoupled systems and elegant code without sacrificing purity?
RGen
Let’s start with a non-mutating random number generator:
class RGen
def initialize(seed = nil)
@seed = seed || random_init
end
def next
x = random_update(@seed)
[x, RGen.new(x)]
end
end
def rand(g)
g.next
end
This allows for the following implementation:
def estimate_pi(trials)
p = monte_carlo(trials) { |g| cesaro(g) }
Math.sqrt(6 / p)
end
def cesaro(g)
x1, g1 = rand(g)
x2, g2 = rand(g1)
[x1.gcd(x2) == 1, g2]
end
def monte_carlo(trials, &block)
iter = ->(trials, passed, g) do
if trials == 0
passed
else
ret, g_ = block.call(g)
if ret
iter.call(trials - 1, passed + 1, g_)
else
iter.call(trials - 1, passed, g_)
end
end
end
iter.call(trials, 0, RGen.new) / trials.to_f
end
We’ve moved out of the single monolithic function, which is a step in the right direction. The additional generator arguments being passed all over the place make for some readability problems though. The reason for that is a missing abstraction; one that’s difficult to model in Ruby. To clean this up further, we’ll need to move to a language where purity was in fact non-negotiable: Haskell.
In Haskell, the type signature of our current monte_carlo function would be:
monteCarlo :: Int -- number of trials
-> (RGen -> (Bool, RGen)) -- the experiment
-> Double -- result
Within monte_carlo, we need to repeatedly call the block with a fresh random number generator. Calling RGen#next gives us an updated generator along with the next random value, but that must happen within the iterator block. In order to get it out again and pass it into the next iteration, we need to return it. This is why cesaro has the type that it does:
cesaro :: RGen -> (Bool, RGen)
cesaro depends on some external state, so it accepts it as an argument. It also affects that state, so it must return it as part of its return value. monteCarlo is responsible for creating an initial state and “threading” it through repeated calls to the experiment given. Mutable state is “faked” by passing a return value as argument to each computation in turn.
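To make that concrete, here’s a sketch of what that explicit threading might look like if we wrote monteCarlo by hand in Haskell. It assumes an RGen type and a newRGen initializer (analogous to RGen.new), which aren’t part of the post:

monteCarlo :: Int -> (RGen -> (Bool, RGen)) -> Double
monteCarlo trials experiment = go trials 0 newRGen
  where
    go :: Int -> Int -> RGen -> Double
    go 0 passed _ = fromIntegral passed / fromIntegral trials
    go n passed g =
        -- Run the experiment, then hand the updated generator to the
        -- next iteration ourselves
        let (outcome, g') = experiment g
            passed'       = if outcome then passed + 1 else passed
        in go (n - 1) passed' g'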
You’ll also notice this is a similar type signature to our rand function:
rand :: RGen -> (Int, RGen)
This similarity and process is a generic concern which has nothing to do with Cesaro’s method or performing Monte Carlo tests. We should be able to leverage the similarities and separate this concern out of our main algorithm. Monadic state allows us to do exactly that.
RGenState
For the Haskell examples, I’ll be using System.Random.StdGen in place of the RGen class we’ve been working with so far. It is exactly like our RGen class above in that it can be initialized with some seed, and there is a random function with the type StdGen -> (Int, StdGen).
The abstract thing we’re lacking is a way to call those functions successively, passing the StdGen returned from one invocation as the argument to the next invocation, all the while being able to access that a (the random integer or experiment outcome) whenever needed. Haskell has just such an abstraction: it’s in Control.Monad.State.
First we’ll need some imports.
import System.Random
import Control.Monad.State
Notice that we have a handful of functions with a similar form. What Control.Monad.State provides is a type that looks awfully similar.
data State s a = State { runState :: (s -> (a, s)) }
Let’s declare a type synonym which fixes that s type variable to the state we care about: a random number generator.
type RGenState a = State StdGen a
By replacing the s in State with our StdGen type, we end up with a more concrete type that looks as if we had written this:
data RGenState a = RGenState
{ runState :: (StdGen -> (a, StdGen)) }
And then went on to write all the various instances that make this type useful. By using such a type synonym, we get all those instances and functions for free.
Our first example:
rand :: RGenState Int
rand = state random
We can “evaluate” this action with one of a number of functions provided by the library, all of which require some initial state. runState will literally just execute the function and return the result and the updated state (in case you missed it, it’s just the record accessor for the State type). evalState will execute the function, discard the updated state, and give us only the result. execState will do the inverse: execute the function, discard the result, and give us only the updated state.
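As a quick illustration, here’s each of the three applied to rand with a fixed seed (the helper names are mine, and the exact numbers will depend on your random library version):

valueAndState :: (Int, StdGen)
valueAndState = runState rand (mkStdGen 1)   -- the value and the updated generator

valueOnly :: Int
valueOnly = evalState rand (mkStdGen 1)      -- just the value

stateOnly :: StdGen
stateOnly = execState rand (mkStdGen 1)      -- just the updated generator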
We’ll be using evalState exclusively since we don’t care about how the random number generator ends up after these actions, only that it gets updated and passed along the way. Let’s wrap that up in a function that both provides the initial state and evaluates the action.
runRandom :: RGenState a -> a
runRandom f = evalState f (mkStdGen 1)
-- runRandom rand
-- => 7917908265643496962
Unfortunately, the result will be the same every time since we’re using a constant seed. You’ll see soon that this is an easy limitation to address after the fact.
With this bit of glue code in hand, we can re-write our program in a nice modular way without any actual mutable state or re-assignment.
estimatePi :: Int -> Double
estimatePi n = sqrt $ 6 / (monteCarlo n cesaro)
cesaro :: RGenState Bool
cesaro = do
x1 <- rand
x2 <- rand
return $ gcd x1 x2 == 1
monteCarlo :: Int -> RGenState Bool -> Double
monteCarlo trials experiment = runRandom $ do
outcomes <- replicateM trials experiment
return $ (length $ filter id outcomes) `divide` trials
where
divide :: Int -> Int -> Double
divide a b = fromIntegral a / fromIntegral b
Even with a constant seed, it works pretty well:
main = print $ estimatePi 1000
-- => 3.149183286488868
And For My Last Trick
It’s easy to fall into the trap of thinking that Haskell’s type system is limiting in some way. The monteCarlo function above can only work with random-number-based experiments? Pretty weak.
Consider the following refactoring:
estimatePi :: Int -> RGenState Double
estimatePi n = do
p <- monteCarlo n cesaro
return $ sqrt (6 / p)
cesaro :: RGenState Bool
cesaro = do
x1 <- rand
x2 <- rand
return $ gcd x1 x2 == 1
monteCarlo :: Monad m => Int -> m Bool -> m Double
monteCarlo trials experiment = do
outcomes <- replicateM trials experiment
return $ (length $ filter id outcomes) `divide` trials
where
divide :: Int -> Int -> Double
divide a b = fromIntegral a / fromIntegral b
main :: IO ()
main = print $ runRandom $ estimatePi 1000
The minor change made was moving the call to runRandom all the way up to main. This allows us to pass stateful computations throughout our application without ever caring about that state except at this highest level.
This would make it simple to add true randomness (which requires IO) by replacing the call to runRandom with something that pulls entropy in via IO rather than using mkStdGen.
runTrueRandom :: RGenState a -> IO a
runTrueRandom f = do
s <- newStdGen
return $ evalState f s
main = print =<< runTrueRandom (estimatePi 1000)
One could even do this conditionally so that your random-based computations became deterministic during tests.
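That conditional switch might look something like this; the flag and the function name are made up here, purely for illustration:

runConfigured :: Bool -> RGenState a -> IO a
runConfigured deterministic f
    | deterministic = return $ runRandom f  -- fixed seed, e.g. under test
    | otherwise     = runTrueRandom f       -- real entropy via IO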
Another important point here is that monteCarlo can now work with any Monad! This makes perfect sense: the purpose of this function is to run experiments and tally outcomes. The idea of an experiment only makes sense if there’s some outside force which might change the results from run to run, but who cares what that outside force is? Haskell don’t care. Haskell requires we only specify it as far as we need to: it’s some Monad m, nothing more.
This means we can run IO-based experiments via the Monte Carlo method with the same monteCarlo function just by swapping out the monad:
What if Cesaro claimed the probability that the current second is an even number is equal to 6/π²? Seems reasonable, let’s model it:
-- same code, different name / type
estimatePiIO :: Int -> IO Double
estimatePiIO n = do
p <- monteCarlo n cesaroIO
return $ sqrt (6 / p)
cesaroIO :: IO Bool
cesaroIO = do
t <- getCurrentTime -- getCurrentTime and utctDayTime come from Data.Time
return $ even (floor (utctDayTime t) :: Integer)
monteCarlo :: Monad m => Int -> m Bool -> m Double
monteCarlo trials experiment = -- doesn't change at all!
main :: IO ()
main = print =<< estimatePiIO 1000
I find it fascinating that this expressiveness, generality, and polymorphism can share the same space as the strictness and incredible safety of Haskell’s type system.
09 Feb 2014, tagged with haskell