Saturday, December 6, 2014

A very general API for relational joins

Maps and tuples are useful data types for modeling relational operations. For example, suppose we have the following table, indexed by the Id column:

| Id | First Name | Last Name |
|----|------------|-----------|
|  0 | Gabriel    | Gonzalez  |
|  1 | Oscar      | Boykin    |
|  2 | Edgar      | Codd      |

We can model that as a Map where the key is the Id column and the value is a tuple of FirstName and LastName:

import Data.Map (Map) -- from the `containers` library
import qualified Data.Map as Map

type Id        = Int
type FirstName = String
type LastName  = String

names :: Map Id (FirstName, LastName)
names = Map.fromList
    [ (0, ("Gabriel", "Gonzalez"))
    , (1, ("Oscar"  , "Boykin"  ))
    , (2, ("Edgar"  , "Codd"    ))
    ]

Now suppose we have another table containing Twitter handles, also indexed by Id:

| Id | Twitter Handle |
|----|----------------|
| 0  |  GabrielG439   |
| 1  |  posco         |
| 3  |  avibryant     |

We can also encode that as a Map:

type TwitterHandle = String

handles :: Map Id TwitterHandle
handles = Map.fromList
    [ (0, "GabrielG439")
    , (1, "posco"      )
    , (3, "avibryant"  )
    ]

One of the nice properties of Maps is that you can join them together:

import Control.Applicative

-- I'm using `join2` to avoid confusion with `Control.Monad.join`
join2 :: Map k v1 -> Map k v2 -> Map k (v1, v2)
join2 = ... -- Implementation elided for now
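
One plausible implementation (a sketch; note that it needs an Ord constraint on the key) uses Data.Map's intersectionWith, since an inner join keeps only the keys present in both maps:

join2 :: Ord k => Map k v1 -> Map k v2 -> Map k (v1, v2)
join2 = Map.intersectionWith (,)  -- pair up the values that share a key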

What if we could generalize join2 to work on types other than Map? Perhaps we could use the Applicative interface to implement join2:

join2 :: Applicative f => f a -> f b -> f (a, b)
join2 = liftA2 (,)

However, the Map type cannot implement Applicative in its current form. The reason why is that there is no sensible definition for pure:

pure :: v -> Map k v
pure = ???

This would require a Map that was defined for every key, which we cannot encode. Or can we?

Tables

Well, who says we need to use the Map type from containers? What if I were to encode my Map this way:

import Prelude hiding (lookup)

-- | A map encoded as a lookup function
newtype Table k v = Table { lookup :: k -> Maybe v }

-- | Encode a traditional map as a lookup function
from :: Ord k => Map k v -> Table k v
from m = Table (\k -> Map.lookup k m)

This new type of Map permits only a single operation: lookup. But because we constrain our API to this single operation, we can now implement the full Applicative interface:

instance Functor (Table k) where
    fmap f (Table g) = Table (\k -> fmap f (g k))
           -- Same as: Table (fmap (fmap f) g)

instance Applicative (Table k) where
    pure v            = Table (\k -> Just v)
            -- Same as: Table (pure (pure v))

    Table f <*> Table x = Table (\k -> f k <*> x k)
              -- Same as: Table (liftA2 (<*>) f x)

We can promote conventional Maps to this new Table type using the above from function:

names' :: Table Id (FirstName, LastName)
names' = from names

handles' :: Table Id TwitterHandle
handles' = from handles

... and now the more general Applicative join2 will work on these two tables:

>>> let table = join2 names' handles'
>>> :type table
table :: Table Id ((FirstName, LastName), TwitterHandle)
>>> lookup table 0
Just (("Gabriel","Gonzalez"),"GabrielG439")
>>> lookup table 1
Just (("Oscar","Boykin"),"posco")
>>> lookup table 2
Nothing

However, in its present form we can't dump the table's contents because we don't know which keys are present in the table. Let's fix that by adding an additional field to the Table type listing the keys. We will treat functions as being defined for all keys:

import Data.Set (Set)
import qualified Data.Set as Set 

data Keys k = All | Some (Set k)

-- We only use the semiring fragment of `Num` (0, 1, (+), and (*)),
-- so the remaining methods are deliberately left undefined
instance Ord k => Num (Keys k) where
    fromInteger 0         = Some Set.empty
    fromInteger n | n > 0 = All

    All     + _       = All
    _       + All     = All
    Some s1 + Some s2 = Some (Set.union s1 s2)

    All     * ks      = ks
    ks      * All     = ks
    Some s1 * Some s2 = Some (Set.intersection s1 s2)

-- | A map encoded as a lookup function
data Table k v = Table
    { keys   :: Keys k
    , lookup :: k -> Maybe v
    }

-- | Encode a traditional map as a lookup function
from :: Ord k => Map k v -> Table k v
from m = Table
    { keys   = Some (Set.fromList (Map.keys m))
    , lookup = \k -> Map.lookup k m
    }

Even after extending Table with keys we can still implement the Applicative interface:

instance Functor (Table k) where
    fmap f (Table ks g) = Table ks (fmap (fmap f) g)

instance Ord k => Applicative (Table k) where
    pure v =
        Table 1 (pure (pure v))

    Table s1 f <*> Table s2 x =
        Table (s1 * s2) (liftA2 (<*>) f x)

... and now we can add a Show instance, too!

instance (Show k, Show v) => Show (Table k v) where
    show (Table ks f) = case ks of
        All    -> "<function>"
        Some s -> unlines (do
            k <- Set.toList s
            let Just v = f k
            return (show (k, v)) )

Let's give it a test drive:

>>> names'
(0,("Gabriel","Gonzalez"))
(1,("Oscar","Boykin"))
(2,("Edgar","Codd"))

>>> handles'
(0,"GabrielG439")
(1,"posco")
(3,"avibryant")

>>> join2 names' handles'
(0,(("Gabriel","Gonzalez"),"GabrielG439"))
(1,(("Oscar","Boykin"),"posco"))

So far, so good!

Alternative

However, we need to support more than just inner joins. We'd also like to support left, right, and outer joins, too.

Conceptually, a left join is one in which values from the right table may be optionally present. One way we could implement this would be to define a function that converts a finite map to a function defined on all keys. This function will return Nothing for keys not present in the original finite map and Just for keys that were present:

optional :: Table k v -> Table k (Maybe v)
optional (Table ks f) =
    Table All (\k -> fmap Just (f k) <|> pure Nothing)

Then we could define leftJoin in terms of join2 and optional:

leftJoin :: Table k a -> Table k b -> Table k (a, Maybe b)
leftJoin t1 t2 = join2 t1 (optional t2)

However, if we try to compile the above code, the compiler will give us a really interesting error message:

Ambiguous occurrence ‘optional’
It could refer to either ‘Main.optional’,
                      or ‘Control.Applicative.optional’

Apparently, Control.Applicative has an optional function, too. Let's pause to check out the type of this mysterious function:

optional :: Alternative f => f v -> f (Maybe v)

Wow! That type signature is surprisingly similar to the one we wrote. In fact, if Table k implemented the Alternative interface, the types would match.

Alternative is a type class (also provided by Control.Applicative) that greatly resembles the Monoid type class:

class Applicative f => Alternative f where
    empty :: f a

    (<|>) :: f a -> f a -> f a

... and the core Alternative laws are identical to the Monoid laws:

x <|> empty = x

empty <|> x = x

(x <|> y) <|> z = x <|> (y <|> z)

Let's dig even deeper and see how optional uses this Alternative type class:

optional v = fmap Just v <|> pure Nothing

Even the implementation is eerily similar! This strongly suggests that we should find a way to make our Table type implement Alternative. Perhaps something like this would work:

instance Ord k => Alternative (Table k) where
    empty =
        Table 0 (pure empty)

    Table ks1 f1 <|> Table ks2 f2 =
        Table (ks1 + ks2) (liftA2 (<|>) f1 f2)
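
As an aside, this (<|>) behaves like a left-biased union of tables. A quick example of my own:

>>> from (Map.fromList [(0, "A")]) <|> from (Map.fromList [(0, "B"), (1, "C")])
(0,"A")
(1,"C")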

Compare this to our Applicative instance (duplicated here):

instance Ord k => Applicative (Table k) where
    pure v =
        Table 1 (pure (pure v))

    Table s1 f <*> Table s2 x =
        Table (s1 * s2) (liftA2 (<*>) f x)

The Alternative instance mirrors our Applicative instance except that we:

  • replace pure v with empty
  • replace (<*>) with (<|>)
  • replace 1 with 0
  • replace (*) with (+)

... and what's really neat is that now the optional function from Control.Applicative behaves just like the original optional function we wrote. (Exercise: Use equational reasoning to verify this)

Derived joins

Armed with this Alternative instance, we can now implement leftJoin in terms of the optional provided by Control.Applicative:

leftJoin t1 t2 = join2 t1 (optional t2)

Sure enough, it works:

>>> leftJoin names' handles'
(0,(("Gabriel","Gonzalez"),Just "GabrielG439"))
(1,(("Oscar","Boykin"),Just "posco"))
(2,(("Edgar","Codd"),Nothing)

Let's check out the type that the compiler infers for leftJoin:

>>> :type leftJoin
leftJoin :: Alternative f => f a -> f b -> f (a, Maybe b)

Notice how there's no longer anything Table-specific about leftJoin. It works for anything that implements the Alternative interface. I could leftJoin two Maybes if I really wanted to:

>>> leftJoin (Just 1) (Just 2)
Just (1,Just 2)
>>> leftJoin (Just 1) Nothing
Just (1,Nothing)
>>> leftJoin Nothing (Just 1)
Nothing

... or two lists:

>>> leftJoin [1, 2] [3, 4]
[(1,Just 3),(1,Just 4),(1,Nothing),(2,Just 3),(2,Just 4),(2,Nothing)]

In fact, I don't even really need specialized leftJoin or rightJoin functions. optional is sufficiently light-weight that I could inline a right join on the fly:

>>> join2 (optional names') handles'
(0,(Just ("Gabriel","Gonzalez"),"GabrielG439"))
(1,(Just ("Oscar","Boykin"),"posco"))
(3,(Nothing,"avibryant"))

What happens if I try to do an "outer join"?

>>> -- DISCLAIMER: Technically not the same as an SQL outer join
>>> let o = join2 (optional names') (optional handles')
>>> o
<function>

The above "outer join" is defined for all keys (because both sides are optional), so we get back a function! While we can't list the Table (because it's conceptually infinite), we can still perform lookups on it:

>>> lookup o 0
Just (Just ("Gabriel","Gonzalez"),Just "GabrielG439")
>>> lookup o 2
Just (Just ("Edgar","Codd"),Nothing)
>>> lookup o 3
Just (Nothing,Just "avibryant")
>>> lookup o 4
Just (Nothing,Nothing)

... and if we were to join our "infinite" table against a finite table, we get back a finite table (Exercise: Try it! Define a new finite table to join against o and see what happens)
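
For example, joining o against a hypothetical ages table (my own invention; one possible answer to the exercise) collapses the conceptually infinite table back down to a finite one:

>>> let ages = from (Map.fromList [(0, 33), (3, 35)])
>>> join2 o ages
(0,((Just ("Gabriel","Gonzalez"),Just "GabrielG439"),33))
(3,((Nothing,Just "avibryant"),35))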

What's nice about optional is that we can easily left-join or right-join in multiple tables at a time. If I had four tables of types:

t1 :: Table k a
t2 :: Table k b
t3 :: Table k c
t4 :: Table k d

... I could left join t2, t3, and t4 into t1 by just writing:

liftA4 (,,,) t1 (optional t2) (optional t3) (optional t4)
    :: Table k (a, Maybe b, Maybe c, Maybe d)

Now that I think about it, I don't even really need to provide join2/join3/join4/join5 since they are not much shorter than using the liftA family of functions in Control.Applicative:

-- Exercise: What would `join1` be?
join2 = liftA2 (,)
join3 = liftA3 (,,)
join4 = liftA4 (,,,)
join5 = liftA5 (,,,,)
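
One caveat: Control.Applicative only exports up to liftA3, so we would have to define the higher arities ourselves. Fortunately they follow a mechanical pattern (a quick sketch):

liftA4 :: Applicative f => (a -> b -> c -> d -> e) -> f a -> f b -> f c -> f d -> f e
liftA4 f a b c d = f <$> a <*> b <*> c <*> d

liftA5 :: Applicative f => (a -> b -> c -> d -> e -> r) -> f a -> f b -> f c -> f d -> f e -> f r
liftA5 f a b c d e = f <$> a <*> b <*> c <*> d <*> e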

In other words, I can implement almost any imaginable join just by using liftA{n} and some permutation of optionals. I don't even know what I'd call this join:

liftA5 (,,,,) t1 (optional t2) t3 (optional t4) t5

... but the beauty is that I don't have to give a name for it. I can easily write anonymous joins on the fly using the Control.Applicative module. Moreover, the above code will work for anything that implements the Alternative interface.

Conclusion

Control.Applicative provides a very general API for relational joins: the Alternative type class (which includes Applicative, since Applicative is a super-class of Alternative). Perhaps Control.Applicative could be improved slightly by providing the join{n} family of functions listed above, but it's still highly usable in its present state.

Note that this trick only works for relational abstractions embedded within Haskell. This API can be generalized for external relational data stores (e.g. Postgres), which I will cover in a subsequent post.

Sunday, November 23, 2014

How to build library-agnostic streaming sources

The Haskell ecosystem has numerous libraries for effectful stream programming, including, but not limited to:

  • List
  • conduit
  • enumerator
  • io-streams
  • iterIO
  • iteratee
  • list-t
  • logict
  • machines
  • pipes

Unfortunately, this poses a problem for library writers. Which streaming library should they pick when providing effectful streaming components?

Sometimes the correct answer is: none of the above! We can often build streaming and effectful generators using only the base and transformers libraries!

The trick is to build polymorphic generators using only the MonadPlus and MonadTrans type classes. These generators can then be consumed by any library that provides an implementation of ListT that implements MonadPlus and MonadTrans.

MonadPlus

I like to think of MonadPlus as the "list building" type class. This is because you can assemble a list using return, mzero, and mplus:

>>> import Control.Monad (MonadPlus(..))
>>> mzero :: [Int]
[]
>>> return 1 :: [Int]
[1]
>>> return 1 `mplus` return 2 :: [Int]  -- [1] ++ [2]
[1,2]

In other words, mzero is analogous to [], mplus is analogous to (++), and return builds a singleton list.

However, many things other than lists implement MonadPlus, including every implementation of ListT in the wild. Therefore, if we build collections using MonadPlus operations these collections will type-check as ListT streams as well.

Let's provide the following helper function to convert a list to this more general MonadPlus type:

select :: MonadPlus m => [a] -> m a
select  []    = mzero
select (x:xs) = return x `mplus` select xs

-- or: select = foldr (\x m -> return x `mplus` m) mzero
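
For ordinary lists, select is effectively the identity conversion, since each element just becomes its own branch:

>>> select [1, 2, 3] :: [Int]
[1,2,3]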

Note that this select function has some nice mathematical properties:

select (xs `mplus` ys) = (select xs) `mplus` (select ys)
select  mzero          = mzero

-- This assumes the distributivity law for `MonadPlus`
select . (f >=> g) = (select . f) >=> (select . g)
select . return    = return

MonadTrans

Using select and liftIO (from MonadIO), we can build list comprehensions that interleave effects, like this:

example :: (MonadIO m, MonadPlus m) => m ()
example = do
    x <- select [1, 2, 3]
    liftIO (putStrLn ("x = " ++ show x))
    y <- select [4, 5, 6]
    liftIO (putStrLn ("y = " ++ show y))

You can read the above code as saying:

  • Let x range from 1 to 3
  • Print x every time it selects a new value
  • For each value of x, let y range from 4 to 6
  • Print y every time it selects a new value

Notice how example doesn't depend on any particular streaming library, so we can run it with diverse ListT implementations, all of which implement MonadPlus and MonadIO:

>>> Pipes.runListT example  -- This requires `pipes-4.1.4`
x = 1
y = 4
y = 5
y = 6
x = 2
y = 4
y = 5
y = 6
x = 3
y = 4
y = 5
y = 6
>>> _ <- Control.Monad.Logic.observeAllT example
<Exact same output>
>>> _ <- ListT.toList example
<Exact same output>

However, we can use this trick for more than just list comprehensions. We can build arbitrary lazy and effectful streams this way!

Generators

Here's an example of a generator that lazily emits lines read from standard input:

import Control.Monad (MonadPlus(..))
import Control.Monad.IO.Class (MonadIO(..))
import Control.Monad.Trans.Class (MonadTrans(..))
import System.IO (isEOF)

stdinLn :: (MonadIO m, MonadPlus m) => m String
stdinLn = do
    eof <- liftIO isEOF
    if eof
        then mzero
        else liftIO getLine `mplus` stdinLn

You can read the above code as saying:

  • Check if we are at the end of the file
  • If we're done, then return an empty list
  • If we're not done, prepend a getLine onto a recursive call to stdinLn

We can prove this works by writing a program that forwards these lines to standard output:

echo :: (MonadIO m, MonadPlus m) => m ()
echo = do
    str <- stdinLn
    liftIO (putStrLn str)

Now we can run echo using any of our favorite ListT implementations and it will do the right thing, streaming lines lazily from standard input to standard output in constant space:

>>> Pipes.runListT echo
Test<Enter>
Test
ABC<Enter>
ABC
42<Enter>
42
<Ctrl-D>
>>>

The exception is the transformers library, whose ListT implementation does not stream or run in constant space.

Combinators

We can also implement lazy variations on Control.Monad combinators using this interface.

For example, we can implement a lazy variation on replicateM using just select and replicate:

replicateM' :: MonadPlus m => Int -> m a -> m a
replicateM' n m = do
    m' <- select (replicate n m)
    m'

-- or: replicateM' n = join . select . replicate n

We can use this lazy replicateM' to lazily echo 10 lines from standard input to standard output:

example :: (MonadIO m, MonadPlus m) => m ()
example = do
    str <- replicateM' 10 (liftIO getLine)
    liftIO (putStrLn str)

We can also implement a lazy mapM and forM, too, except now their implementations are so trivial that they don't even deserve their own functions:

mapM' :: Monad m => (a -> m b) -> m a -> m b
mapM' = (=<<)

forM' :: Monad m => m a -> (a -> m b) -> m b
forM' = (>>=)

example :: (MonadIO m, MonadPlus m) => m ()
example = mapM' (liftIO . print) (replicateM' 10 (liftIO getLine))

Similarly, a lazy sequence just becomes join.
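
In code, that remark reads as follows (a one-line sketch of my interpretation):

import Control.Monad (join)

sequence' :: Monad m => m (m a) -> m a
sequence' = join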

Conclusion

The following streaming libraries already provide their own implementation of ListT compatible with the above trick:

  • List
  • list-t
  • logict
  • pipes

The other streaming libraries do not currently provide a ListT implementation, but there is no reason why they couldn't! For example, conduit could implement an internal ListT type of its own and use that as an intermediate type for converting the above abstraction to the public Source type:

convert :: Monad m
        => Data.Conduit.Internal.ListT m a -> Source m a

This polymorphic API obviously does not capture all possible streaming patterns. For example, you cannot implement something like take using this API. However, I suspect that a significant number of streaming components can be written using this dependency-free interface.

Edit: Thank you to Twan van Laarhoven, who pointed out that you can sometimes use MonadIO instead of MonadTrans, which produces nicer constraints. I updated this article to incorporate his suggestion.

Sunday, October 26, 2014

How to desugar Haskell code

Haskell's core language is very small, and most Haskell code desugars to either:

  • lambdas / function application,
  • algebraic data types / case expressions,
  • recursive let bindings,
  • type classes and specialization, or
  • foreign function calls.

Once you understand those concepts you have a foundation for understanding everything else within the language. As a result, the language feels very small and consistent.

I'll illustrate how many higher-level features desugar to the same set of lower-level primitives.

if

if is equivalent to a case statement:

if b then e1 else e2

-- ... is equivalent to:
case b of
    True  -> e1
    False -> e2

This works because Bools are defined within the language:

data Bool = False | True

Multi-argument lambdas

Lambdas of multiple arguments are equivalent to nested lambdas of single arguments:

\x y z -> e

-- ... is equivalent to:
\x -> \y -> \z -> e

Functions

Functions are equivalent to lambdas:

f x y z = e

-- ... is equivalent to:
f = \x y z -> e

-- ... which in turn desugars to:
f = \x -> \y -> \z -> e

As a result, all functions of multiple arguments are really just nested functions of one argument each. This trick is known as "currying".

Infix functions

You can write functions of at least two arguments in infix form using backticks:

x `f` y

-- ... desugars to:
f x y

Operators

Operators are just infix functions of two arguments that don't need backticks. You can write them in prefix form by surrounding them with parentheses:

x + y

-- ... desugars to:
(+) x y

The compiler distinguishes operators from functions by reserving a special set of punctuation characters exclusively for operators.

Operator parameters

The parentheses trick for operators works in other contexts, too. You can bind parameters using operator-like names if you surround them with parentheses:

let f (%) x y = x % y
in  f (*) 1 2

-- ... desugars to:
(\(%) x y -> x % y) (*) 1 2

-- ... reduces to:
1 * 2

Operator sections

You can partially apply operators to just one argument using a section:

(1 +)

-- desugars to:
\x -> 1 + x

This works the other way, too:

(+ 1)

-- desugars to:
\x -> x + 1

This also works with infix functions surrounded by backticks:

(`f` 1)

-- desugars to:
\x -> x `f` 1

-- desugars to:
\x -> f x 1

Pattern matching

Pattern matching on constructors desugars to case statements:

f (Left  l) = eL
f (Right r) = eR

-- ... desugars to:
f x = case x of
    Left  l -> eL
    Right r -> eR

Pattern matching on numeric or string literals desugars to equality tests:

f 0 = e0
f _ = e1

-- ... desugars to:
f x = if x == 0 then e0 else e1

-- ... desugars to:
f x = case x == 0 of
    True  -> e0
    False -> e1

Non-recursive let / where

Non-recursive lets are equivalent to lambdas:

let x = y in z

-- ... is equivalent to:
(\x -> z) y

Same thing for where, which is identical in purpose to let:

z where x = y

-- ... is equivalent to:
(\x -> z) y

Actually, that's not quite true, because of let generalization, but it's close to the truth.

Recursive let / where cannot be desugared like this and should be treated as a primitive.

Top-level functions

Multiple top-level functions can be thought of as one big recursive let binding:

f0 x0 = e0

f1 x1 = e1

main = e2


-- ... is equivalent to:
main = let f0 x0 = e0
           f1 x1 = e1
       in  e2

In practice, Haskell does not desugar them like this, but it's a useful mental model.

Imports

Importing modules just adds more top-level functions. Unlike in some languages, importing a module has no side effects, unless you use Template Haskell.

Type-classes

Type classes desugar to records of functions under the hood where the compiler implicitly threads the records throughout the code for you.

class Monoid m where
    mappend :: m -> m -> m
    mempty :: m

instance Monoid Int where
    mappend = (+)
    mempty  = 0

f :: Monoid m => m -> m
f x = mappend x x


-- ... desugars to:
data Monoid m = Monoid
    { mappend :: m -> m -> m
    , mempty  :: m
    }

intMonoid :: Monoid Int
intMonoid = Monoid
    { mappend = (+)
    , mempty  = 0
    }

f :: Monoid m -> m -> m
f (Monoid p z) x = p x x

... and specializing a function to a particular type class just supplies the function with the appropriate record:

g :: Int -> Int
g = f

-- ... desugars to:
g = f intMonoid

Two-line do notation

A two-line do block desugars to the infix (>>=) operator:

do x <- m
   e

-- ... desugars to:
m >>= (\x ->
e )

One-line do notation

For a one-line do block, you can just remove the do:

main = do putStrLn "Hello, world!"

-- ... desugars to:
main = putStrLn "Hello, world!"

Multi-line do notation

do notation of more than two lines is equivalent to multiple nested dos:

do x <- mx
   y <- my
   z

-- ... is equivalent to:
do x <- mx
   do y <- my
      z

-- ... desugars to:
mx >>= (\x ->
my >>= (\y ->
z ))

let in do notation

Non-recursive let in a do block desugars to a lambda:

do let x = y
   z

-- ... desugars to:
(\x -> z) y

ghci

The ghci interactive REPL is analogous to one big do block (with lots and lots of caveats):

$ ghci
>>> str <- getLine
>>> let str' = str ++ "!"
>>> putStrLn str'

-- ... is equivalent to the following Haskell program:
main = do
    str <- getLine
    let str' = str ++ "!"
    putStrLn str'

-- ... desugars to:
main = do
    str <- getLine
    do let str' = str ++ "!"
       putStrLn str'

-- ... desugars to:
main =
    getLine >>= (\str ->
    do let str' = str ++ "!"
       putStrLn str' )

-- ... desugars to:
main =
    getLine >>= (\str ->
    (\str' -> putStrLn str') (str ++ "!") )

-- ... reduces to:
main =
    getLine >>= (\str ->
    putStrLn (str ++ "!") )

List comprehensions

List comprehensions are equivalent to do notation:

[ (x, y) | x <- mx, y <- my ]

-- ... is equivalent to:

do x <- mx
   y <- my
   return (x, y)

-- ... desugars to:
mx >>= (\x -> my >>= \y -> return (x, y))

-- ... specialization to lists:
concatMap (\x -> concatMap (\y -> [(x, y)]) my) mx

The real desugared code is actually more efficient, but still equivalent.

The MonadComprehensions language extension generalizes list comprehension syntax to work with any Monad. For example, you can write an IO comprehension:

>>> :set -XMonadComprehensions
>>> [ (str1, str2) | str1 <- getLine, str2 <- getLine ]
Line1<Enter>
Line2<Enter>
("Line1", "Line2")

Numeric literals

Integer literals are polymorphic by default and desugar to a call to fromInteger on a concrete Integer:

1 :: Num a => a

-- desugars to:
fromInteger (1 :: Integer)

Floating point literals behave the same way, except they desugar to fromRational:

1.2 :: Fractional a => a

-- desugars to:
fromRational (1.2 :: Rational)

IO

You can think of IO and all foreign function calls as analogous to building up a syntax tree describing all planned side effects:

main = do
    str <- getLine
    putStrLn str
    return 1


-- ... is analogous to:
data IO r
    = PutStrLn String (IO r)
    | GetLine (String -> IO r)
    | Return r

instance Monad IO where
    return r = Return r

    (PutStrLn str io) >>= f = PutStrLn str (io >>= f)
    (GetLine k      ) >>= f = GetLine (\str -> k str >>= f)
    Return r          >>= f = f r

main = do
    str <- getLine
    putStrLn str
    return 1

-- ... desugars and reduces to:
main =
    GetLine (\str ->
    PutStrLn str (
    Return 1 ))

This mental model is actually very different from how IO is implemented under the hood, but it works well for building an initial intuition for IO.

For example, one intuition you can immediately draw from this mental model is that order of evaluation in Haskell has no effect on order of IO effects, since evaluating the syntax tree does not actually interpret or run the tree. The only way you can actually animate the syntax tree is to define it to be equal to main.

Conclusion

I haven't covered everything, but hopefully that gives some idea of how Haskell translates higher level abstractions into lower-level abstractions. By keeping the core language small, Haskell can ensure that language features play nicely with each other.

Note that Haskell also has a rich ecosystem of experimental extensions, too. Many of these are experimental because each new extension must be vetted to understand how it interacts with existing language features.

Friday, September 12, 2014

Morte: an intermediate language for super-optimizing functional programs

The Haskell language provides the following guarantee (with caveats): if two programs are equal according to equational reasoning then they will behave the same. On the other hand, Haskell does not guarantee that equal programs will generate identical performance. Consequently, Haskell library writers must employ rewrite rules to ensure that their abstractions do not interfere with performance.

Now suppose there were a hypothetical language with a stronger guarantee: if two programs are equal then they generate identical executables. Such a language would be immune to abstraction: no matter how many layers of indirection you might add, the binary size and runtime performance would be unaffected.

Here I will introduce such an intermediate language named Morte that obeys this stronger guarantee. I have not yet implemented a back-end code generator for Morte, but I wanted to pause to share what I have completed so far because Morte uses several tricks from computer science that I believe deserve more attention.

Morte is nothing more than a bare-bones implementation of the calculus of constructions, which is a specific type of lambda calculus. The only novelty is how I intend to use this lambda calculus: as a super-optimizer.

Normalization

The typed lambda calculus possesses a useful property: every term in the lambda calculus has a unique normal form if you beta-reduce everything. If you're new to lambda calculus, normalizing an expression equates to indiscriminately inlining every function call.
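
For example (a tiny illustration of my own, in Haskell-like syntax with (+) treated as a free variable):

(\f -> \x -> f (f x)) (\y -> y + 1) 0

-- beta-reduce (inline `f`):
(\x -> (\y -> y + 1) ((\y -> y + 1) x)) 0

-- beta-reduce (inline `x`, then each `y`), reaching the normal form:
(0 + 1) + 1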

What if we built a programming language whose intermediate language was lambda calculus? What if optimization was just normalization of lambda terms (i.e. indiscriminate inlining)? If so, then we could abstract freely, knowing that while compile times might increase, our final executable would never change.

Recursion

Normally you would not want to inline everything because infinitely recursive functions would become infinitely large expressions. Fortunately, we can often translate recursive code to non-recursive code!

I'll demonstrate this trick first in Haskell and then in Morte. Let's begin from the following recursive List type along with a recursive map function over lists:

import Prelude hiding (map, foldr)

data List a = Cons a (List a) | Nil

example :: List Int
example = Cons 1 (Cons 2 (Cons 3 Nil))

map :: (a -> b) -> List a -> List b
map f  Nil       = Nil
map f (Cons a l) = Cons (f a) (map f l)

-- Argument order intentionally switched
foldr :: List a -> (a -> x -> x) -> x -> x
foldr  Nil       c n = n
foldr (Cons a l) c n = c a (foldr l c n)

result :: Int
result = foldr (map (+1) example) (+) 0

-- result = 9

Now imagine that we disable all recursion in Haskell: no more recursive types and no more recursive functions. Now we must reject the above program because:

  • the List data type definition recursively refers to itself

  • the map and foldr functions recursively refer to themselves

Can we still encode lists in a non-recursive dialect of Haskell?

Yes, we can!

-- This is a valid Haskell program

{-# LANGUAGE RankNTypes #-}

import Prelude hiding (map, foldr)

type List a = forall x . (a -> x -> x) -> x -> x

example :: List Int
example = \cons nil -> cons 1 (cons 2 (cons 3 nil))

map :: (a -> b) -> List a -> List b
map f l = \cons nil -> l (\a x -> cons (f a) x) nil

foldr :: List a -> (a -> x -> x) -> x -> x
foldr l = l

result :: Int
result = foldr (map (+ 1) example) (+) 0

-- result = 9

Carefully note that:

  • List is no longer defined recursively in terms of itself

  • map and foldr are no longer defined recursively in terms of themselves

Yet, we somehow managed to build a list, map a function over the list, and fold the list, all without ever using recursion! We do this by encoding the list as a fold, which is why foldr became the identity function.

This trick works for more than just lists. You can take any recursive data type and mechanically transform the type into a fold and transform functions on the type into functions on folds. If you want to learn more about this trick, the specific name for it is "Boehm-Berarducci encoding". If you are curious, this in turn is equivalent to an even more general concept from category theory known as "F-algebras", which let you encode inductive things in a non-inductive way.
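
For example, applying the same transformation to the recursive type data Nat = Succ Nat | Zero gives the encoding we will reuse later in this post (a Haskell sketch of my own):

{-# LANGUAGE RankNTypes #-}

type Nat = forall x . (x -> x) -> x -> x  -- a Nat is its own fold

zero :: Nat
zero = \_succ z -> z

suc :: Nat -> Nat  -- "successor", named to avoid clashing with anything
suc n = \succ z -> succ (n succ z)

-- three = suc (suc (suc zero)) applies `succ` three times to `z`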

Non-recursive code greatly simplifies equational reasoning. For example, we can easily prove that we can optimize map id l to l:

map id l

-- Inline: map f l = \cons nil -> l (\a x -> cons (f a) x) nil
= \cons nil -> l (\a x -> cons (id a) x) nil

-- Inline: id x = x
= \cons nil -> l (\a x -> cons a x) nil

-- Eta-reduce
= \cons nil -> l cons nil

-- Eta-reduce
= l

Note that we did not need to use induction to prove this optimization because map is no longer recursive. The optimization became downright trivial, so trivial that we can automate it!

Morte optimizes programs using this same simple scheme:

  • Beta-reduce everything (equivalent to inlining)
  • Eta-reduce everything

To illustrate this, I will desugar our high-level Haskell code to the calculus of constructions. This desugaring process is currently manual (and tedious), but I plan to automate this, too, by providing a front-end high-level language similar to Haskell that compiles to Morte:

-- mapid.mt

(    \(List : * -> *)
->   \(  map
     :   forall (a : *)
     ->  forall (b : *)
     -> (a -> b) -> List a -> List b
     )
->   \(id : forall (a : *) -> a -> a)

    ->   \(a : *) -> map a a (id a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil : x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- id
(\(a : *) -> \(va : a) -> va)

This line of code is the "business end" of the program:

\(a : *) -> map a a (id a)

The extra 'a' business is because in any polymorphic lambda calculus you explicitly accept polymorphic types as arguments and specialize functions by applying them to types. Higher-level functional languages like Haskell or ML use type inference to automatically infer and supply type arguments when possible.

We can compile this program using the morte executable, which accepts a Morte program on stdin, writes the program's type to stderr, and writes the optimized program to stdout:

$ morte < mapid.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a 
→ x → x) → x → x

λ(a : *) → λ(l : ∀(x : *) → (a → x → x) → x → x) → l

The first line is the type, which is a desugared form of:

forall a . List a -> List a

The second line is the program, which is the identity function on lists. Morte optimized away the map completely, the same way we did by hand.

Morte also optimized away the rest of the code, too. Dead-code elimination is just an emergent property of Morte's simple optimization scheme.

Equality

We could double-check our answer by asking Morte to optimize the identity function on lists:

-- idlist.mt

(    \(List : * -> *)
->   \(id   : forall (a : *) -> a -> a)

    ->   \(a : *) -> id (List a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- id
(\(a : *) -> \(va : a) -> va)

Sure enough, Morte outputs an alpha-equivalent result (meaning the same up to variable renaming):

$ ~/.cabal/bin/morte < idlist.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a 
→ x → x) → x → x

λ(a : *) → λ(va : ∀(x : *) → (a → x → x) → x → x) → va

We can even use the morte library to mechanically check if two Morte expressions are alpha-, beta-, and eta-equivalent. We can parse our two Morte files into Morte's Expr type and then use the Eq instance for Expr to test for equivalence:

$ ghci
Prelude> import qualified Data.Text.Lazy.IO as Text
Prelude Text> txt1 <- Text.readFile "mapid.mt"
Prelude Text> txt2 <- Text.readFile "idlist.mt"
Prelude Text> import Morte.Parser (exprFromText)
Prelude Text Morte.Parser> let e1 = exprFromText txt1
Prelude Text Morte.Parser> let e2 = exprFromText txt2
Prelude Text Morte.Parser> import Control.Applicative (liftA2)
Prelude Text Morte.Parser Control.Applicative> liftA2 (==) e1 e2
Right True
$ -- `Right` means both expressions parsed successfully
$ -- `True` means they are alpha-, beta-, and eta-equivalent

We can use this to mechanically verify that two Morte programs optimize to the same result.

Compile-time computation

Morte can compute as much (or as little) at compile time as you want. The more information you encode directly within lambda calculus, the more compile-time computation Morte will perform for you. For example, if we translate our Haskell List code entirely to lambda calculus, then Morte will statically compute the result at compile time.

-- nine.mt

(   \(Nat : *)
->  \(zero : Nat)
->  \(one : Nat)
->  \((+) : Nat -> Nat -> Nat)
->  \((*) : Nat -> Nat -> Nat)
->  \(List : * -> *)
->  \(Cons : forall (a : *) -> a -> List a -> List a)
->  \(Nil  : forall (a : *)                -> List a)
->  \(  map
    :   forall (a : *) -> forall (b : *)
    ->  (a -> b) -> List a -> List b
    )
->  \(  foldr
    :   forall (a : *)
    ->  List a
    ->  forall (r : *)
    ->  (a -> r -> r) -> r -> r
    )
->  (    \(two   : Nat)
    ->   \(three : Nat)
    ->   (    \(example : List Nat)

         ->   foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero
         )

         -- example
         (Cons Nat one (Cons Nat two (Cons Nat three (Nil Nat))))
    )

    -- two
    ((+) one one)

    -- three
    ((+) one ((+) one one))
)

-- Nat
(   forall (a : *)
->  (a -> a)
->  a
->  a
)

-- zero
(   \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  Zero
)

-- one
(   \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  Succ Zero
)

-- (+)
(   \(m : forall (a : *) -> (a -> a) -> a -> a)
->  \(n : forall (a : *) -> (a -> a) -> a -> a)
->  \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  m a Succ (n a Succ Zero)
)

-- (*)
(   \(m : forall (a : *) -> (a -> a) -> a -> a)
->  \(n : forall (a : *) -> (a -> a) -> a -> a)
->  \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  m a (n a Succ) Zero
)

-- List
(   \(a : *)
->  forall (x : *)
->  (a -> x -> x)  -- Cons
->  x              -- Nil
->  x
)

-- Cons
(   \(a : *)
->  \(va  : a)
->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : a -> x -> x)
->  \(Nil  : x)
->  Cons va (vas x Cons Nil)
)

-- Nil
(   \(a : *)
->  \(x : *)
->  \(Cons : a -> x -> x)
->  \(Nil  : x)
->  Nil
)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil : x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- foldr
(   \(a : *)
->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
->  vas
)

The relevant line is:

foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero

If you remove the type-applications to Nat, this parallels our original Haskell example. We can then evaluate this expression at compile time:

$ morte < nine.mt
∀(a : *) → (a → a) → a → a

λ(a : *) → λ(Succ : a → a) → λ(Zero : a) → Succ (Succ (Succ 
(Succ (Succ (Succ (Succ (Succ (Succ Zero))))))))

Morte reduces our program to a Church-encoded nine.
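
If we eventually want a machine integer, a Church-encoded numeral like this is easy to decode in Haskell (a sketch of my own, using the same Nat encoding):

{-# LANGUAGE RankNTypes #-}

toInt :: (forall a . (a -> a) -> a -> a) -> Int
toInt n = n (+ 1) 0  -- apply the successor function n times to 0

-- toInt nine = 9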

Run-time computation

Morte does not force you to compute everything using lambda calculus at compile time. Suppose that we wanted to use machine arithmetic at run-time instead. We can do this by parametrizing our program on:

  • the Int type,
  • operations on Ints, and
  • any integer literals we use

We accept these "foreign imports" as ordinary arguments to our program:

-- foreign.mt

-- Foreign imports
    \(Int : *)                      -- Foreign type
->  \((+) : Int -> Int -> Int)      -- Foreign function
->  \((*) : Int -> Int -> Int)      -- Foreign function
->  \(lit@0 : Int)  -- Literal "1"  -- Foreign data
->  \(lit@1 : Int)  -- Literal "2"  -- Foreign data
->  \(lit@2 : Int)  -- Literal "3"  -- Foreign data
->  \(lit@3 : Int)  -- Literal "1"  -- Foreign data
->  \(lit@4 : Int)  -- Literal "0"  -- Foreign data

-- The rest is compile-time lambda calculus
->  (   \(List : * -> *)
    ->  \(Cons : forall (a : *) -> a -> List a -> List a)
    ->  \(Nil  : forall (a : *)                -> List a)
    ->  \(  map
        :   forall (a : *)
        ->  forall (b : *)
        ->  (a -> b) -> List a -> List b
        )
    ->  \(  foldr
        :   forall (a : *)
        ->  List a
        ->  forall (r : *)
        ->  (a -> r -> r) -> r -> r
        )
        ->   (    \(example : List Int)

             ->   foldr Int (map Int Int ((+) lit@3) example) Int (+) lit@4
             )

             -- example
             (Cons Int lit@0 (Cons Int lit@1 (Cons Int lit@2 (Nil Int))))
    )

    -- List
    (   \(a : *)
    ->  forall (x : *)
    ->  (a -> x -> x)  -- Cons
    ->  x              -- Nil
    ->  x
    )

    -- Cons
    (   \(a : *)
    ->  \(va  : a)
    ->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
    ->  \(x : *)
    ->  \(Cons : a -> x -> x)
    ->  \(Nil  : x)
    ->  Cons va (vas x Cons Nil)
    )

    -- Nil
    (   \(a : *)
    ->  \(x : *)
    ->  \(Cons : a -> x -> x)
    ->  \(Nil  : x)
    ->  Nil
    )

    -- map
    (   \(a : *)
    ->  \(b : *)
    ->  \(f : a -> b)
    ->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
    ->  \(x : *)
    ->  \(Cons : b -> x -> x)
    ->  \(Nil : x)
    ->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
    )

    -- foldr
    (   \(a : *)
    ->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
    ->  vas
    )

We can use Morte to optimize the above program and Morte will reduce the program to nothing but foreign types, operations, and values:

$ morte < foreign.mt
∀(Int : *) → (Int → Int → Int) → (Int → Int → Int) → Int →
Int → Int → Int → Int → Int

λ(Int : *) → λ((+) : Int → Int → Int) → λ((*) : Int → Int → 
Int) → λ(lit : Int) → λ(lit@1 : Int) → λ(lit@2 : Int) → 
λ(lit@3 : Int) → λ(lit@4 : Int) → (+) ((+) lit@3 lit) ((+) 
((+) lit@3 lit@1) ((+) ((+) lit@3 lit@2) lit@4))

If you study that closely, Morte adds lit@3 (the "1" literal) to each literal of the list and then adds them up. We can then pass this foreign syntax tree to our machine arithmetic backend to transform those foreign operations to efficient operations.

Morte lets you choose how much information you want to encode within lambda calculus. The more information you encode in lambda calculus the more Morte can optimize your program, but the slower your compile times will get, so it's a tradeoff.

Corecursion

Corecursion is the dual of recursion. Where recursion works on finite data types, corecursion works on potentially infinite data types. An example would be the following infinite Stream in Haskell:

data Stream a = Cons a (Stream a)

numbers :: Stream Int
numbers = go 0
  where
    go n = Cons n (go (n + 1))

-- numbers = Cons 0 (Cons 1 (Cons 2 (...

map :: (a -> b) -> Stream a -> Stream b
map f (Cons a l) = Cons (f a) (map f l)

example :: Stream Int
example = map (+ 1) numbers

-- example = Cons 1 (Cons 2 (Cons 3 (...

Again, pretend that we disable any function from referencing itself so that the above code becomes invalid. This time we cannot reuse the same trick from the previous sections because we cannot encode numbers as a fold without it referencing itself. Try this if you don't believe me.

However, we can still encode corecursive things in a non-corecursive way. This time, we encode our Stream type as an unfold instead of a fold:

-- This is also valid Haskell code

{-# LANGUAGE ExistentialQuantification #-}

data Stream a = forall s . MkStream
    { seed :: s
    , step :: s -> (a, s)
    }

numbers :: Stream Int
numbers = MkStream 0 (\n -> (n, n + 1))

map :: (a -> b) -> Stream a -> Stream b
map f (MkStream s0 k) = MkStream s0 k'
  where
    k' s = (f a, s')
      where (a, s') = k s 

In other words, we store an initial seed of some type s and a step function of type s -> (a, s) that emits one element of our Stream. The type of our seed s can be anything; in our numbers example, the internal state is an Int. Another stream could use a completely different internal state of type (), like this:

-- ones = Cons 1 ones

ones :: Stream Int
ones = MkStream () (\_ -> (1, ()))

The general name for this trick is an "F-coalgebra" encoding of a corecursive type.

Once we encode our infinite stream non-recursively, we can safely optimize the stream by inlining and eta reduction:

map id l

-- l = MkStream s0 k
= map id (MkStream s0 k)

-- Inline definition of `map`
= MkStream s0 k'
  where
    k' = \s -> (id a, s')
      where
        (a, s') = k s

-- Inline definition of `id`
= MkStream s0 k'
  where
    k' = \s -> (a, s')
      where
        (a, s') = k s

-- Inline: (a, s') = k s
= MkStream s0 k'
  where
    k' = \s -> k s

-- Eta reduce
= MkStream s0 k'
  where
    k' = k

-- Inline: k' = k
= MkStream s0 k

-- l = MkStream s0 k
= l

Now let's encode Stream and map in Morte and compile the following four expressions:

map id

id

map f . map g

map (f . g)

Save the following Morte file once per example you want to test (the transcripts below name the copies example1.mt through example4.mt), uncommenting the corresponding expression:

(   \(id : forall (a : *) -> a -> a)
->  \(  (.)
    :   forall (a : *)
    ->  forall (b : *)
    ->  forall (c : *)
    ->  (b -> c)
    ->  (a -> b)
    ->  (a -> c)
    )
->  \(Pair : * -> * -> *)
->  \(P : forall (a : *) -> forall (b : *) -> a -> b -> Pair a b)
->  \(  first
    :   forall (a : *)
    ->  forall (b : *)
    ->  forall (c : *)
    ->  (a -> b)
    ->  Pair a c
    ->  Pair b c
    )

->  (   \(Stream : * -> *)
    ->  \(  map
        :   forall (a : *)
        ->  forall (b : *)
        ->  (a -> b)
        ->  Stream a
        ->  Stream b
        )

        -- example@1 = example@2
    ->  (   \(example@1 : forall (a : *) -> Stream a -> Stream a)
        ->  \(example@2 : forall (a : *) -> Stream a -> Stream a)

        -- example@3 = example@4
        ->  \(  example@3
            :   forall (a : *)
            ->  forall (b : *)
            ->  forall (c : *)
            ->  (b -> c)
            ->  (a -> b)
            ->  Stream a
            ->  Stream c
            )

        ->  \(  example@4
            :   forall (a : *)
            ->  forall (b : *)
            ->  forall (c : *)
            ->  (b -> c)
            ->  (a -> b)
            ->  Stream a
            ->  Stream c
            )

        -- Uncomment the example you want to test
        ->  example@1
--      ->  example@2
--      ->  example@3
--      ->  example@4
        )

        -- example@1
        (\(a : *) -> map a a (id a))

        -- example@2
        (\(a : *) -> id (Stream a))

        -- example@3
        (   \(a : *)
        ->  \(b : *)
        ->  \(c : *)
        ->  \(f : b -> c)
        ->  \(g : a -> b)
        ->  map a c ((.) a b c f g)
        )

        --  example@4
        (   \(a : *)
        ->  \(b : *)
        ->  \(c : *)
        ->  \(f : b -> c)
        ->  \(g : a -> b)
        ->  (.) (Stream a) (Stream b) (Stream c) (map b c f) (map a b g)
        )
    )

    -- Stream
    (   \(a : *)
    ->  forall (x : *)
    ->  (forall (s : *) -> s -> (s -> Pair a s) -> x)
    ->  x
    )

    -- map
    (   \(a : *)
    ->  \(b : *)
    ->  \(f : a -> b)
    ->  \(  st
        :   forall (x : *)
        -> (forall (s : *) -> s -> (s -> Pair a s) -> x)
        -> x
        )
    ->  \(x : *)
    ->  \(S : forall (s : *) -> s -> (s -> Pair b s) -> x)
    ->  st
        x
        (   \(s : *)
        ->  \(seed : s)
        ->  \(step : s -> Pair a s)
        ->  S
            s
            seed
            (\(seed@1 : s) -> first a b s f (step seed@1))
        )
    )
)

-- id
(\(a : *) -> \(va : a) -> va)

-- (.)
(   \(a : *)
->  \(b : *)
->  \(c : *)
->  \(f : b -> c)
->  \(g : a -> b)
->  \(va : a)
->  f (g va)
)

-- Pair
(\(a : *) -> \(b : *) -> forall (x : *) -> (a -> b -> x) -> x)

-- P
(   \(a : *)
->  \(b : *)
->  \(va : a)
->  \(vb : b)
->  \(x : *)
->  \(P : a -> b -> x)
->  P va vb
)

-- first
(   \(a : *)
->  \(b : *)
->  \(c : *)
->  \(f : a -> b)
->  \(p : forall (x : *) -> (a -> c -> x) -> x)
->  \(x : *)
->  \(Pair : b -> c -> x)
->  p x (\(va : a) -> \(vc : c) -> Pair (f va) vc)
)

Both example@1 and example@2 will generate alpha-equivalent code:

$ morte < example1.mt
∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → 
s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x
 : *) → (a → s → x) → x) → x) → x

λ(a : *) → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) →
 (a → s → x) → x) → x) → x) → st

$ morte < example2.mt
∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → 
s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x
 : *) → (a → s → x) → x) → x) → x

λ(a : *) → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) →
 (a → s → x) → x) → x) → x) → va

Similarly, example@3 and example@4 will generate alpha-equivalent code:

$ morte < example3.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x : 
*) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) →
 x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x)
 → x) → x) → x

λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b)
 → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s 
→ x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀
(x : *) → (c → s → x) → x) → x) → st x (λ(s : *) → λ(seed : 
s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(
seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x
 (λ(va : a) → Pair (f (g va)))))

$ morte < example4.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x : 
*) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) →
 x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x)
 → x) → x) → x

λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b)
 → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s 
→ x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀
(x : *) → (c → s → x) → x) → x) → va x (λ(s : *) → λ(seed : 
s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(
seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x
 (λ(va : a) → Pair (f (g va))))

We inadvertently proved stream fusion for free, but we're still not done yet! Everything we learn about recursive and corecursive sequences can be applied to model recursive and corecursive effects!

Effects

I will conclude this post by showing how to model both recursive and corecursive programs that have side effects. The recursive program will echo ninety-nine lines from stdin to stdout. The equivalent Haskell program is in the comment header:

-- recursive.mt

-- The Haskell code we will translate to Morte:
--
--     import Prelude hiding (
--         (+), (*), IO, putStrLn, getLine, (>>=), (>>), return )
-- 
--     -- Simple prelude
--
--     data Nat = Succ Nat | Zero
--
--     zero :: Nat
--     zero = Zero
--
--     one :: Nat
--     one = Succ Zero
--
--     (+) :: Nat -> Nat -> Nat
--     Zero   + n = n
--     Succ m + n = m + Succ n
--
--     (*) :: Nat -> Nat -> Nat
--     Zero   * n = Zero
--     Succ m * n = n + (m * n)
--
--     foldNat :: Nat -> (a -> a) -> a -> a
--     foldNat  Zero    f x = x
--     foldNat (Succ m) f x = f (foldNat m f x)
--
--     data IO r
--         = PutStrLn String (IO r)
--         | GetLine (String -> IO r)
--         | Return r
--
--     putStrLn :: String -> IO U
--     putStrLn str = PutStrLn str (Return Unit)
--
--     getLine :: IO String
--     getLine = GetLine Return
--
--     return :: a -> IO a
--     return = Return
--
--     (>>=) :: IO a -> (a -> IO b) -> IO b
--     PutStrLn str io >>= f = PutStrLn str (io >>= f)
--     GetLine k       >>= f = GetLine (\str -> k str >>= f)
--     Return r        >>= f = f r
--
--     -- Derived functions
--
--     (>>) :: IO U -> IO U -> IO U
--     m >> n = m >>= \_ -> n
--
--     two :: Nat
--     two = one + one
--
--     three :: Nat
--     three = one + one + one
--
--     four :: Nat
--     four = one + one + one + one
--
--     five :: Nat
--     five = one + one + one + one + one
--
--     six :: Nat
--     six = one + one + one + one + one + one
--
--     seven :: Nat
--     seven = one + one + one + one + one + one + one
--
--     eight :: Nat
--     eight = one + one + one + one + one + one + one + one
--
--     nine :: Nat
--     nine = one + one + one + one + one + one + one + one + one
--
--     ten :: Nat
--     ten = one + one + one + one + one + one + one + one + one + one
--
--     replicateM_ :: Nat -> IO U -> IO U
--     replicateM_ n io = foldNat n (io >>) (return Unit)
--
--     ninetynine :: Nat
--     ninetynine = nine * ten + nine
--
--     main_ :: IO U
--     main_ = replicateM_ ninetynine (getLine >>= putStrLn)

-- "Free" variables
(   \(String : *   )
->  \(U : *)
->  \(Unit : U)

    -- Simple prelude
->  (   \(Nat : *)
    ->  \(zero : Nat)
    ->  \(one : Nat)
    ->  \((+) : Nat -> Nat -> Nat)
    ->  \((*) : Nat -> Nat -> Nat)
    ->  \(foldNat : Nat -> forall (a : *) -> (a -> a) -> a -> a)
    ->  \(IO : * -> *)
    ->  \(return : forall (a : *) -> a -> IO a)
    ->  \((>>=)
        :   forall (a : *)
        ->  forall (b : *)
        ->  IO a
        ->  (a -> IO b)
        ->  IO b
        )
    ->  \(putStrLn : String -> IO U)
    ->  \(getLine : IO String)

        -- Derived functions
    ->  (   \((>>) : IO U -> IO U -> IO U)
        ->  \(two   : Nat)
        ->  \(three : Nat)
        ->  \(four  : Nat)
        ->  \(five  : Nat)
        ->  \(six   : Nat)
        ->  \(seven : Nat)
        ->  \(eight : Nat)
        ->  \(nine  : Nat)
        ->  \(ten   : Nat)
        ->  (   \(replicateM_ : Nat -> IO U -> IO U)
            ->  \(ninetynine : Nat)

            ->  replicateM_ ninetynine ((>>=) String U getLine putStrLn)
            )

            -- replicateM_
            (   \(n : Nat)
            ->  \(io : IO U)
            ->  foldNat n (IO U) ((>>) io) (return U Unit)
            )

            -- ninetynine
            ((+) ((*) nine ten) nine)
        )

        -- (>>)
        (   \(m : IO U)
        ->  \(n : IO U)
        ->  (>>=) U U m (\(_ : U) -> n)
        )

        -- two
        ((+) one one)

        -- three
        ((+) one ((+) one one))

        -- four
        ((+) one ((+) one ((+) one one)))

        -- five
        ((+) one ((+) one ((+) one ((+) one one))))

        -- six
        ((+) one ((+) one ((+) one ((+) one ((+) one one)))))

        -- seven
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))

        -- eight
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))

        -- nine
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))))

        -- ten
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))))
    )

    -- Nat
    (   forall (a : *)
    ->  (a -> a)
    ->  a
    ->  a
    )

    -- zero
    (   \(a : *)
    ->  \(Succ : a -> a)
    ->  \(Zero : a)
    ->  Zero
    )

    -- one
    (   \(a : *)
    ->  \(Succ : a -> a)
    ->  \(Zero : a)
    ->  Succ Zero
    )

    -- (+)
    (   \(m : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(n : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(a : *)
    ->  \(Succ : a -> a)
    ->  \(Zero : a)
    ->  m a Succ (n a Succ Zero)
    )

    -- (*)
    (   \(m : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(n : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(a : *)
    ->  \(Succ : a -> a)
    ->  \(Zero : a)
    ->  m a (n a Succ) Zero
    )

    -- foldNat
    (   \(n : forall (a : *) -> (a -> a) -> a -> a)
    ->  n
    )

    -- IO
    (   \(r : *)
    ->  forall (x : *)
    ->  (String -> x -> x)
    ->  ((String -> x) -> x)
    ->  (r -> x)
    ->  x
    )

    -- return
    (   \(a : *)
    ->  \(va : a)
    ->  \(x : *)
    ->  \(PutStrLn : String -> x -> x)
    ->  \(GetLine : (String -> x) -> x)
    ->  \(Return : a -> x)
    ->  Return va
    )

    -- (>>=)
    (   \(a : *)
    ->  \(b : *)
    ->  \(m : forall (x : *)
        ->  (String -> x -> x)
        ->  ((String -> x) -> x)
        ->  (a -> x)
        ->  x
        )
    ->  \(f : a
        ->  forall (x : *)
        -> (String -> x -> x)
        -> ((String -> x) -> x)
        -> (b -> x)
        -> x
        )
    ->  \(x : *)
    ->  \(PutStrLn : String -> x -> x)
    ->  \(GetLine : (String -> x) -> x)
    ->  \(Return : b -> x)
    ->  m x PutStrLn GetLine (\(va : a) -> f va x PutStrLn GetLine Return)
    )

    -- putStrLn
    (   \(str : String)
    ->  \(x : *)
    ->  \(PutStrLn : String -> x -> x  )
    ->  \(GetLine  : (String -> x) -> x)
    ->  \(Return   : U -> x)
    ->  PutStrLn str (Return Unit)
    )

    -- getLine
    (   \(x : *)
    ->  \(PutStrLn : String -> x -> x  )
    ->  \(GetLine  : (String -> x) -> x)
    ->  \(Return   : String -> x)
    -> GetLine Return
    )
)

This program will compile to a completely unrolled read-write loop, as most recursive programs will:

$ morte < recursive.mt
∀(String : *) → ∀(U : *) → U → ∀(x : *) → (String → x → x) →
 ((String → x) → x) → (U → x) → x

λ(String : *) → λ(U : *) → λ(Unit : U) → λ(x : *) → λ(PutStr
Ln : String → x → x) → λ(GetLine : (String → x) → x) → λ(Ret
urn : U → x) → GetLine (λ(va : String) → PutStrLn va (GetLin
e (λ(va@1 : String) → PutStrLn va@1 (GetLine (λ(va@2 : Strin
g) → PutStrLn va@2 (GetLine (λ(va@3 : String) → PutStrLn ...
 <snip>
... GetLine (λ(va@92 : String) → PutStrLn va@92 (GetLine (λ(
va@93 : String) → PutStrLn va@93 (GetLine (λ(va@94 : String)
 → PutStrLn va@94 (GetLine (λ(va@95 : String) → PutStrLn va@
95 (GetLine (λ(va@96 : String) → PutStrLn va@96 (GetLine (λ(
va@97 : String) → PutStrLn va@97 (GetLine (λ(va@98 : String)
 → PutStrLn va@98 (Return Unit))))))))))))))))))))))))))))))
))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
))))))))))))))))))))))))))))))))))))))))))))))))

In contrast, if we encode the effects corecursively we can express a program that echoes indefinitely from stdin to stdout:

-- corecursive.mt

-- data IOF r s
--     = PutStrLn String s
--     | GetLine (String -> s)
--     | Return r
--
-- data IO r = forall s . MkIO s (s -> IOF r s)
--
-- main = MkIO
--     Nothing
--     (maybe (\str -> PutStrLn str Nothing) (GetLine Just))

(   \(String : *)
->  (   \(Maybe : * -> *)
    ->  \(Just : forall (a : *) -> a -> Maybe a)
    ->  \(Nothing : forall (a : *) -> Maybe a)
    ->  \(  maybe
        :   forall (a : *)
        ->  Maybe a
        ->  forall (x : *)
        ->  (a -> x)
        ->  x
        ->  x
        )
    ->  \(IOF : * -> * -> *)
    ->  \(  PutStrLn
        :   forall (r : *)
        ->  forall (s : *)
        ->  String
        ->  s
        ->  IOF r s
        )
    ->  \(  GetLine
        :   forall (r : *)
        ->  forall (s : *)
        ->  (String -> s)
        ->  IOF r s
        )
    ->  \(  Return
        :   forall (r : *)
        ->  forall (s : *)
        ->  r
        ->  IOF r s
        )
    ->  (   \(IO : * -> *)
        ->  \(  MkIO
            :   forall (r : *)
            ->  forall (s : *)
            ->  s
            ->  (s -> IOF r s)
            ->  IO r
            )
        ->  (   \(main : forall (r : *) -> IO r)
            ->  main
            )

            -- main
            (   \(r : *)
            ->  MkIO
                r
                (Maybe String)
                (Nothing String)
                (   \(m : Maybe String)
                ->  maybe
                        String
                        m
                        (IOF r (Maybe String))
                        (\(str : String) ->
                            PutStrLn
                                r
                                (Maybe String)
                                str
                                (Nothing String)
                        )
                        (GetLine r (Maybe String) (Just String))
                )
            )
        )

        -- IO
        (   \(r : *)
        ->  forall (x : *)
        ->  (forall (s : *) -> s -> (s -> IOF r s) -> x)
        ->  x
        )

        -- MkIO
        (   \(r : *)
        ->  \(s : *)
        ->  \(seed : s)
        ->  \(step : s -> IOF r s)
        ->  \(x : *)
        ->  \(k : forall (s : *) -> s -> (s -> IOF r s) -> x)
        ->  k s seed step
        )
    )

    -- Maybe
    (\(a : *) -> forall (x : *) -> (a -> x) -> x -> x)

    -- Just
    (   \(a : *)
    ->  \(va : a)
    ->  \(x : *)
    ->  \(Just : a -> x)
    ->  \(Nothing : x)
    ->  Just va
    )

    -- Nothing
    (   \(a : *)
    ->  \(x : *)
    ->  \(Just : a -> x)
    ->  \(Nothing : x)
    ->  Nothing
    )

    -- maybe
    (   \(a : *)
    ->  \(m : forall (x : *) -> (a -> x) -> x -> x)
    ->  m
    )

    -- IOF
    (   \(r : *)
    ->  \(s : *)
    ->  forall (x : *)
    ->  (String -> s -> x)
    ->  ((String -> s) -> x)
    ->  (r -> x)
    ->  x
    )

    -- PutStrLn
    (   \(r : *)
    ->  \(s : *)
    ->  \(str : String)
    ->  \(vs : s)
    ->  \(x : *)
    ->  \(PutStrLn : String -> s -> x)
    ->  \(GetLine : (String -> s) -> x)
    ->  \(Return : r -> x)
    ->  PutStrLn str vs
    )

    -- GetLine
    (   \(r : *)
    ->  \(s : *)
    ->  \(k : String -> s)
    ->  \(x : *)
    ->  \(PutStrLn : String -> s -> x)
    ->  \(GetLine : (String -> s) -> x)
    ->  \(Return : r -> x)
    ->  GetLine k
    )

    -- Return
    (   \(r : *)
    ->  \(s : *)
    ->  \(vr : r)
    ->  \(x : *)
    ->  \(PutStrLn : String -> s -> x)
    ->  \(GetLine : (String -> s) -> x)
    ->  \(Return : r -> x)
    ->  Return vr
    )

)

This compiles to a state machine that we can unfold one step at a time:

$ morte < corecursive.mt
∀(String : *) → ∀(r : *) → ∀(x : *) → (∀(s : *) → s → (s → ∀
(x : *) → (String → s → x) → ((String → s) → x) → (r → x) → 
x) → x) → x

λ(String : *) → λ(r : *) → λ(x : *) → λ(k : ∀(s : *) → s → (
s → ∀(x : *) → (String → s → x) → ((String → s) → x) → (r → 
x) → x) → x) → k (∀(x : *) → (String → x) → x → x) (λ(x : *)
 → λ(Just : String → x) → λ(Nothing : x) → Nothing) (λ(m : ∀
(x : *) → (String → x) → x → x) → m (∀(x : *) → (String → (∀
(x : *) → (String → x) → x → x) → x) → ((String → ∀(x : *) →
 (String → x) → x → x) → x) → (r → x) → x) (λ(str : String) 
→ λ(x : *) → λ(PutStrLn : String → (∀(x : *) → (String → x) 
→ x → x) → x) → λ(GetLine : (String → ∀(x : *) → (String → x
) → x → x) → x) → λ(Return : r → x) → PutStrLn str (λ(x : *)
 → λ(Just : String → x) → λ(Nothing : x) → Nothing)) (λ(x : 
*) → λ(PutStrLn : String → (∀(x : *) → (String → x) → x → x)
 → x) → λ(GetLine : (String → ∀(x : *) → (String → x) → x → 
x) → x) → λ(Return : r → x) → GetLine (λ(va : String) → λ(x 
: *) → λ(Just : String → x) → λ(Nothing : x) → Just va))

I don't expect you to understand that output, other than to know that we can translate it to any backend that provides functions and primitive read/write operations.

Conclusion

If you would like to use Morte, you can find the library on both Github and Hackage. I also provide a Morte tutorial that you can use to learn more about the library.

Morte is dependently typed in theory, but in practice I have not exercised this feature, so I don't yet understand its implications. If this turns out to be a mistake then I will downgrade Morte to System Fw, which has higher kinds and polymorphism, but no dependent types.

Additionally, Morte might be usable to transmit code in a secure and typed way in a distributed environment, or to share code between diverse functional languages by providing a common intermediate language. However, both of those scenarios require additional work, such as establishing a shared set of foreign primitives and creating Morte encoders/decoders for each target language.

Also, there are additional optimizations which Morte might implement in the future. For example, Morte could use free theorems (equalities you deduce from the types) to simplify some code fragments even further, but Morte currently does not do this.

My next goals are:

  • Add a back-end to compile Morte to LLVM
  • Add a front-end to desugar a medium-level Haskell-like language to Morte

Once those steps are complete then Morte will be a usable intermediate language for writing super-optimizable programs.

Also, if you're wondering, the name Morte is a tribute to a talking skull from the game Planescape: Torment, since the Morte library is a "bare-bones" calculus of constructions.

Literature

If this topic interests you more, you may find the following links helpful, in roughly increasing order of difficulty:

Sunday, August 10, 2014

managed-1.0.0: A monad for managed resources

I'm splitting off the Managed type from the mvc library into its own stand-alone library. I've wanted to use this type outside of mvc for some time now, because it's an incredibly useful Applicative that I find myself reaching for in my own code whenever I need to acquire resources.

If you're not familiar with the Managed type, it's simple:

-- The real implementation uses smart constructors
newtype Managed a =
    Managed { with :: forall r . (a -> IO r) -> IO r }

-- It's a `Functor`/`Applicative`/`Monad`
instance Functor     Managed where ...
instance Applicative Managed where ...
instance Monad       Managed where ...

-- ... and also implements `MonadIO`
instance MonadIO Managed where ...

Here's an example of mixing the Managed monad with pipes to copy one file to another:

import Control.Monad.Managed
import System.IO
import Pipes
import qualified Pipes.Prelude as Pipes

main = runManaged $ do
    hIn  <- managed (withFile "in.txt" ReadMode)
    hOut <- managed (withFile "out.txt" WriteMode)
    liftIO $ runEffect $
        Pipes.fromHandle hIn >-> Pipes.toHandle hOut

However, this is not much more concise than the equivalent callback-based version. The real value of the Managed type is its Applicative instance, which you can use to lift operations from values that it wraps.
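For example, the Applicative operations let you combine two managed resources into a single Managed value before consuming them. Here is a small sketch reusing the file names from the example above (the name bothHandles is mine, for illustration):

import Control.Applicative (liftA2)
import Control.Monad.Managed
import System.IO

-- Acquire both file handles within one `Managed` value
bothHandles :: Managed (Handle, Handle)
bothHandles = liftA2 (,)
    (managed (withFile "in.txt"  ReadMode ))
    (managed (withFile "out.txt" WriteMode))

Running `with bothHandles` acquires both handles before the inner action runs and releases them afterwards.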

Equational reasoning

My previous post on equational reasoning at scale describes how you can use Applicatives to automatically extend Monoids while preserving the Monoid operations. The Managed Applicative is no different and provides the following type class instance that automatically lifts Monoid operations:

instance Monoid a => Monoid (Managed a)

Therefore, you can treat the Managed Applicative as yet another useful building block in your Monoid tool box.
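For example, since [Handle] is a Monoid, we can fold several managed file handles into one Managed [Handle] using (<>). This is a sketch that assumes the Monoid instance above; logHandles is an illustrative name:

import Control.Monad.Managed
import Data.Monoid ((<>))
import System.IO

-- Combine managed resources via the lifted list `Monoid`
logHandles :: Managed [Handle]
logHandles =
        fmap (\h -> [h]) (managed (withFile "log1.txt" WriteMode))
    <>  fmap (\h -> [h]) (managed (withFile "log2.txt" WriteMode))

main :: IO ()
main = runManaged $ do
    hs <- logHandles
    liftIO (mapM_ (\h -> hPutStrLn h "Hello") hs)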

However, Applicatives can do more than extend Monoids; they can extend Categorys, too. Given any Category, if you extend it with an Applicative you can automatically derive a new Category. Here's the general solution:

import Control.Applicative
import Control.Category 
import Prelude hiding ((.), id)

newtype Extend f c a b = Extend (f (c a b))

instance (Applicative f, Category c)
  => Category (Extend f c) where
    id = Extend (pure id)

    Extend f . Extend g = Extend (liftA2 (.) f g)

So let's take advantage of this fact to extend one of the pipes categories with simple resource management. All we have to do is wrap the pull-based pipes category in a bona-fide Category instance:

import Pipes

newtype Pull m a b = Pull (Pipe a b m ()) 

instance Monad m => Category (Pull m) where
    id = Pull cat

    Pull p1 . Pull p2 = Pull (p1 <-< p2)

Now we can automatically define resource-managed pipes by Extending them with the Managed Applicative:

import Control.Monad.Managed
import qualified Pipes.Prelude as Pipes
import System.IO

fromFile :: FilePath -> Extend Managed (Pull IO) () String
fromFile filePath = Extend $ do
    handle <- managed (withFile filePath ReadMode)
    return (Pull (Pipes.fromHandle handle))

toFile :: FilePath -> Extend Managed (Pull IO) String X
toFile filePath = Extend $ do
    handle <- managed (withFile filePath WriteMode)
    return (Pull (Pipes.toHandle handle))

All we need is a way to run Extended pipes and then we're good to go:

runPipeline :: Extend Managed (Pull IO) () X -> IO ()
runPipeline (Extend mp) = runManaged $ do
    Pull p <- mp
    liftIO $ runEffect (return () >~ p)

If we compose and run these Extended pipes they just "do the right thing":

main :: IO ()
main = runPipeline (fromFile "in.txt" >>> toFile "out.txt")

Let's check it out:

$ cat in.txt
1
2
3
$ ./example
$ cat out.txt
1
2
3

We can even reuse existing pipes, too:

reuse :: Monad m => Pipe a b m () -> Extend Managed (Pull m) a b
reuse = Extend . pure . Pull

main = runPipeline $
    fromFile "in.txt" >>> reuse (Pipes.take 2) >>> toFile "out.txt"

... and reuse does the right thing:

$ ./example
$ cat out.txt
1
2

What does it mean for reuse to "do the right thing"? Well, we can specify the correctness conditions for reuse as the following functor laws:

reuse (p1 >-> p2) = reuse p1 >>> reuse p2

reuse cat = id

These two laws enforce that reuse is "well-behaved" in a rigorous sense.

This is just one example of how you can use the Managed type to extend an existing Category. As an exercise, try to take other categories and extend them this way and see what surprising new connectable components you can create.

Conclusion

Experts will recognize that Managed is a special case of Codensity or ContT. The reason for defining a separate type is:

  • simpler inferred types,
  • additional type class instances, and
  • a more beginner-friendly name.

Managed is closely related in spirit to the Resource monad, which is now part of resourcet. The main differences between the two are:

  • Resource preserves the open and close operations
  • Managed works for arbitrary callbacks, even unrelated to resources

This is why I view them as complementary Monads.

Like all Applicatives, the Managed type is deceptively simple. This type does not do much in isolation, but it grows in power the more you compose it with other Applicatives to generate new Applicatives.

Sunday, July 20, 2014

Equational reasoning at scale

Haskell programmers care about the correctness of their software and they specify correctness conditions in the form of equations that their code must satisfy. They can then verify the correctness of these equations using equational reasoning to prove that the abstractions they build are sound. To an outsider this might seem like a futile, academic exercise: proving the correctness of small abstractions is difficult, so what hope do we have to prove larger abstractions correct? This post explains how to do precisely that: scale proofs to large and complex abstractions.

Purely functional programming uses composition to scale programs, meaning that:

  • We build small components that we can verify correct in isolation
  • We compose smaller components into larger components

If you saw "components" and thought "functions", think again! We can compose things that do not even remotely resemble functions, such as proofs! In fact, Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

  • We build small proofs that we can verify correct in isolation
  • We compose smaller proofs into larger proofs

The following sections illustrate in detail how this works in practice, using Monoids as the running example. We will prove the Monoid laws for simple types and work our way up to proving the Monoid laws for much more complex types. Along the way we'll learn how to keep the proof complexity flat as the types grow in size.

Monoids

Haskell's Prelude provides the following Monoid type class:

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

-- An infix operator equivalent to `mappend`
(<>) :: Monoid m => m -> m -> m
x <> y = mappend x y

... and all Monoid instances must obey the following laws:

mempty <> x = x                -- Left identity

x <> mempty = x                -- Right identity

(x <> y) <> z = x <> (y <> z)  -- Associativity

For example, Ints form a Monoid:

-- See "Appendix A" for some caveats
instance Monoid Int where
    mempty  =  0
    mappend = (+)

... and the Monoid laws for Ints are just the laws of addition:

0 + x = x

x + 0 = x

(x + y) + z = x + (y + z)

Now we can use (<>) and mempty instead of (+) and 0:

>>> 4 <> 2
6
>>> 5 <> mempty <> 5
10

This appears useless at first glance. We already have (+) and 0, so why are we using the Monoid operations?

Extending Monoids

Well, what if I want to combine things other than Ints, like pairs of Ints? I want to be able to write code like this:

>>> (1, 2) <> (3, 4)
(4, 6)

Well, that seems mildly interesting. Let's try to define a Monoid instance for pairs of Ints:

instance Monoid (Int, Int) where
    mempty = (0, 0)
    mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)

Now my wish is true and I can "add" binary tuples together using (<>) and mempty:

>>> (1, 2) <> (3, 4)
(4, 6)
>>> (1, 2) <> (3, mempty) <> (mempty, 4)
(4, 6)
>>> (1, 2) <> mempty <> (3, 4)
(4, 6)

However, I still haven't proven that this new Monoid instance obeys the Monoid laws. Fortunately, this is a very simple proof.

I'll begin with the first Monoid law, which requires that:

mempty <> x = x

We will begin from the left-hand side of the equation and try to arrive at the right-hand side by substituting equals-for-equals (a.k.a. "equational reasoning"):

-- Left-hand side of the equation
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- `mempty = (0, 0)`
= mappend (0, 0) x

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (0, 0) (xL, xR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (0 + xL, 0 + xR)

-- 0 + x = x
= (xL, xR)

-- x = (xL, xR)
= x

The proof for the second Monoid law is symmetric:

-- Left-hand side of the equation
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mempty = (0, 0)
= mappend x (0, 0)

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (0, 0)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (xL + 0, xR + 0)

-- x + 0 = x
= (xL, xR)

-- x = (xL, xR)
= x

The third Monoid law requires that (<>) is associative:

(x <> y) <> z = x <> (y <> z)

Again I'll begin from the left side of the equation:

-- Left-hand side
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL + yL, xR + yR) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= ((xL + yL) + zL, (xR + yR) + zR)

-- (x + y) + z = x + (y + z)
= (xL + (yL + zL), xR + (yR + zR))

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (yL + zL, yR + zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)

That completes the proof of the three Monoid laws, but I'm not satisfied with these proofs.

Generalizing proofs

I don't like the above proofs because they are disposable, meaning that I cannot reuse them to prove other properties of interest. I'm a programmer, so I loathe busy work and unnecessary repetition, both for code and proofs. I would like to find a way to generalize the above proofs so that I can use them in more places.

We improve proof reuse in the same way that we improve code reuse. To see why, consider the following sort function:

sort :: [Int] -> [Int]

This sort function is disposable because it only works on Ints. For example, I cannot use the above function to sort a list of Doubles.

Fortunately, programming languages with generics let us generalize sort by parametrizing sort on the element type of the list:

sort :: Ord a => [a] -> [a]

That type says that we can call sort on any list of as, so long as the type a implements the Ord type class (a comparison interface). This works because sort doesn't really care whether or not the elements are Ints; sort only cares if they are comparable.

Similarly, we can make the proof more "generic". If we inspect the proof closely, we will notice that we don't really care whether or not the tuple contains Ints. The only Int-specific properties we use in our proof are:

0 + x = x

x + 0 = x

(x + y) + z = x + (y + z)

However, these properties hold true for all Monoids, not just Ints. Therefore, we can generalize our Monoid instance for tuples by parametrizing it on the type of each field of the tuple:

instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)

    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)

The above Monoid instance says that we can combine tuples so long as we can combine their individual fields. Our original Monoid instance was just a special case of this instance where both the a and b types are Ints.

Note: The mempty and mappend on the left-hand side of each equation are for tuples. The memptys and mappends on the right-hand side of each equation are for the types a and b. Haskell overloads type class methods like mempty and mappend to work on any type that implements the Monoid type class, and the compiler distinguishes them by their inferred types.

We can similarly generalize our original proofs, too, by just replacing the Int-specific parts with their more general Monoid counterparts.

Here is the generalized proof of the left identity law:

-- Left-hand side of the equation
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- `mempty = (mempty, mempty)`
= mappend (mempty, mempty) x

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (mempty, mempty) (xL, xR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend mempty xL, mappend mempty xR)

-- Monoid law: mappend mempty x = x
= (xL, xR)

-- x = (xL, xR)
= x

... the right identity law:

-- Left-hand side of the equation
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mempty = (mempty, mempty)
= mappend x (mempty, mempty)

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (mempty, mempty)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend xL mempty, mappend xR mempty)

-- Monoid law: mappend x mempty = x
= (xL, xR)

-- x = (xL, xR)
= x

... and the associativity law:

-- Left-hand side
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (mappend xL yL, mappend xR yR) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend (mappend xL yL) zL, mappend (mappend xR yR) zR)

-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= (mappend xL (mappend yL zL), mappend xR (mappend yR zR))

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend yL zL, mappend yR zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)

This more general Monoid instance lets us stick any Monoids inside the tuple fields and we can still combine the tuples. For example, lists form a Monoid:

-- Exercise: Prove the monoid laws for lists
instance Monoid [a] where
    mempty = []

    mappend = (++)

... so we can stick lists inside the right field of each tuple and still combine them:

>>> (1, [2, 3]) <> (4, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> (4, mempty) <> (mempty, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> mempty <> (4, [5, 6])
(5, [2, 3, 5, 6])

Why, we can even stick yet another tuple inside the right field and still combine them:

>>> (1, (2, 3)) <> (4, (5, 6))
(5, (7, 9))

We can try even more exotic permutations and everything still "just works":

>>> ((1, [2, 3]), ([4, 5], 6)) <> ((7, [8, 9]), ([10, 11], 12))
((8, [2, 3, 8, 9]), ([4, 5, 10, 11], 18))

This is our first example of a "scalable proof". We began from three primitive building blocks:

  • Int is a Monoid
  • [a] is a Monoid
  • (a, b) is a Monoid if a is a Monoid and b is a Monoid

... and we connected those three building blocks to assemble a variety of new Monoid instances. No matter how many tuples we nest the result is still a Monoid and obeys the Monoid laws. We don't need to re-prove the Monoid laws every time we assemble a new permutation of these building blocks.

However, these building blocks are still pretty limited. What other useful things can we combine to build new Monoids?

IO

We're so used to thinking of Monoids as data, so let's define a new Monoid instance for something entirely un-data-like:

-- See "Appendix A" for some caveats
instance Monoid b => Monoid (IO b) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)

The above instance says: "If b is a Monoid, then an IO action that returns a b is also a Monoid". Let's test this using the getLine function from the Prelude:

-- Read one line of input from stdin
getLine :: IO String

String is a Monoid, since a String is just a list of characters, so we should be able to mappend multiple getLine statements together. Let's see what happens:

>>> getLine  -- Reads one line of input
Hello<Enter>
"Hello"
>>> getLine <> getLine
ABC<Enter>
DEF<Enter>
"ABCDEF"
>>> getLine <> getLine <> getLine
1<Enter>
23<Enter>
456<Enter>
"123456"

Neat! When we combine multiple commands we combine their effects and their results.

Of course, we don't have to limit ourselves to reading strings. We can use readLn from the Prelude to read in anything that implements the Read type class:

-- Parse a `Read`able value from one line of stdin
readLn :: Read a => IO a

All we have to do is tell the compiler which type a we intend to Read by providing a type signature:

>>> readLn :: IO (Int, Int)
(1, 2)<Enter>
(1,2)
>>> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(4,6)
>>> readLn <> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)

This works because:

  • Int is a Monoid
  • Therefore, (Int, Int) is a Monoid
  • Therefore, IO (Int, Int) is a Monoid

Or let's flip things around and nest IO actions inside of a tuple:

>>> let ios = (getLine, readLn) :: (IO String, IO (Int, Int))
>>> let (getLines, readLns) = ios <> ios <> ios
>>> getLines
1<Enter>
23<Enter>
456<Enter>
"123456"
>>> readLns
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)

We can very easily reason that the type (IO String, IO (Int, Int)) obeys the Monoid laws because:

  • String is a Monoid
  • If String is a Monoid then IO String is also a Monoid
  • Int is a Monoid
  • If Int is a Monoid, then (Int, Int) is also a Monoid
  • If (Int, Int) is a Monoid, then IO (Int, Int) is also a Monoid
  • If IO String is a Monoid and IO (Int, Int) is a Monoid, then (IO String, IO (Int, Int)) is also a Monoid

However, we don't really have to reason about this at all. The compiler will automatically assemble the correct Monoid instance for us. The only thing we need to verify is that the primitive Monoid instances obey the Monoid laws, and then we can trust that any larger Monoid instance the compiler derives will also obey the Monoid laws.

The Unit Monoid

Haskell Prelude also provides the putStrLn function, which echoes a String to standard output with a newline:

putStrLn :: String -> IO ()

Is putStrLn combinable? There's only one way to find out!

>>> putStrLn "Hello" <> putStrLn "World"
Hello
World

Interesting, but why does that work? Well, let's look at the types of the commands we are combining:

putStrLn "Hello" :: IO ()
putStrLn "World" :: IO ()

Well, we said that IO b is a Monoid if b is a Monoid, and b in this case is () (pronounced "unit"), which you can think of as an "empty tuple". Therefore, () must form a Monoid of some sort, and if we dig into Data.Monoid, we will discover the following Monoid instance:

-- Exercise: Prove the monoid laws for `()`
instance Monoid () where
    mempty = ()

    mappend () () = ()

This says that empty tuples form a trivial Monoid, since there's only one possible value (ignoring bottom) for an empty tuple: (). Therefore, we can derive that IO () is a Monoid because () is a Monoid.

Functions

Alright, so we can combine putStrLn "Hello" with putStrLn "World", but can we combine naked putStrLn functions?

>>> (putStrLn <> putStrLn) "Hello"
Hello
Hello

Woah, how does that work?

We never wrote a Monoid instance for the type String -> IO (), yet somehow the compiler magically accepted the above code and produced a sensible result.

This works because of the following Monoid instance for functions:

instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty

    mappend f g = \a -> mappend (f a) (g a)

This says: "If b is a Monoid, then any function that returns a b is also a Monoid".

The compiler then deduced that:

  • () is a Monoid
  • If () is a Monoid, then IO () is also a Monoid
  • If IO () is a Monoid then String -> IO () is also a Monoid

The compiler is a trusted friend, deducing Monoid instances we never knew existed.
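This instance is not specific to IO, either. Any function whose result is a Monoid combines pointwise, so since Strings and lists are Monoids we would expect:

>>> (show <> show) 5
"55"
>>> (take 2 <> drop 2) [1, 2, 3, 4]
[1,2,3,4]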

Monoid plugins

Now we have enough building blocks to assemble a non-trivial example. Let's build a key logger with a Monoid-based plugin system.

The central scaffold of our program is a simple main loop that echoes characters from standard input to standard output:

main = do
    hSetEcho stdin False
    forever $ do
        c <- getChar
        putChar c

However, we would like to intercept key strokes for nefarious purposes, so we will slightly modify this program to install a handler at the beginning of the program that we will invoke on every incoming character:

install :: IO (Char -> IO ())
install = ???

main = do
    hSetEcho stdin False
    handleChar <- install
    forever $ do
        c <- getChar
        handleChar c
        putChar c

Notice that the type of install is exactly the correct type to be a Monoid:

  • () is a Monoid
  • Therefore, IO () is also a Monoid
  • Therefore Char -> IO () is also a Monoid
  • Therefore IO (Char -> IO ()) is also a Monoid

Therefore, we can combine key logging plugins together using Monoid operations. Here is one such example:

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    hSetBuffering handle NoBuffering
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c

Now, every key stroke will be recorded to both file1.txt and file2.txt. Let's confirm that this works as expected:

$ ./logger
Test<Enter>
ABC<Enter>
42<Enter>
<Ctrl-C>
$ cat file1.txt
Test
ABC
42
$ cat file2.txt
Test
ABC
42

Try writing your own Plugins and mixing them in with (<>) to see what happens. "Appendix C" contains the complete code for this section so you can experiment with your own Plugins.
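For example, here is a hypothetical extra Plugin (my own, not part of the logger above) that mirrors every keystroke to standard error:

-- Hypothetical plugin: echo each character to stderr
toStderr :: Plugin
toStderr = return (\c -> hPutChar stderr c)

Mixing it in is just another (<>):

handleChar <- logTo "file1.txt" <> toStderr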

Applicatives

Notice that I never actually proved the Monoid laws for the following two Monoid instances:

instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty
    mappend f g = \a -> mappend (f a) (g a)

instance Monoid a => Monoid (IO a) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)

The reason why is that they are both special cases of a more general pattern. We can detect the pattern if we rewrite both of them to use the pure and liftA2 functions from Control.Applicative:

import Control.Applicative (pure, liftA2)

instance Monoid b => Monoid (a -> b) where
    mempty = pure mempty

    mappend = liftA2 mappend

instance Monoid b => Monoid (IO b) where
    mempty = pure mempty

    mappend = liftA2 mappend

This works because both IO and functions implement the following Applicative interface:

class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

-- Lift a binary function over the functor `f`
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2 f x y = (pure f <*> x) <*> y

... and all Applicative instances must obey several Applicative laws:

pure id <*> v = v

((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)

pure f <*> pure x = pure (f x)

u <*> pure y = pure (\f -> f y) <*> u

These laws may seem a bit ad hoc, but this paper explains that you can reorganize the Applicative class to this equivalent type class:

class Functor f => Monoidal f where
    unit  :: f ()
    (#) :: f a -> f b -> f (a, b)

Then the corresponding laws become much more symmetric:

fmap snd (unit # x) = x                 -- Left identity

fmap fst (x # unit) = x                 -- Right identity

fmap assoc ((x # y) # z) = x # (y # z)  -- Associativity
  where
    assoc ((a, b), c) = (a, (b, c))

fmap (f *** g) (x # y) = fmap f x # fmap g y  -- Naturality
  where
    (f *** g) (a, b) = (f a, g b)

I personally prefer the Monoidal formulation, but you go to war with the army you have, so we will use the Applicative type class for this post.
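For reference, the two formulations are interchangeable. Here is a sketch of one direction of that equivalence, recovering the Monoidal operations from any Applicative (the names unitA and pairA are mine, not from the paper):

import Control.Applicative

-- `unit` and `(#)` defined in terms of `pure` and `liftA2`
unitA :: Applicative f => f ()
unitA = pure ()

pairA :: Applicative f => f a -> f b -> f (a, b)
pairA = liftA2 (,)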

All Applicatives possess a very powerful property: they can all automatically lift Monoid operations using the following instance:

instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty = pure mempty

    mappend = liftA2 mappend

This says: "If f is an Applicative and b is a Monoid, then f b is also a Monoid." In other words, we can automatically extend any existing Monoid with some new feature f and get back a new Monoid.

Note: The above instance is bad Haskell because it overlaps with other type class instances. In practice we have to duplicate the above code once for each Applicative. Also, for some Applicatives we may want a different Monoid instance.

We can prove that the above instance obeys the Monoid laws without knowing anything about f and b, other than the fact that f obeys the Applicative laws and b obeys the Monoid laws. These proofs are a little long, so I've included them in Appendix B.

Both IO and functions implement the Applicative type class:

instance Applicative IO where
    pure = return

    iof <*> iox = do
        f <- iof
        x <- iox
        return (f x)

instance Applicative ((->) a) where
    pure x = \_ -> x

    kf <*> kx = \a ->
        let f = kf a
            x = kx a
        in  f x

This means that we can kill two birds with one stone. Every time we prove the Applicative laws for some functor F:

instance Applicative F where ...

... we automatically prove that the following Monoid instance is correct for free:

instance Monoid b => Monoid (F b) where
    mempty = pure mempty

    mappend = liftA2 mappend

In the interest of brevity, I will skip the proofs of the Applicative laws, but I may cover them in a subsequent post.

The beauty of Applicative Functors is that every new Applicative instance we discover adds a new building block to our Monoid toolbox, and Haskell programmers have already discovered lots of Applicative Functors.

Revisiting tuples

One of the very first Monoid instances we wrote was:

instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)

    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)

Check this out:

instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = pure mempty

    mappend = liftA2 mappend

This Monoid instance is yet another special case of the Applicative pattern we just covered!

This works because of the following Applicative instance in Control.Applicative:

instance Monoid a => Applicative ((,) a) where
    pure b = (mempty, b)

    (a1, f) <*> (a2, x) = (mappend a1 a2, f x)

This instance obeys the Applicative laws (proof omitted), so our Monoid instance for tuples is automatically correct, too.
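You can watch this instance accumulate the first field monoidally in GHCi; since String is a Monoid, we would expect:

>>> ("Hello, ", (+ 1)) <*> ("world!", 2)
("Hello, world!",3)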

Composing applicatives

In the very first section I wrote:

Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

  • We build small proofs that we can verify correct in isolation
  • We compose smaller proofs into larger proofs

I don't like to use the word compose lightly. In the context of category theory, compose has a very rigorous meaning, indicating composition of morphisms in some category. This final section will show that we can actually compose Monoid proofs in a very rigorous sense of the word.

We can define a category of Monoid proofs, where the objects are proofs that particular types are Monoids, and the morphisms are Applicatives: each Applicative f takes a proof that b is a Monoid to a proof that f b is a Monoid.

So in our Plugin example, we began from the proof that () was a Monoid and then composed three Applicative morphisms to prove that Plugin was a Monoid. I will use the following diagram to illustrate this:

+-----------------------+
|                       |
| Legend:  * = Object   |
|                       |
|          v            |
|          | = Morphism |
|          v            |
|                       |
+-----------------------+

* `()` is a `Monoid`

v
| IO
v

* `IO ()` is a `Monoid`

v
| ((->) Char)
v

* `Char -> IO ()` is a `Monoid`

v
| IO
v

* `IO (Char -> IO ())` (i.e. `Plugin`) is a `Monoid`

Therefore, we were literally composing proofs together.

Conclusion

You can equationally reason at scale by decomposing larger proofs into smaller reusable proofs, the same way we decompose programs into smaller and more reusable components. There is no limit to how many proofs you can compose together, and therefore there is no limit to how complex of a program you can tame using equational reasoning.

This post only gave one example of composing proofs within Haskell. The more you learn about the language, the more examples of composable proofs you will encounter. Another common example is automatically deriving Monad proofs by composing monad transformers.

As you learn Haskell, you will discover that the hard part is not proving things. Rather, the challenge is learning how to decompose proofs into smaller proofs, and you can cultivate this skill by studying category theory and abstract algebra. These mathematical disciplines teach you how to extract common and reusable proofs and patterns from what appears to be disposable and idiosyncratic code.

Appendix A - Missing Monoid instances

These Monoid instances from this post do not actually appear in the Haskell standard library:

instance Monoid b => Monoid (IO b)

instance Monoid Int

The first instance was recently proposed here on the Glasgow Haskell Users mailing list. However, in the short term you can work around its absence by writing your own Monoid instances by hand, inserting a sufficient number of pures and liftA2s.

For example, suppose we wanted to provide a Monoid instance for Plugin. We would just newtype Plugin and write:

newtype Plugin = Plugin { install :: IO (Char -> IO ()) }

instance Monoid Plugin where
    mempty = Plugin (pure (pure (pure mempty)))

    mappend (Plugin p1) (Plugin p2) =
        Plugin (liftA2 (liftA2 (liftA2 mappend)) p1 p2)

This is exactly what the compiler would have derived for us.

Alternatively, you could define an orphan Monoid instance for IO, but this is generally frowned upon.

There is no default Monoid instance for Int because there are actually two possible instances to choose from:

-- Alternative #1
instance Monoid Int where
    mempty = 0

    mappend = (+)

-- Alternative #2
instance Monoid Int where
    mempty = 1

    mappend = (*)

So instead, Data.Monoid sidesteps the issue by providing two newtypes to distinguish which instance we prefer:

newtype Sum a = Sum { getSum :: a }

instance Num a => Monoid (Sum a)

newtype Product a = Product { getProduct :: a }

instance Num a => Monoid (Product a)
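
For example, wrapping a number in the appropriate newtype selects the instance we want; in GHCi we would see:

>>> getSum (Sum 2 <> Sum 3)
5
>>> getProduct (Product 2 <> Product 3)
6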

An even better solution is to use a semiring, which allows two Monoid instances to coexist for the same type. You can think of Haskell's Num class as an approximation of the semiring class:

class Num a where
    fromInteger :: Integer -> a

    (+) :: a -> a -> a

    (*) :: a -> a -> a

    -- ... and other operations unrelated to semirings

Note that we can also lift the Num class over the Applicative class, exactly the same way we lifted the Monoid class. Here's the code:

instance (Applicative f, Num a) => Num (f a) where
    fromInteger n = pure (fromInteger n)

    (+) = liftA2 (+)

    (*) = liftA2 (*)

    (-) = liftA2 (-)

    negate = fmap negate

    abs = fmap abs

    signum = fmap signum

This lifting guarantees that if a obeys the semiring laws then so will f a. Of course, you will have to specialize the above instance to every concrete Applicative because otherwise you will get overlapping instances.
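For example, specializing the above instance to the function Applicative recovers the usual pointwise arithmetic on functions. This is a sketch of that specialization (it does not appear in base):

import Control.Applicative

-- `Num` lifted over functions: every operation applies pointwise
instance Num b => Num (a -> b) where
    fromInteger = pure . fromInteger

    (+) = liftA2 (+)

    (*) = liftA2 (*)

    (-) = liftA2 (-)

    negate = fmap negate

    abs = fmap abs

    signum = fmap signum

With this instance in scope, an expression like (sin + cos) 0 evaluates pointwise to sin 0 + cos 0 = 1.0.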

Appendix B

These are the proofs to establish that the following Monoid instance obeys the Monoid laws:

instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty = pure mempty

    mappend = liftA2 mappend

... meaning that if f obeys the Applicative laws and b obeys the Monoid laws, then f b also obeys the Monoid laws.

Proof of the left identity law:

mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- mappend = liftA2 mappend
= liftA2 mappend mempty x

-- mempty = pure mempty
= liftA2 mappend (pure mempty) x

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> pure mempty) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= pure (mappend mempty) <*> x

-- Eta conversion
= pure (\a -> mappend mempty a) <*> x

-- mappend mempty x = x
= pure (\a -> a) <*> x

-- id = \x -> x
= pure id <*> x

-- Applicative law: pure id <*> v = v
= x

Proof of the right identity law:

x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mappend = liftA2 mappend
= liftA2 mappend x mempty

-- mempty = pure mempty
= liftA2 mappend x (pure mempty)

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> x) <*> pure mempty

-- Applicative law: u <*> pure y = pure (\f -> f y) <*> u
= pure (\f -> f mempty) <*> (pure mappend <*> x)

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> pure (\f -> f mempty)) <*> pure mappend) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= (pure ((.) (\f -> f mempty)) <*> pure mappend) <*> x

-- Applicative law : pure f <*> pure x = pure (f x)
= pure ((.) (\f -> f mempty) mappend) <*> x

-- `(.) f g` is just prefix notation for `f . g`
= pure ((\f -> f mempty) . mappend) <*> x

-- f . g = \x -> f (g x)
= pure (\x -> (\f -> f mempty) (mappend x)) <*> x

-- Apply the lambda
= pure (\x -> mappend x mempty) <*> x

-- Monoid law: mappend x mempty = x
= pure (\x -> x) <*> x

-- id = \x -> x
= pure id <*> x

-- Applicative law: pure id <*> v = v
= x

Proof of the associativity law:

(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- mappend = liftA2 mappend
= liftA2 mappend (liftA2 mappend x y) z

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> ((pure mappend <*> x) <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> pure mappend) <*> (pure mappend <*> x)) <*> y) <*> z

-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> (pure mappend <*> x)) <*> y) <*> z
  where
    f = (.) mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((((pure (.) <*> pure f) <*> pure mappend) <*> x) <*> y) <*> z
  where
    f = (.) mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> y) <*> z
  where
    f = (.) ((.) mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> x) <*> y) <*> z
  where
    f = (.) ((.) mappend) mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f = ((.) mappend) . mappend

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (((.) mappend) . mappend) x

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (.) mappend (mappend x)

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x = mappend . (mappend x)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (mappend . (mappend x)) y

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = mappend (mappend x y)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = mappend (mappend x y) z

-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = mappend x (mappend y z)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = (mappend x . mappend y) z

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = mappend x . mappend y

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (.) (mappend x) (mappend y)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (((.) . mappend) x) (mappend y)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = ((((.) . mappend) x) . mappend) y

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (((.) . mappend) x) . mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (.) (((.) . mappend) x) mappend

-- Lambda abstraction
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (\k -> k mappend) ((.) (((.) . mappend) x))

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (\k -> k mappend) (((.) . ((.) . mappend)) x)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f = (\k -> k mappend) . ((.) . ((.) . mappend))

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f = (.) (\k -> k mappend) ((.) . ((.) . mappend))

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure g <*> pure f) <*> x) <*> y) <*> z
  where
    g = (.) (\k -> k mappend)
    f = (.) . ((.) . mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (\k -> k mappend)) <*> pure f) <*> x) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (\k -> k mappend) <*> (pure f <*> x)) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- u <*> pure y = pure (\k -> k y) <*> u
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)


-- (.) f g = f . g
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) (.) ((.) . mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure g <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
  where
    g = (.) (.)
    f = (.) . mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((((pure (.) <*> pure (.)) <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> (pure f <*> x)) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) . mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) (.) mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) (.)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (.)) <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> (pure mappend <*> x)) <*> (pure mappend <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (pure mappend <*> x) <*> ((pure mappend <*> y) <*> z)

-- liftA2 f x y = (pure f <*> x) <*> y
= liftA2 mappend x (liftA2 mappend y z)

-- mappend = liftA2 mappend
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)

Appendix C: Monoid key logging

Here is the complete program for a key logger with a Monoid-based plugin system:

import Control.Applicative (pure, liftA2)
import Control.Monad (forever)
import Data.Monoid
import System.IO

instance Monoid b => Monoid (IO b) where
    mempty = pure mempty

    mappend = liftA2 mappend

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    hSetBuffering handle NoBuffering
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c