Wednesday, September 19, 2012

The MonadTrans class is missing a method

My work on pipes-2.4 leads me to the inescapable conclusion that the MonadTrans class is incomplete. In an ideal world, this is what it should actually look like:
{-# LANGUAGE Rank2Types #-}

class MonadTrans t where
    lift  :: (Monad m, Monad (t m)) => m a -> t m a
    embed :: (Monad m, Monad (t m), Monad (t n))
          => (forall a .   m a -> t n a)
          -> (forall b . t m b -> t n b)
          -- This last forall is optional

(>|>)
 :: (Monad f, Monad g, Monad (t g), Monad (t h),
     MonadTrans t)
 => (forall a . f a -> t g a)
 -> (forall b . g b -> t h b)
 -> (forall c . f c -> t h c) -- This last forall is optional
(f >|> g) x = embed g (f x)

squash :: (Monad (t (t m)), Monad (t m), MonadTrans t)
       => t (t m) a -> t m a
squash = embed id

mapT
 :: (Monad m, Monad n, Monad (t m), Monad (t n), MonadTrans t)
 => (forall a . m a -> n a) -> t m b -> t n b
mapT morph = embed (lift . morph)
I can justify this additional method just by changing the names around and using a type operator:
{-# LANGUAGE Rank2Types, TypeOperators #-}

type a :~> b = forall r . a r -> b r

class MonadM m where
    returnM :: (Monad a, Monad (m a))
            =>  a :~> m a
    bindM   :: (Monad a, Monad (m a), Monad (m b))
            => (a :~> m b) -> (m a :~> m b)

(>|>) :: (Monad a, Monad b, Monad (m b), Monad (m c),
          MonadM m)
      => (a :~> m b) -> (b :~> m c) -> (a :~> m c)
(f >|> g) x = bindM g (f x)

joinM :: (Monad (m (m a)), Monad (m a), MonadM m)
      => m (m a) :~> m a
joinM = bindM id

fmapM
 :: (Monad a, Monad b, Monad (m a), Monad (m b), MonadM m)
 => (a :~> b) -> (m a :~> m b)
fmapM f = bindM (returnM . f)

In other words, I've stolen a page from Conor McBride's notebook and defined lift and embed as a higher-order monad in the category of monad morphisms. Going back to the previous names, we can establish that certain laws must hold:
-- Categorical version
lift >|> f = f
f >|> lift = f
(f >|> g) >|> h = f >|> (g >|> h)

-- bind/return version
embed lift m = m
embed f (lift m) = f m
embed g (embed f m) = embed (\x -> embed g (f x)) m

-- join/return version
squash (lift m) = m
squash (mapT lift m) = m
squash (squash m) = squash (mapT squash m)
Obviously, I won't suggest we break the existing MonadTrans class by adding an additional method. Instead, we can simply define a new MonadM class, make all existing monad transformers instances of it, and possibly make MonadTrans a superclass of it.

I'll bet more experienced Haskell programmers have wanted mapT or squash in one form or another. The above type-class provides a uniform interface to these operations, so that you don't have to rely on transformer-specific functions like mapStateT or mapMaybeT.
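For example, mapT lets you promote a pure State computation to run over any base monad, instead of reaching for the StateT-specific mapStateT. Here is a sketch on my part, assuming a suitable instance for StateT:
import Control.Monad.Trans.State (State, StateT)
import Data.Functor.Identity (Identity, runIdentity)

-- A monad morphism from Identity into any monad
generalize :: (Monad m) => Identity a -> m a
generalize = return . runIdentity

-- Promote a pure State computation to any base monad using mapT
generalizeState :: (Monad m) => State s a -> StateT s m a
generalizeState = mapT generalize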

Note that all monad transformers have a sensible instance for MonadM that obeys the above laws. Usually the easiest route is to first define squash (i.e. joinM) and mapT (i.e. fmapM). mapT is usually very straightforward to write and simply involves type-chasing. squash simply takes the inner monad transformer and combines its operations with the outer monad transformer. Once you define these two, you can easily define embed:
-- i.e.: (bindM f) = joinM . fmapM f
embed f = squash . mapT f
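To make this concrete, here is what an instance for MaybeT might look like, following that recipe. This is my own sketch against the transformers library, not code from the upcoming package:
{-# LANGUAGE Rank2Types #-}

import Control.Monad (join, liftM)
import Control.Monad.Trans.Maybe (MaybeT(..))

-- squash: collapse the two Maybe layers into one
squashMaybe :: (Monad m) => MaybeT (MaybeT m) a -> MaybeT m a
squashMaybe m = MaybeT $ liftM join (runMaybeT (runMaybeT m))

-- mapT: apply the monad morphism underneath the Maybe layer
mapMaybeT' :: (forall a . m a -> n a) -> MaybeT m b -> MaybeT n b
mapMaybeT' morph = MaybeT . morph . runMaybeT

-- ... and embed follows mechanically
embedMaybe :: (Monad n)
           => (forall a . m a -> MaybeT n a) -> MaybeT m b -> MaybeT n b
embedMaybe f = squashMaybe . mapMaybeT' f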
In the near future I will release a package containing this type-class and appropriate instances for the standard monad transformers. Additionally, the pipes-2.4 release will include an extra motivation for defining the above type-class, besides the obvious utility of having mapT and squash functions.

Saturday, September 15, 2012

The functor design pattern

This post builds on my previous post on the category design pattern and this time I will discuss the functor design pattern. If you are an intermediate Haskell programmer and you think you already understand functors, then think again, because I promise you this post will turn most of your preconceptions about functors on their head and show you how functors are much more powerful and generally applicable than you might realize.


Mixing features


So let's pretend that my previous post on categories inflamed your interest in compositional programming. The first thing you might ask is "Which category do I use?". This seems like a perfectly reasonable question, since you'd like to pick a category that:
  • attracts a lot of mindshare,
  • contains a large library of reusable components,
  • boasts many features, and
  • is simple to use.
Unfortunately, reality says that we seldom get all of the above qualities and they often conflict with one another. For example, let's compare two categories I previously discussed:
  • Ordinary functions of type a -> b that you compose using: (.)
  • Monadic functions of type a -> m b that you compose using: (<=<)
Ordinary functions are simpler to read, write, use, and you can reason about their behavior more easily. However, monadic functions boast more useful features, some of which are indispensable (such as side effects when we use the IO monad). We really need some way to mix these two categories together to get the best of both worlds.

Fortunately, programmers solve compatibility problems like this all the time. We often have tools written in different languages or different frameworks and if we want to mix features from multiple frameworks we write code to bridge between them. So let's solve our category mixing problem by writing an adapter layer between the two categories. We write some function that transforms all the components in one category into components in the other category, so that we can then freely mix components from the two categories.

Typically, one category will be more featureful than the other, so the transformation is unidirectional. Using the above example, monadic functions are strictly more featureful and powerful than ordinary functions. Fortunately, we can promote all ordinary functions to monadic functions using the following map function:
-- This "map"s an ordinary function to a monadic function
map :: (Monad m) => (a -> b) -> (a -> m b)
map f = return . f
... but we cannot write the reverse function and automatically map every monadic function to an ordinary function.

We use map to combine a pure function f and a monadic function g. To do this, we promote f using map and then combine both of them using Kleisli composition:
f     ::              a ->   b
map f :: (Monad m) => a -> m b

g     :: (Monad m) => b -> m c


g <=< map f :: (Monad m) => a -> m c
Perfect! Now we can reuse all of our ordinary functions within this Kleisli category and not have to rewrite anything!

However, there's still a problem. Monad binds are not free and sometimes they get in the way of compiler optimization, so you can imagine that it would be wasteful if we lifted two pure functions in a row:
f     ::              a ->   b
map f :: (Monad m) => a -> m b

g     ::              b ->   c
map g :: (Monad m) => b -> m c

h     :: (Monad m) => c -> m d

-- Wasteful!
h <=< map g <=< map f :: (Monad m) => a -> m d
However, we're smart and we know that we can just optimize those two ordinary functions by using ordinary function composition first before lifting them with map:
-- Smarter!
h <=< map (g . f)
In other words, we assumed that the following transformation should be safe:
map g <=< map f = map (g . f)
Similarly, we expect that if we lift an identity function into a chain of Kleisli compositions:
g <=< map id <=< f
... then it should have no effect. Well, we can easily prove that because:
map id = return . id = return
... and return is the identity of Kleisli composition, therefore:
f :: (Monad m) => a -> m b
g :: (Monad m) => b -> m c

map id :: (Monad m) => b -> m b

g <=< map id <=< f
= g <=< return <=< f
= g <=< f  :: (Monad m) => a -> m c
Well, we just unwittingly defined our first functor! But where is the functor?


Functors


A functor transforms one category into another category. In the previous section we transformed the category of Haskell functions into the category of monadic functions and that transformation is our functor.

I will notationally distinguish between the two categories in question so I can be crystal clear about the mathematical definition of a functor. I will denote our "source" category's identity as idA and its composition as (._A), and these must obey the category laws:
-- Yes, "._A" is ugly, I know
idA ._A f = f                      -- Left identity

f ._A idA = f                      -- Right identity

(f ._A g) ._A h = f ._A (g ._A h)  -- Associativity
Similarly, I denote the "destination" category's identity as idB and its composition as (._B), which also must obey the category laws:
idB ._B f = f                      -- Left identity

f ._B idB = f                      -- Right identity

(f ._B g) ._B h = f ._B (g ._B h)  -- Associativity
Then a functor uses a function that we will call map to convert every component in the source category into a component in the destination category.

We expect this map function to satisfy two rules:

Rule #1: map must transform the composition operator in the source category to the composition operator in the destination category:
map (f ._A g) = map f ._B map g
This is the "composition law".

Rule #2: map must transform the identity in the source category to the identity in the destination category:
map idA = idB
This is the "identity law".

Together these two rules are the "functor laws" (technically, the covariant functor laws).

In the last section, our source category "A" was the category of ordinary functions:
idA   = id
(._A) = (.)
... and our destination category "B" was the Kleisli category:
idB   = return
(._B) = (<=<)
... and our map function obeyed the functor laws:
map id = return
map (f . g) = map f <=< map g
In other words, functors serve as adapters between categories that promote code written for the source category to be automatically compatible with the destination category. Functors arise every time we write compatibility layers and adapters between different pieces of software.


Functors hidden everywhere


I'll provide a few more examples of functors to tickle people's brains and show how functors arise all the time in your code without you even realizing it. For example, consider the length function:
length :: [a] -> Int
We can treat list concatenation as a category, where:
(.) = (++)
id  = []

[] ++ x = x                    -- Left  identity
x ++ [] = x                    -- Right identity
(x ++ y) ++ z = x ++ (y ++ z)  -- Associativity
Similarly, we can treat addition as a category, where:
(.) = (+)
id  = 0

0 + x = x                  -- Left  identity
x + 0 = x                  -- Right identity
(x + y) + z = x + (y + z)  -- Associativity
Then length is a functor from the category of list concatenation to the category of integer addition:
-- Composition law
length (xs ++ ys) = length xs + length ys

-- Identity law
length [] = 0
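You can spot-check both laws with throwaway values (my example, not part of the original exposition):
main :: IO ()
main = do
    let xs = "functor"
        ys = "pattern"
    print (length (xs ++ ys) == length xs + length ys)  -- composition law
    print (length ([] :: [Int]) == 0)                   -- identity law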
Or consider the pipe function from Control.Pipe:
pipe :: (Monad m) => (a -> b) -> Pipe a b m r

-- Composition law
pipe (f . g) = pipe f <+< pipe g

-- Identity law
pipe id = idP
Also, concat defines a functor from one list-concatenation category to another:
-- Composition
concat (x ++ y) = concat x ++ concat y

-- Identity
concat [] = []
So don't assume that the Functor class is in any way representative of the full breadth of functors.


The Functor class


So far I've deliberately picked examples that do not fit within the mold of Haskell's Functor class to open people's minds about functors. A lot of new Haskell programmers mistakenly believe that functors only encompass "container-ish" things and I hope the previous examples dispel that notion.

However, the Functor class still behaves the same way as the functors I've already discussed. The only restriction is that the Functor class only encompasses the narrow case where the source and target categories are both categories of ordinary functions:
class Functor f where
    fmap :: (a -> b) -> (f a -> f b)

fmap (f . g) = fmap f . fmap g  -- Composition law

fmap id = id                    -- Identity law
Haskell Functors recapitulate the themes of compatibility between categories and component reuse. For example, we might have several ordinary functions lying around in our toolbox:
f :: a -> b
g :: b -> c
... but we need to manipulate lists using functions of type:
h :: [a] -> [c]
Rather than rewrite all our old functions to work on lists, we can instead automatically promote all of them to work on lists using the map function from the Prelude:
map :: (a -> b) -> ([a] -> [b])

map f :: [a] -> [b]
map g :: [b] -> [c]

h = map g . map f :: [a] -> [c]
We know that we can combine two passes over a list into a single pass:
h = map (g . f) :: [a] -> [c]
... and doing nothing to each element does nothing to the list:
map id = id
Once again, we've just stated the functor laws:
map (f . g) = map f . map g  -- Composition law

map id = id                  -- Identity law
Notice that functors free us from having to write code that targets some monolithic category. Instead, we write all our code using whatever category we deem most appropriate and then promote it as necessary to whatever other categories might need the code we just wrote. This lets us work within focused and specialized categories suitable for their respective tasks rather than waste our time arguing over what category to standardize on.

Another benefit of functors is that they make our code automatically future-proof. We write our components using whatever category we have at our disposal and then as new categories arise we just define new functors to promote our existing code to work within those new categories.


Monad morphisms


Compatibility issues arise all the time between various Haskell frameworks. For example, let's assume I have a sizeable code-base written using the iteratee library, but then I find a really useful library on hackage using enumerator. I would rather not rewrite the enumerator-based library to use iteratee so I instead choose to write an adapter function that allows me to mix the two. I have to define some function, morph, that transforms Iteratees from the iteratee library into Iteratees from the enumerator library:
import qualified Data.Enumerator as E
import qualified Data.Iteratee.Base as I

morph :: I.Iteratee a m b -> E.Iteratee a m b
However, I might suspect that the iteratee library has a faster Monad instance since it uses continuation-passing style (disclaimer: I have no idea if this is true, it's just a hypothetical example). This means that I would like to be able to factor code to use the iteratee library's monad whenever possible:
f :: a -> I.Iteratee s m b

g :: b -> I.Iteratee s m c

h :: c -> E.Iteratee s m d

-- Hypothetically slower, since it uses E.Iteratee's bind
code1 :: a -> E.Iteratee s m d
code1 a = do b <- morph $ f a
             c <- morph $ g b
             h c

-- Hypothetically faster, since it uses I.Iteratee's bind
code2 :: a -> E.Iteratee s m d
code2 a = do c <- morph $ do b <- f a
                             g b
             h c
I would also expect that doing nothing in the iteratee monad translates to doing nothing in the enumerator monad:
morph $ return x
= return x
Interestingly, we encounter a pattern when we write the above functions using a point-free style:
code1 = h <=< (morph . g) <=< (morph . f)

code2 = h <=< (morph . (g <=< f))

morph . return = return
This pattern seems so familiar...
map :: (a -> I.Iteratee s m b) -> (a -> E.Iteratee s m b)
map = (morph .)

map (f <=< g) = map f <=< map g  -- Composition law

map return = return              -- Identity law
Oh, I've accidentally defined a functor! This time both the source and destination categories are Kleisli categories and the functor preserves both the composition and identity correctly.

Category theorists have a very specific name for the above pattern: a monad morphism. Specifically, a monad morphism is any function:
morph :: (Monad m, Monad n) => forall r . m r -> n r
... such that map = (morph .) defines a functor between two Kleisli categories:
map :: (Monad m, Monad n) => (a -> m b) -> (a -> n b)
map = (morph .)
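For a tiny self-contained example of a monad morphism, consider the inclusion of Maybe into Either () (my example, not taken from either iteratee library):
-- A monad morphism from Maybe to Either ()
maybeToEither :: Maybe r -> Either () r
maybeToEither Nothing  = Left ()
maybeToEither (Just x) = Right x

-- Identity law:    maybeToEither (return x) = Right x = return x
-- Composition law: maybeToEither (f =<< m)  = maybeToEither . f =<< maybeToEither m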
Also, intermediate Haskell programmers will recognize a subtle variation on this pattern:
lift :: (Monad m, MonadTrans t) => m r -> t m r

(lift .) :: (Monad m, MonadTrans t) => (a -> m b) -> (a -> t m b)

-- Identity law
(lift .) return = return

-- Composition law
(lift .) (f >=> g) = (lift .) f >=> (lift .) g
These are just the monad transformer laws! However, they are usually written in this form:
lift $ return x = return x

lift $ do y <- f x
          g y
= do y <- lift $ f x
     lift $ g y
In other words, monad transformers are a special subset of monad morphisms and the monad transformer laws are just the functor laws in disguise!

Now, every time you use a monad transformer you can appreciate that you are using a functor as an adapter layer between two categories: the base monad's Kleisli category and the transformed monad's Kleisli category.


Conclusion


The functor design pattern embodies a philosophy of programming that emphasizes:
  • compatibility over standardization,
  • specialization over monolithic frameworks, and
  • short-term completion over future-proofing.
However, the above tenets, while popular, haven't completely taken hold because we associate:
  • compatibility with cruft,
  • specialization with fragmentation, and
  • short-term completion with lack of foresight.
In a future post I will discuss how the functor laws mitigate these problems and allow you to layer on as many abstractions over as many tools as you wish without the abstractions collapsing under their own weight.

Functors don't even necessarily need to be within a single programming language. A programmer could use the category design pattern in completely separate programming languages and then use the functor design pattern to bridge components written in one language to another. Please don't limit your imagination to just the examples I gave!

However, the functor design pattern doesn't work at all if you aren't using categories in the first place. This is why you should structure your tools using the compositional category design pattern so that you can take advantage of functors to easily mix your tools together. This is true whether you are programming in one language or several languages. As long as each tool forms its own category and you obey the functor laws when switching between them, you can be very confident that all your tools will mix correctly.

Friday, September 7, 2012

Concurrency = Lists of Kleisli arrows

I spend a lot of time thinking about concurrency and the more I study it the more I discover that concurrency is just a fancy name for merging lists. I'm going to use my Proxy type to demonstrate that this is not just a superficial analogy.


Merging lists


Let's imagine that a thread is just a list of values:
type Thread a = [a]
Each a represents one atomic step in our thread.

Now let's assume we have two such threads and we need to schedule them. The simplest way to schedule them would be to interleave them:
zip :: Thread a -> Thread a -> Thread a
zip [] ys = ys
zip xs [] = xs
zip (x:xs) (y:ys) = x:y:zip xs ys
However, I can think of two obvious problems with this approach. First, it is not associative:
(xs `zip` ys) `zip` zs /= xs `zip` (ys `zip` zs)
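A quick counterexample, using the zip just defined:
([1] `zip` [2]) `zip` [3] = [1,2] `zip` [3] = [1,3,2]
[1] `zip` ([2] `zip` [3]) = [1] `zip` [2,3] = [1,2,3]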
Second, it assumes that all thread actions have equal priority, which probably isn't the case.

We can fix this by switching to cooperative threads which can either provide a value, yield "left" or yield "right":
data Step a = YieldL | Step a | YieldR

type Thread a = [Step a]
Now, we can merge threads in such a way that respects their yields:
(<>) :: Thread a -> Thread a -> Thread a
        []  <>         ys  = []
(YieldL:xs) <>         ys  = YieldL:(xs <> ys)
(Step a:xs) <>         ys  = Step a:(xs <> ys)
(YieldR:xs) <> (YieldL:ys) =        (xs <> ys)
-- From this point onward, xs = YieldR:xs'
        xs  <> (Step a:ys) = Step a:(xs <> ys)
        xs  <> (YieldR:ys) = YieldR:(xs <> ys)
        xs  <>         []  = []
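To see how the merge transfers control, here is a worked reduction with illustrative Step values:
  [Step 1, YieldR] <> [Step 2, YieldR]
= Step 1 : ([YieldR] <> [Step 2, YieldR])   -- left thread steps first
= Step 1 : Step 2 : ([YieldR] <> [YieldR])  -- left yielded right, so right steps
= Step 1 : Step 2 : YieldR : ([YieldR] <> [])
= [Step 1, Step 2, YieldR]                  -- right thread ended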
Interestingly, this new operation is associative:
(xs <> ys) <> zs = xs <> (ys <> zs)
It also has an empty thread which acts like an identity:
mempty = YieldR:YieldL:mempty

mempty <> xs = xs
xs <> mempty = xs

Proxies


While working on Proxys, I discovered they had certain nice mathematical properties:
         return  <-<               g  = return
(respond  >=> f) <-<               g  = respond  >=> (f <-< g)
(lift . k >=> f) <-<               g  = lift . k >=> (f <-< g)
(request  >=> f) <-< (respond  >=> g) =              (f <-< g)
-- For the following equations, f = request >=> f'
              f  <-< (lift . k >=> g) = lift . k >=> (f <-< g)
              f  <-< (request  >=> g) = request  >=> (f <-< g)
              f  <-<          return  = return
Now, where have I seen that before? Why, these are identical to the equations for the above list merge, except we need to make the following substitutions to make the analogy complete:
(>=>)     ->  (++)
return    ->  []
respond   ->  [YieldL]
request   ->  [YieldR]
lift . k  ->  [], [Step a], [Step a, Step a'] ...
(<-<)     ->  (<>)
Well, if those substitutions were correct, we'd expect that we could use them to derive the correct form for idT:
mempty = YieldR:YieldL:mempty
mempty = [YieldR] ++  [YieldL] ++  mempty
idT    = request  >=> respond  >=> idT
... and it works!

This is what I mean when I say that Proxy composition is just merging lists of Kleisli arrows.

Conclusions


I was a little bit skeptical at first when I had to give Proxys an extra input parameter to get them to be composable. However, the surprising connection to lists of Kleisli arrows convinced me that the Kleisli arrow is the true currency of concurrency.

Wednesday, September 5, 2012

pipes-2.3 - Bidirectional pipes

One thing I love about Blogger is the detailed traffic information it provides out of the box. I enjoy seeing what keywords direct people to my blog, and one particular search result came up a lot recently, namely bidirectional pipes. Every time I saw somebody searching for bidirectional pipes I would think to myself "You and me both!" since I've been wanting bidirectional pipes for quite some time now to implement features that users have been requesting.

Well, anonymous googlers, today is your day! I'm releasing pipes-2.3, which introduces a new bidirectional pipe type that I call a Proxy, and I've proven the category laws for Proxy composition.

This blog post is not a proper tutorial but rather a meta-discussion of the release, providing context for people who follow iteratee development, so if you just want to see cool examples, then read the Proxy tutorial over at Control.Proxy.Tutorial.

Also, this post is not technically part of the category theory series that I'm writing, but it does fortuitously tie into it. The Proxy type provides an elegant framework for composing reusable client/proxy/server primitives into powerful applications, so if you started following my blog because of my discussion about compositionality, then I recommend you read the Proxy tutorial.


Generalizing Pipes


The Proxy terminology is built on the client-server metaphor, and if you already understand Pipes the following translations will help you map your Pipe intuition onto Proxy terms:
-- Types
Pipe     -> Proxy

Producer -> Server
Consumer -> Client
Pipeline -> Session

-- commands
await    -> request
yield    -> respond
Clients resemble Consumers, except you replace await with request, which provides an argument to upstream:
myClient () = do
    ...
    answer <- request argument
Servers resemble Producers, except you replace yield with respond. Composition requires a parameter to pass in the first request:
--       +-- 1st request
--       |
--       v
myServer argument = do
    ...
... and every subsequent request is bound to the return value of respond:
myServer argument = do
    x <- computeSomething argument
    -- "respond" binds the next argument
    nextArgument <- respond x
    myServer nextArgument

-- or: myServer = computeSomething >=> respond >=> myServer
I provide the foreverK function which abstracts away this common recursion pattern:
-- i.e. forever 'K'leisli arrow
foreverK f = f >=> foreverK f

myServer = foreverK $ \argument -> do
    result <- computeSomething argument
    respond result

-- or: myServer = foreverK (computeSomething >=> respond)
That looks just like the way you'd write a server's loop: get some argument, compute some result, respond with the result. However, you can do significantly more sophisticated things than just loop.

A Proxy sits between servers and clients. It can query servers on its upstream interface, and respond to clients on its downstream interface:
      | Upstream  | Downstream |
      | interface | interface  |
Proxy   arg1 ret1    arg2 ret2   m r
As with Pipes, the intermediate Proxy type is the unifying compositional type which generalizes the endpoint types. Server and Client are just type synonyms around the Proxy type with one of its two ends closed.
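For intuition, the synonyms might look something like the following sketch, where C stands in for an uninhabited type that seals off one end (see Control.Proxy for the actual definitions):
type Server arg ret m r = Proxy C   ()  arg ret m r  -- upstream end closed
type Client arg ret m r = Proxy arg ret ()  C   m r  -- downstream end closed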

You can then compose as many components as you please into a single Session using composition and then use runSession to convert the results back to the base monad:
runSession $ client <-< proxy_1 <-<  ... <-< proxy_n <-< server
In the following sections, I will motivate this upgrade to bidirectional pipes by providing some examples of trivial problems that have embarrassed the entire iteratee community (myself included) up until now.


Dumb sources


The simplest example is a file reader. Using any iteratee implementation out there, it is very awkward to specify how many bytes you wish to pull from the upstream source on a request-to-request basis. Most implementations either:
  • Hard-code the number of bytes delivered on each request (i.e. conduit/iterIO)
  • Initialize the source with a given buffer size and then fix it from that point onward (i.e. enumerator/iteratee)
Now, there's nothing wrong with hard-coding the size for the read from the file since typically there is an optimum buffer size for disk I/O, but you'd still like to be able to layer another component downstream that can then parcel that out into chunk sizes that the user actually wants.

Unfortunately, the gold standard solution (pushback) is unsatisfactory because it:
  • only solves this narrow use case and does not generalize,
  • cannot push back portions of input without imposing some sort of Monoid restriction on the iteratee type itself, and
  • requires that the user maintain certain invariants to prevent breaking the Category laws.
Wouldn't it be nice if we could just directly tell upstream what we wanted instead of playing all these games? Proxys let you do that through the argument you supply to request.
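For example, here is a sketch of a file reader whose downstream dictates the read size on every request. The Server wiring follows the foreverK idiom from above, and hGet comes from Data.ByteString; a real version would also need to detect EOF and terminate:
import qualified Data.ByteString as B
import System.IO (Handle)

readChunks :: Handle -> Int -> Server Int B.ByteString IO r
readChunks h = foreverK $ \nBytes -> do
    bs <- lift (B.hGet h nBytes)  -- read exactly as many bytes as requested
    respond bs                    -- respond returns the next requested size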


Remote-procedure call


The next example is interfacing with some server. This is a real-world example from my own work. I've written a protein structural search engine and I've set it up as an RPC service: protein structure goes in, a bunch of search results come out. I'd like to write a Pipes interface to this so I can stream the results coming out of the server, but unfortunately I can't. If I tried, I might do something like this:
searchEngine? :: Pipe Structure [Structure] IO r
I can't really accomplish this because Pipes only permit a unidirectional flow of information. I can't both provide the query and receive the results within the same component without resorting to brittle non-compositional tricks like IORefs that defeat the entire point of the iteratee abstraction. However, with Proxys, the solution is incredibly easy:
The input ---------+-------------------+          +- The results
                   |                   |          |
                   v                   v          v
searchEngine :: Structure -> Server Structure [Structure] IO r
searchEngine = foreverK $ \structure -> do
    -- "search" might send a network query to the actual server
    results <- lift $ search structure
    respond results

-- searchEngine = foreverK ((lift . search) >=> respond)
Note that this time the query and response occupy the same interface, rather than two opposing interfaces, so I can now hook up a Client to it that sends in requests and receives responses within the same block of code.

No other iteratee implementation out there can accomplish this. Instead, they restrict us to using blind sources that don't know what downstream actually wants.


Closures


You can also implement imperative-style closures using Proxys. Simply define:
type Closure = Server
... and you are good to go! Consider the Python example from the Wikipedia article on closures:
def counter():
    x = 0
    def increment(y):
        nonlocal x
        x += y
        print(x)
    return increment
We can translate this directly into Proxys:
counter :: Int -> Closure Int () IO r
counter = counter' 0

counter' x y = do
    let x' = x + y
    lift $ print x'
    y' <- respond ()
    counter' x' y'
We can then consume the closure in a structured way using composition:
type Opening = Client -- The opposite of a closure?

useClosure :: () -> Opening Int () IO ()
useClosure () = mapM_ request [1, 7, 1, 1]

main = runSession $ useClosure <-< counter
... or we can manually peel off individual elements from the closure using runFreeT:
pop :: (Monad m)
 => a
 -> (a -> Closure a b m r)
 -> m (Maybe (b, Closure a b m r))
pop y k = do
    f <- runFreeT (k y)
    case f of
        Pure          _  -> return   Nothing
        Free (Yield x c) -> return $ Just (x, c)
Proxy internals are all exposed without compromising any safety, so if you choose not to buy in to the whole composition framework you can always manually deconstruct Proxys by hand and go along your way.


Compositional message passing


As far as I can tell, this is the only bidirectional message passing framework that satisfies the category laws. This guarantees several nice properties:
  • The identity laws enforce that composition of components must be completely transparent.
  • The associativity law guarantees that each component can be written completely context-free.
Unlike most message passing frameworks, Proxys promote component decoupling by structuring message passing through typed interfaces and composing those interfaces to mix and match components. This promotes code reuse and makes it easy to encapsulate complete functionality into single black-box objects instead of exposing a bunch of initialization/push/pull/finalization routines that your user must worry about threading together correctly with every other component.

When you have compositional components, combining them together is as easy as snapping a bunch of legos together.


Extensions


Another motivation for this upgrade is finalization. With the ability to send information back upstream, I can now implement bidirectional finalization using ordinary monads and not indexed monads. This will replace Frames, which I will deprecate and either remove or migrate to a separate library.


Pipe compatibility


Pipes are a strict subset of Proxys, so if you have existing Pipe code you can replace Control.Pipe with Control.Proxy, which provides backwards-compatible definitions for all Pipe primitives, and your previous code will still work.

You can understand the relationship between Pipes and Proxys by checking out the type synonym for Pipes provided by Control.Proxy:
type Pipe a b = Proxy () a () b
In other words, a Pipe is a Proxy that never sends any information upstream when it requests input.
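Concretely, I would expect the Pipe primitives to specialize the Proxy ones along these lines (my sketch, not necessarily the exact definitions in Control.Proxy):
await :: (Monad m) => Pipe a b m a
await = request ()  -- a request that sends no information upstream

yield :: (Monad m) => b -> Pipe a b m ()
yield = respond     -- downstream's next request carries only ()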

There is another advantage of Proxys over Pipes, which is that now it is possible to forbid awaits. The Proxy implementation is highly symmetric and fills a lot of elegance holes that Pipes had.

However, if you love Pipes, never fear, because Control.Pipe will never be deprecated, ever. It provides the simplest iteratee API on Hackage, and I plan to continue to upgrade it with all features compatible with the Pipe type.


Kleisli arrow


One of the surprising results of the bidirectional implementation was that it unifies Kleisli composition and Proxy composition, whose arguments overlap. One thing you will discover the more you program with Proxys is that most useful Proxy components end up being Kleisli arrows and you'll often find that a lot of your code simplifies to the following point-free style:
-- Not that I necessarily recommend writing it this way
((p1 <=< p2 <=< p3) <-< (p4 <=< p5)) <=< (p6 <-< p7)
This isn't a coincidence. A very abstract way to understand Proxy composition is that it is just merging lists of Kleisli arrows in a structured way.


Conclusions


I know in the past I've stated that bidirectional information flow does not form a category, so now I'm publicly eating my own words.

There will be two more releases in the next two months. The first release will provide the first general mechanism for extending Pipes with your own custom extensions and will include error handling and parsing extensions implemented using this approach.

The second release will provide a second way to customize pipes and will include finalization/reinitialization and stack traces implemented using that approach.