Monday, May 21, 2012

pipes 2.0 - Pipe Finalization

I'm happy to announce that pipes-2.0 now includes an elegant and sound way to finalize resources promptly and deterministically. You can find the latest version of the library here.

Before I continue I want to acknowledge Paolo Capriotti, who contributed a lot of instrumental discussion that led to this solution, particularly for how to manage downstream finalization.

The library introduces a new higher-order data type called a Frame, complete with its own Category, that you can use to solve all your finalization problems. It's layered on top of ordinary Pipes and has plenty of nice properties that fall out of enforcing the Category laws. However, this post is not documentation, so I encourage you to read the expanded tutorial, starting from the "Frames" section, before you continue.

The real topic of this post is a quick and dirty meta-discussion containing a lot of comments on the release that didn't belong in the documentation. I will also touch on issues I grappled with while making pipes finalize resources without violating the Category laws. I'm hoping it will help guide other iteratee libraries, which might have different goals but still deal with the same issues. I will discuss these topics in more depth in future posts, but I felt all of them were worth briefly mentioning right now.


Parametrized Monads


The first big issue is that prompt finalization is inherently not a monad, but rather a parametrized monad. By prompt finalization, I mean finalizing upstream resources before the current pipe is done. What we'd like is some way to write:
do  someCode
    finalizeUpstream
    codeThatCan'tAwait
The problem is that if it's a monad, there is no restriction on the ordering of commands, so nothing prevents the user from calling await after upstream is finalized. With a parametrized monad (a.k.a. an indexed monad) and GADTs, you can solve this problem by marking the finalization point with a phantom type that records that upstream has been finalized. There are actually more elegant ways to do it, but that's the most straightforward way. To learn more about parametrized monads, I highly recommend Dan Piponi's introduction to them.
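To make the phantom-type trick concrete, here is a minimal sketch using a GADT indexed by a Status phantom type. This is a toy illustration, not the pipes-2.0 implementation: the names Status, Await, Finalize, and the String-stream interpreter are all invented for this example.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Toy sketch (NOT the pipes-2.0 implementation): a phantom index tracks
-- whether upstream is still open, so awaiting after the finalization
-- point is a type error instead of a runtime surprise.
data Status = Open | Closed

data P (i :: Status) (j :: Status) a where
  Await    :: P 'Open 'Open String           -- legal only while upstream is open
  Finalize :: P 'Open 'Closed ()             -- marks the finalization point
  Done     :: a -> P i i a
  Bind     :: P i j a -> (a -> P j k b) -> P i k b

-- A tiny interpreter: feeds inputs from a list, threads the remaining
-- inputs, and logs where the upstream finalizer would run.
run :: P i j a -> [String] -> [String] -> ([String], [String], a)
run Await      (x:xs) lg = (xs, lg, x)
run Await      []     lg = ([], lg, "")
run Finalize   xs     lg = (xs, lg ++ ["finalized upstream"], ())
run (Done a)   xs     lg = (xs, lg, a)
run (Bind m k) xs     lg = let (xs', lg', a) = run m xs lg
                           in run (k a) xs' lg'

-- Typechecks: await, then finalize, then stop.
okProgram :: P 'Open 'Closed String
okProgram = Await `Bind` \x -> Finalize `Bind` \_ -> Done x

-- Rejected by the compiler: Await needs the index 'Open, but after
-- Finalize the index is 'Closed.
-- badProgram = Finalize `Bind` \_ -> Await
```

The point of the index pair (i, j) is exactly the parametrized-monad structure: Bind only composes programs whose indices line up, which is what forbids an await after the finalization point.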

However, for the time being I chose not to use parametrized monads and stuck to just splitting it into two separate monads: one for before finalizing upstream and one for after finalizing upstream. The first monad returns the second one as its return value, so you have something that looks like:
Pipe a b m (Producer b m r)
The second block is forbidden from awaiting anything other than (), so you can safely finalize upstream at the transition between the two monads.
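Here is a minimal toy model of that split (illustrative types, not the actual Frame/Ensure machinery): the first phase may await; its result is a second phase that has no await constructor at all, so the interpreter can run the upstream finalizer exactly once at the hand-off.

```haskell
-- Toy model of the two-monad split (NOT the pipes-2.0 API): the first
-- phase may await and yield; its return value is the second phase,
-- which can only yield, so the interpreter knows it is safe to
-- finalize upstream at the transition.
data Before i o r
  = AwaitB (i -> Before i o r)
  | YieldB o (Before i o r)
  | DoneB r

data After o r
  = YieldA o (After o r)
  | DoneA r

-- Run against a list of inputs; the Left marker shows where the
-- upstream finalizer would run.
runFramed :: Before i o (After o r) -> [i] -> ([Either String o], r)
runFramed (AwaitB k)    (x:xs) = runFramed (k x) xs
runFramed (AwaitB _)    []     = error "ran out of input"
runFramed (YieldB o p)  xs     = let (os, r) = runFramed p xs
                                 in (Right o : os, r)
runFramed (DoneB after) _      = let (os, r) = runAfter after
                                 in (Left "finalize upstream" : os, r)

runAfter :: After o r -> ([Either String o], r)
runAfter (YieldA o p) = let (os, r) = runAfter p in (Right o : os, r)
runAfter (DoneA r)    = ([], r)

-- Example: forward two inputs, finalize upstream, then emit a footer.
framed :: Before Int Int (After Int ())
framed =
  AwaitB $ \x -> YieldB x $
  AwaitB $ \y -> YieldB y $
  DoneB (YieldA 99 (DoneA ()))
```

Because After simply has no awaiting constructor, "no awaits after finalization" is enforced by the shape of the types rather than by a phantom index.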

Besides avoiding extensions, there is a more important reason that I haven't released a version of pipes that uses parametrized monads. It turns out you can use them to communicate more complicated session types than just ordinary streams, and I wanted more time to experiment with that generalization of pipes before incorporating it into the library.

The choice not to use parametrized monads complicated the underlying type for Frame slightly, but when I do include parametrized monads, it should clean up the type considerably. What that means for now is that there are two types you switch between, one of which is a Monad (the Ensure type) and the other of which is a Category (the Frame type). Once I figure out the most elegant way to do parametrized monads, they should become the same type. Fortunately, it's not hard to switch between them, and the documentation explains their relationship, especially Control.Pipe.Final, which goes into more depth about the types.

Directionality


Another issue that I grappled with was whether or not bidirectional pipes can possibly form a Category. Bidirectional pipes would have made finalization a lot easier, but unfortunately I could never come up with a solution that formed a proper Category. I can't definitively rule out the possibility of them, but I am reasonably sure that you can't implement them and retain anything that remotely resembles my original Pipe type. That's obviously a vague statement since I can't quantify my notion of what it means to resemble the original Pipe type.

The key breakthrough in designing finalized pipes was to not fight the unidirectionality of pipes at all and instead just "go with the flow" (literally). This means that I had to use two separate tricks to ensure that finalization worked in both directions. This was disconcerting at first, until I noticed a very general dual pattern underlying the two tricks...


Monoids


So let's imagine for a second that I'm right and pipes really must be unidirectional if you want to form a category. Now let's try to figure out how to finalize pipes correctly if a downstream pipe terminates before them. Well, if pipes are unidirectional, there is absolutely no way to communicate back upstream that the pipe terminated. Instead, we just "go with the flow" and do the reverse: every time a pipe yields, it passes its most up-to-date finalizer alongside the yielded value. Downstream pipes are then responsible for calling the finalizers passed to them if they terminate first, before awaiting a new value from the pipe that yielded.

What composition does is remember the last finalizer each frame yielded (defaulting to the empty finalizer for pipes that haven't run yet) and then combine the most current finalizer of each pipe with the most current finalizers of every pipe upstream. So if we had three pipes composed like so:
p1 <-< p2 <-< p3
Then you can make a diagram showing how finalizers are combined automatically by composition behind the scenes:
               p1    <-<     p2   <-<   p3
-----------------------------------------------
               f1            f2         f3
               |             |          |
               v             v          v
<-(f3*f2*f1)-< * <-(f3*f2)-< * <-(f3)-< * <-(1)
... where I'm using (*) to denote mappend (i.e. monoid multiplication), and 1 to denote mempty (i.e. monoid unit). f1 would be the most up-to-date finalizer of pipe p1, f2 would be the most up-to-date finalizer of pipe p2, and f3 would be the most up-to-date finalizer of pipe p3. The monoid in this case is just monad sequencing where (*) = (>>) and 1 = return ().

So let's say that pipe p1 were to close its input end. Composition would then take the collected upstream finalizers coming into pipe p1, which in this case is (f3*f2) = f3 >> f2, and call them since it knows it is safe to finalize them.
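This finalizer monoid is easy to demonstrate directly. The sketch below is a toy, not the library's internals: it wraps IO actions in a newtype whose mappend is (>>) and whose mempty is return (), then simulates p1 closing its input end by running the collected upstream finalizers (f3 * f2), which execute most-upstream-first.

```haskell
import Data.IORef

-- Finalizers form a monoid under sequencing: (*) = (>>), 1 = return ().
newtype Finalizer = Finalizer { runFinalizer :: IO () }

instance Semigroup Finalizer where
  Finalizer f <> Finalizer g = Finalizer (f >> g)

instance Monoid Finalizer where
  mempty = Finalizer (return ())

-- Simulate p1 closing its input end: the finalizers collected from
-- upstream, (f3 * f2 * 1), run with the most upstream pipe first.
demo :: IO [String]
demo = do
  ref <- newIORef []
  let mkFinalizer name = Finalizer (modifyIORef ref (++ [name]))
      f2 = mkFinalizer "close p2"
      f3 = mkFinalizer "close p3"
  runFinalizer (f3 <> f2 <> mempty)
  readIORef ref
```

Running demo yields the order ["close p3", "close p2"]: because (*) = (>>), the fold towards the terminating pipe releases the outermost (most upstream) resource first.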

All of this occurs completely behind the scenes. From the user's perspective, all they do is yield the individual finalizers alongside ordinary output and the process of collecting finalizers and calling them upon termination is completely a black box:
               p1           p2           p3
----------------------------------------------
               f1           f2           f3
               |            |            |
               v            v            v
*******************BLACK**********BOX*********

So the process of collecting upstream finalizers and running them behaves like a monoid under the hood: it folds all the upstream finalizers towards the pipe that is terminating, so that pipe knows exactly what to call to finalize all of them. Where it gets really interesting is that finalizing downstream behaves dually, as a comonoid.


Comonoids


Finalizing downstream can use the ordinary metaphor of exception propagation. It's not quite as trivial as it sounds (and it was in fact the harder of the two halves to get correct, surprisingly), but if you understand exceptions you are 90% of the way towards understanding how to finalize downstream pipes.

Once you get this, it's not hard to see how this behavior is inherently comonoidal. All we have to do is reverse all the arrows in our previous diagram:
          p1  >->   p2  >->   p3
-------------------------------------
          e         e         e 
          ^         ^         ^
          |         |         |
e >-(e)-> * >-(e)-> * >-(e)-> * >-> 1
Now, this time (*) is comultiplication and 1 is counit. Comultiplication consists of splitting off the exceptional value for each pipe to handle, which ensures that no pipe accidentally swallows the exception. Counit discards the exception and doesn't bother to handle it.
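The comonoid side can be sketched just as directly. In this toy model (again, not the library's internals), comult duplicates the exception so each pipe handles its own copy, and counit discards the copy left over at the open end of the pipeline:

```haskell
-- Comultiplication copies the exception; counit discards it.
comult :: e -> (e, e)
comult e = (e, e)

counit :: e -> ()
counit _ = ()

-- Propagate one termination "exception" through a chain of handlers:
-- comult hands each pipe its own copy, so no pipe can swallow it by
-- accident, and the final copy is dropped by counit.
propagate :: e -> [e -> String] -> [String]
propagate e []       = case counit e of () -> []   -- end of the pipeline
propagate e (h : hs) = let (mine, rest) = comult e
                       in h mine : propagate rest hs
```

Every handler in the chain observes the same exception, mirroring the diagram where each pipe splits off its own copy of e.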

Again, the user doesn't see any of this propagation or unfolding and is not responsible for rethrowing the exception. All the user sees is that they await input and get an exception instead. The rest is a black box automatically handled by composition:
          p1  >->   p2  >->   p3
------------------------------------
          e         e         e 
          ^         ^         ^
          |         |         |
*************BLACK******BOX*********

Synthesis


There is another way to think of how this monoid/comonoid duality solves the issue of unidirectionality. For the upstream half of finalization, the finalizer is transmitted to the exception (which is the termination in this case). For the downstream half, the exception (i.e. termination) is transmitted to the finalizer.

If you study the source code, you'll notice that I define composition as follows:
p1 <-< p2 = mult (return ()) p1 <~< comult p2
Here, mult does all the monoidal folding of the finalizers, and comult does the comonoidal unfolding of the exception. (<~<) just simulates the parametrized monad. But where are unit and counit? If you've read the documentation, the answer is that they are right there under your nose in the form of yieldF (which is unit) and awaitF (which is counit). And if you check out the definition of the identity frame, it is just:
idF = forever $ awaitF >>= yieldF
In other words, idF is both the unit for the monoidal fold and the counit for the comonoidal unfold. Diagrammatically, this looks like:
... <-(f)-< * <-(f)-< ...
            ^
            |
            1
-------------------------
...   <-<  idF  <-<   ...
-------------------------
            1
            ^
            |
... <-(e)-< * <-(e)-< ...
... where the top half is the monoid and the bottom half is the comonoid.
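As a sanity check on this picture, here is a toy model (finalizers as pure logs with (*) = (++) and 1 = [], exceptions duplicated by comult) that verifies the two laws the diagram appeals to: a unit contribution leaves the fold unchanged, and discarding one copy after comultiplication is a no-op. This only illustrates the laws; it is not the library's actual proof.

```haskell
-- Toy model: a finalizer is a log of actions, so (*) = (++) and 1 = [].
type Fin = [String]

-- The identity frame contributes the unit to the monoidal fold,
-- leaving the combined finalizer unchanged:
unitLaw :: Fin -> Bool
unitLaw f = (f ++ []) == f && ([] ++ f) == f

-- ... and on the comonoid side it passes the exception through
-- untouched: duplicating and then discarding one copy is a no-op.
comult :: e -> (e, e)
comult e = (e, e)

counitLaw :: Eq e => e -> Bool
counitLaw e = fst (comult e) == e && snd (comult e) == e
```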


Strictness


One important mistake I made in the first release of my library was releasing the Strict category, which wasn't a true category. It's easy to see this because it violates one of the identity laws:
(p >> discard) <+< idP /= p
So that got removed from the library in this release. One thing I realized from countless failed attempts to fix it is that the category of pipes is inherently lazy. However, when I finally solved the finalization issue, I discovered that you can implement strictness anyway, on top of the lazy core, in a way that is very elegant and compositional; I describe this in the tutorial.

The "inherent" laziness of pipes/frames is important for another reason, though. You'll notice that I mentioned the capability to finalize upstream promptly, but I never mentioned being able to finalize downstream promptly before the pipe is done. It turns out this is not possible: in the same way that pipe/frame composition inherently prefers to be lazy, it also prefers to order finalizers from upstream to downstream, precisely because it is lazy.

To see why, just spend a minute thinking about what it would mean to finalize downstream before a frame was done under lazy semantics.


Speed


One big problem is that the finalization machinery now incurs a large overhead. Some naive benchmarks I ran show that for IO the frame overhead takes about as much time as the IO operations it manages. However, I have done absolutely nothing to optimize the implementation (i.e. no Codensity transformation, no INLINE pragmas, no rewrite rules), so if you are good at optimizing Haskell code and are interested in contributing to the pipes library, I could really use your help: improving speed is a major goal for the next patch. However, I might split the faster code into a separate library if it complicates the source code considerably, because I would like to keep the main library as a reference implementation that is easy to understand.


Exceptions


The implementation only covers finalization for now, but the same principles and techniques can be adapted to solve the issue of exceptions as well. For example, while the implementation currently uses Nothing to transmit the termination exception, it can be modified to transmit a value of any kind (although not as trivially as it sounds!). Similarly, the finalizers need not be limited to the base monad, nor do they even need to be finalizers (or even monads!). Really, anything that is a monoid or comonoid can be used. I chose to go with the simplest and most practical implementation that demonstrates the basic underlying principle in order to give other people ideas.


FreeT


FreeT is the monad transformer version of the free monad that I use to refactor the implementation of the Pipe type (and it's quite common in iteratee/coroutine libraries, where it appears under various names). I'm working with Ross to include it in the transformers package, but until that happens I'm rolling it into the pipes library. Once it is added to transformers I will remove it from pipes and use the transformers version, so don't make pipes a dependency just to get FreeT.
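For reference, here is the general shape of FreeT: a sketch of the standard encoding, so the exact pipes-2.0 definition may differ in details. Note that lift . return = return holds by construction, which is precisely the monad transformer law that the old free-monad-over-M encoding violated (as discussed in the comments below).

```haskell
import Control.Monad (liftM)

-- One layer of a free monad: either a final result or a functor layer.
data FreeF f a b = Pure a | Free (f b)

-- The free monad transformer: each layer lives in the base monad m.
newtype FreeT f m a = FreeT { runFreeT :: m (FreeF f a (FreeT f m a)) }

instance (Functor f, Monad m) => Functor (FreeT f m) where
  fmap g (FreeT m) = FreeT (liftM go m)
    where
      go (Pure a) = Pure (g a)
      go (Free w) = Free (fmap (fmap g) w)

instance (Functor f, Monad m) => Applicative (FreeT f m) where
  pure a    = FreeT (return (Pure a))
  mf <*> mx = mf >>= \g -> fmap g mx

instance (Functor f, Monad m) => Monad (FreeT f m) where
  return = pure
  FreeT m >>= k = FreeT $ m >>= \x -> case x of
    Pure a -> runFreeT (k a)
    Free w -> return (Free (fmap (>>= k) w))

-- lift . return = return holds definitionally: both sides reduce to
-- FreeT (return (Pure a)).
lift :: Monad m => m a -> FreeT f m a
lift = FreeT . liftM Pure
```

Because the base monad wraps every layer, lifting a pure action produces exactly the same value as return, instead of the extra Impure/M wrapper that broke the law in pipes 1.0.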


Other Changes


Also, if you haven't been following the library closely, pipes is now BSD-licensed, so feel free to use it as you please.


Conclusion


As with the original pipe composition, I verified that frame composition satisfies the category laws (and this was an extraordinarily tedious and headache-inducing proof). This alone is probably the most significant contribution of this library, as it is the only existing implementation with a finalization mechanism that is:
  • Safe: Finalizers never get duplicated or dropped.
  • Modular: It completely decouples the finalization code of each frame and lets you reason about their finalization properties independently of each other.
  • Scalable: It is very easy to build long pipelines with no increase in complexity at all.
  • Easy: No glue code is required to chain together the finalization mechanisms of pipes.
You don't have to take my word for it, though. Try it out!

6 comments:

  1. Congrats! Re. "Speed" section: I think you should consider leaving more elegant or un-optimized versions of functions/types as comments alongside the optimized versions, as I often see in the base libraries.

    And would it be possible to include some benchmark code in the git repo that you think would be good for people to optimize against?

    1. That's one good option. There is only one other problem I could imagine, namely that optimizations will also compromise portability since it will involve using a lot of GHC-specific extensions, but I think that might be okay since if I switch to parametrized monads I will have to do that anyway.

  2. Could you write a tutorial or two on how to use pipes in a practical use-case? I don't exactly understand where I would use this.

  3. A couple things
    1. This is awesome
    2. Using the free monad transformer is an improvement. It is worth mentioning that Pipes 1.0 was not a monad transformer, since the free monad over the M (m a) data constructor does not form a valid transformer:
    lift $ return x = Impure (M (liftM Pure $ return x)) =/= Pure x = return x
    3. Why not make await/yield part of a type class and get rid of awaitF/yieldF?
    4. A safe way to have optimizations and maintain portability is to use the CPP

    1. Thanks. Yeah, I was already aware of the monad transformer law violation, which is why I switched to `FreeT`. I actually reported this same bug for the `free` library here:

      https://github.com/ekmett/free/issues/3

      ... and I basically stole `FreeT` from Mario Blazevic's `Coroutine` type since I thought he nailed it, except for the name. :)

      There are a couple of reasons I haven't type-classed `await` and `yield`. The first is because there are some situations where you want to mix both `await`/`awaitF` (like the fold example) and `yield`/`yieldF`. When I generalize the solution to general monoids and comonoids, this will be even more apparent. However, the names could probably be improved.

      Also, whenever I write type-classes I like to include laws along with them (so that people who make their own instances can verify they wrote a correct instance, although declaring instances "correct by observational equality" seems to be all the rage these days). So I don't actually mind doing this, but I would have to first spend some time thinking about what laws it must satisfy.

      In this case, I'm warming up to optimizations more and more since they work amazingly well (especially rewrite rules), so I will definitely try out the preprocessor suggestion.
