If you've ever used Unix pipes, you'll understand the importance and flexibility of composing small reusable programs to get powerful and emergent behaviors. Similarly, if you program functionally, you'll know how cool it is to compose a bunch of small reusable functions into a fully featured program.
Category theory codifies this compositional style into a design pattern, the category. Moreover, category theory gives us a precise prescription for how to create our own abstractions that follow this design pattern: the category laws. These laws differentiate category theory from other design patterns by providing rigorous criteria for what does and does not qualify as compositional.
One could easily dismiss this compositional ideal as just that: an ideal, something unsuitable for "real-world" scenarios. However, category theory backs this ideal up with substance, showing that it appears everywhere and can rise to the challenge of messy problems and complex business logic.
This post is one of many I will write over time demonstrating how to practically use this compositional style in your programs, even for things that seem like they couldn't possibly lend themselves to compositional programming. This first post introduces the category as a compositional design pattern.
I'm going to give a slightly different introduction to category theory than most people give. I'll gloss over the definitions of morphisms and objects, skip domains and codomains, and go straight to composition, because from a programmer's point of view a category is just a compositional design pattern.
Category theory says that for any given category there must be some sort of composition operator, which I will denote (.). The first rule is that this composition operator is associative:
```haskell
(f . g) . h = f . (g . h)  -- Associativity law
```

This is useful because it means we can completely ignore the order of grouping and write it without any parentheses at all:

```haskell
f . g . h
```

Category theory also says that this composition operator must have a left and right identity, which I will denote id. Being an identity means that:

```haskell
id . f = f  -- Left identity law
f . id = f  -- Right identity law
```

The associativity law and the two identity laws are known as the category laws.
Notice that the definition of a category does not define:
- what (.) is,
- what id is, or
- what f, g, and h might be.
The brilliance behind the category design pattern is that any composition operator that observes these laws will be:
- easy to use,
- and free from edge cases.
The function category
Let's define our first category: the category of Haskell functions!
```haskell
id :: (a -> a)
id x = x

(.) :: (b -> c) -> (a -> b) -> (a -> c)
(f . g) x = f (g x)
```

Let's prove to ourselves that these obey the category laws:

```haskell
-- Left identity: id . f = f
id . f
= \x -> id (f x)
= \x -> f x
= f

-- Right identity: f . id = f
f . id
= \x -> f (id x)
= \x -> f x
= f

-- Associativity: (f . g) . h = f . (g . h)
(f . g) . h
= \x -> (f . g) (h x)
= \x -> f (g (h x))
= \x -> f ((g . h) x)
= \x -> (f . (g . h)) x
= f . (g . h)
```

Function composition is very easy to use, yet so powerful, precisely because it forms a category! This lets us express complex transformations simply by composing a bunch of reusable parts:
```haskell
bestLangs :: [Language] -> [Language]
bestLangs = take 3 . sortBy (comparing speed) . filter isCool
```

Unfortunately, we can't express all of our programs as chains of ordinary functions. I guess we just give up, right? Wrong!
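To see the pipeline run end to end, here is a self-contained sketch. The article never defines Language, speed, or isCool, so the record type and the sample data below are hypothetical placeholders:

```haskell
import Data.List (sortBy)
import Data.Ord  (comparing)

-- Hypothetical stand-ins for the types the article leaves abstract
data Language = Language { langName :: String, speed :: Int, isCool :: Bool }
  deriving (Show, Eq)

-- Same pipeline as above: filter, then sort by speed, then keep three
bestLangs :: [Language] -> [Language]
bestLangs = take 3 . sortBy (comparing speed) . filter isCool

sampleLangs :: [Language]
sampleLangs =
  [ Language "A" 3 True
  , Language "B" 1 True
  , Language "C" 2 False  -- not cool, so filtered out
  , Language "D" 2 True
  , Language "E" 4 True
  ]
```

Each stage is an ordinary function, so the whole pipeline is just three reusable parts glued together with (.).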
The Kleisli category
The next most common category we encounter on a daily basis is the category of monadic functions, which generalize ordinary functions:
```haskell
return :: (Monad m) => (a -> m a)
(<=<)  :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
```

Mathematicians call this the "Kleisli" category, and Control.Monad provides both of the above functions.
Notice how the type signatures of return and (<=<) resemble their functional counterparts:
```haskell
id     ::              (a ->   a)
return :: (Monad m) => (a -> m a)

(.)   ::              (b ->   c) -> (a ->   b) -> (a ->   c)
(<=<) :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
```

The implementation for (<=<) also resembles the implementation for function composition:

```haskell
(f .   g) x = f    (g x)
(f <=< g) x = f =<< (g x)
-- Note: (=<<) is the same as (>>=), but with the arguments flipped
```

Not a coincidence! Monadic functions just generalize ordinary functions, and the Kleisli category demonstrates that monadic functions are composable, too. They just use a different composition operator, (<=<), and a different identity, return.
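Here's a small runnable taste of Kleisli composition in the Maybe monad, using the (<=<) from Control.Monad. The names parseNum and safeRecip are hypothetical helpers for this sketch:

```haskell
import Control.Monad ((<=<))
import Text.Read (readMaybe)

-- Two Kleisli arrows in the Maybe monad
parseNum :: String -> Maybe Double
parseNum = readMaybe

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

-- Compose them with (<=<), just like (.) composes ordinary functions;
-- a Nothing at either stage short-circuits the whole pipeline
parseRecip :: String -> Maybe Double
parseRecip = safeRecip <=< parseNum
```

parseRecip "4" yields Just 0.25, while parseRecip "0" and parseRecip "nope" both yield Nothing, without any explicit case analysis at the call site.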
Well, let's assume that category theorists aren't bullshitting us and that (<=<) really is some sort of composition and return really is its identity. If that were true, we'd expect the following laws to hold:
```haskell
return <=< f = f                   -- Left identity
f <=< return = f                   -- Right identity
(f <=< g) <=< h = f <=< (g <=< h)  -- Associativity
```

Well, we already have the definition for (<=<):

```haskell
(f <=< g) x = f =<< (g x)
```

... so let's use that definition to expand those laws:

```haskell
return =<< (f x) = (f x)
f =<< (return x) = f x
(\y -> f =<< (g y)) =<< h x = f =<< (g =<< (h x))
```

If we simplify those a little and use (>>=) to flip the order of arguments, we get:

```haskell
m >>= return = m
return x >>= f = f x
m >>= (\y -> g y >>= f) = (m >>= g) >>= f
```

Look familiar? Those are just the monad laws, which all Monad instances are required to satisfy. If you have ever wondered where those monad laws came from, now you know! They are just the category laws in disguise.
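We can spot-check these three laws at concrete sample points for the Maybe monad. safeHead and double are hypothetical helpers chosen for the check:

```haskell
-- Spot-checking the monad laws for Maybe at sample points
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

double :: Int -> Maybe Int
double x = Just (2 * x)

-- return x >>= f  ==  f x
leftIdentity :: Bool
leftIdentity = (return [1, 2, 3] >>= safeHead) == safeHead [1, 2, 3]

-- m >>= return  ==  m
rightIdentity :: Bool
rightIdentity = (Just (5 :: Int) >>= return) == Just 5

-- (m >>= g) >>= f  ==  m >>= (\y -> g y >>= f)
associativity :: Bool
associativity =
  ((Just [3, 4] >>= safeHead) >>= double)
    == (Just [3, 4] >>= (\y -> safeHead y >>= double))
```

These are checks at single points, not proofs; the Maybe instance does satisfy the laws in general.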
Consequently, every new Monad we define gives us a category for free! Let's try out some of these brave new categories:
```haskell
-- The Kleisli category for the Maybe monad
lookup :: k -> [(k, v)] -> Maybe v
maximumByMay :: (a -> a -> Ordering) -> [a] -> Maybe a

bestSuitor :: [(String, [Suitor])] -> Maybe Suitor
bestSuitor = maximumByMay (comparing handsome) <=< lookup "Tall"

-- The Kleisli category for the list monad
children :: Person -> [Person]

greatGrandChildren :: Person -> [Person]
greatGrandChildren = children <=< children <=< children

-- The Kleisli category for the IO monad
-- (spawn stolen from /r/haskell today)
spawn :: IO a -> IO (IO a)
mapM spawn :: [IO a] -> IO [IO a]
sequence :: [IO a] -> IO [a]

concurrentSequence :: [IO a] -> IO [a]
concurrentSequence = sequence <=< mapM spawn
```

Monads that don't observe these laws are buggy and unintuitive. Don't believe me? Just ask the people who tried to use ListT, which breaks the monad laws.
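The list-monad example above runs as written once we supply a Person type; the tree below is a hypothetical stand-in, since the article leaves Person abstract:

```haskell
import Control.Monad ((<=<))

-- A hypothetical family tree; the article never defines Person
data Person = Person { personName :: String, kids :: [Person] }

children :: Person -> [Person]
children = kids

-- For the list monad, (f <=< g) x is concatMap f (g x),
-- so this walks three generations down in one composed arrow
greatGrandChildren :: Person -> [Person]
greatGrandChildren = children <=< children <=< children

grandma :: Person
grandma = Person "grandma"
  [ Person "a" [ Person "b" [ Person "c" [], Person "d" [] ] ]
  , Person "e" [ Person "f" [ Person "g" [] ] ]
  ]
```

greatGrandChildren grandma collects every third-generation descendant across all branches, with the list monad handling the fan-out for us.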
The Pipe category
Not all categories are functions. I'll use a primitive version of my Pipe type (from the pipes package) with effects removed to simplify the example:
```haskell
data Pipe a b r
  = Pure r
  | Await (a -> Pipe a b r)
  | Yield b (Pipe a b r)

Pure r     <+< _          = Pure r
Yield b p1 <+< p2         = Yield b (p1 <+< p2)
Await f    <+< Yield b p2 = f b <+< p2
p1         <+< Await f    = Await $ \a -> p1 <+< f a
_          <+< Pure r     = Pure r

idP = Await $ \a -> Yield a idP
```

Let's check out what types the compiler infers:

```haskell
idP   :: Pipe a a r
(<+<) :: Pipe b c r -> Pipe a b r -> Pipe a c r
```

Those look an awful lot like an identity and composition. I leave it as an exercise for the reader to prove that they actually do form a category:

```haskell
idP <+< p = p  -- Left identity
p <+< idP = p  -- Right identity
(p1 <+< p2) <+< p3 = p1 <+< (p2 <+< p3)  -- Associativity
```

Pipes show how more complicated things that don't fit neatly into the functional programming paradigm can still be achieved with a compositional programming style. I won't belabor the compositionality of pipes, though, since my tutorial already does that.
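If you want to play with this category yourself, here is the Pipe type restated so the sketch compiles on its own, plus a tiny interpreter. runPipe and mapP are my own hypothetical names, not part of the pipes package:

```haskell
-- The effect-free Pipe from above, restated verbatim
data Pipe a b r
  = Pure r
  | Await (a -> Pipe a b r)
  | Yield b (Pipe a b r)

(<+<) :: Pipe b c r -> Pipe a b r -> Pipe a c r
Pure r     <+< _          = Pure r
Yield b p1 <+< p2         = Yield b (p1 <+< p2)
Await f    <+< Yield b p2 = f b <+< p2
p1         <+< Await f    = Await $ \a -> p1 <+< f a
_          <+< Pure r     = Pure r

idP :: Pipe a a r
idP = Await $ \a -> Yield a idP

-- Apply a function to every value flowing downstream
mapP :: (a -> b) -> Pipe a b r
mapP f = Await $ \a -> Yield (f a) (mapP f)

-- Feed a list of inputs to a pipe, collecting its outputs
runPipe :: [a] -> Pipe a b r -> [b]
runPipe _      (Pure _)    = []
runPipe xs     (Yield b p) = b : runPipe xs p
runPipe (x:xs) (Await f)   = runPipe xs (f x)
runPipe []     (Await _)   = []
```

Composing mapP (+1) <+< mapP (*2) doubles each input upstream and then increments it downstream, and composing with idP on either side leaves a pipe's behavior unchanged, exactly as the identity laws promise.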
So if you find something that doesn't seem like it could be compositional, don't give up! Chances are that a compositional solution exists just beneath the surface!
All category theory says is that composition is the best design pattern, but it leaves it up to you to define precisely what composition means. It's up to you to discover new and interesting ways to compose things besides just composing functions. As long as the composition operator you define obeys the category laws, you're golden.

Also, I'm really glad to see a resurgence in functional programming (since functions form a category), but in the long run we need to think about more interesting composition operators than just function composition if we are serious about tackling more complicated problem domains.
Hopefully this post gets you a little bit excited about category theory. In future posts, I will expand upon this post with the following topics:
- Why the category laws ensure that code is easy, intuitive, and free of edge cases
- How functors let you mix and match different categories
- How to use categories to optimize your code
- How to use categories to simplify equational reasoning