Thursday, February 14, 2013

Comonads are objects

Haskell programmers popularized the use of monads to structure imperative computations, complete with syntactic sugar in the form of do notation. However, category theorists predict that there is also a completely symmetric concept called a "comonad", which tickles our fancy. After all, if monads are so intriguing, then perhaps comonads are equally intriguing, and who doesn't like twice the intrigue?

However, there's nothing intriguing about comonads. In fact, comonads are so pervasive in modern programming that they need no introduction. I hope to show that you very likely already employ a comonadic programming style, although you probably prefer to call it "object-oriented programming".

In this post I will build the metaphor of comonads as objects and also introduce syntactic sugar for comonads that strongly motivates this object-oriented interpretation.


The Builder Pattern


Let's try to reimplement the object-oriented builder design pattern in Haskell. For example, let's say that we have a Config value we want to build that just stores a list of program Options:
type Option = String

data Config = MakeConfig [Option] deriving (Show)
We want to encapsulate the Config and prevent the user from disassembling it, so we don't export the MakeConfig constructor to prevent the user from pattern matching on the constructor. Instead, we export a function in place of the constructor so that the user can still create values of type Config:
configBuilder :: [Option] -> Config
configBuilder options = MakeConfig options

-- or: configBuilder = MakeConfig
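In a real module, this encapsulation is just a matter of the export list. A minimal sketch, assuming we put the code in a hypothetical Config module:
module Config (
    Config,        -- Export the type ...
    configBuilder  -- ... but not the MakeConfig constructor
    ) where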
We could also preset certain options ahead of time for the user:
defaultConfig :: [Option] -> Config
defaultConfig options = MakeConfig (["-Wall"] ++ options)
These functions both have the same type: they take a list of Options and return a Config, so let's call any function of this type a "builder":
someBuilder :: [Option] -> Config
These builders work fine if the user can supply all the options up front, because then we can set the options in one shot:
profile :: ([Option] -> Config) -> Config
profile builder = builder ["-prof", "-auto-all"]
>>> profile defaultConfig
MakeConfig ["-Wall","-prof","-auto-all"]
However, our profile option setter uses up our builder and we cannot supply any further options. For example, I might have another set of options to supply:
goFaster :: ([Option] -> Config) -> Config
goFaster builder = builder ["-O2"]
... but I wouldn't be able to use both profile and goFaster on the same builder.

In object-oriented programming you don't have this problem, because you usually structure your builder to append options instead of setting them so that you can feed in additional options later. Then you extract the final configuration when you are done:
builder = new DefaultConfig();  // Begin from default config file
builder.profile();              // Add profiling
builder.goFaster();             // Add optimization
config = builder.extract();     // Extract final config file
However, Haskell has no implicit state, so we cannot mutate the original builder like the object-oriented version does. Even if we could, it's not clear where we would store additional options since our builder is a function, not a record with fields.

Instead, we must cheat and modify our original functions to return a new builder that "precompiles in" the given options:
profile'  :: ([Option] -> Config) -> ([Option] -> Config)
profile' builder =
    \options -> builder (["-prof", "-auto-all"] ++ options)

goFaster' :: ([Option] -> Config) -> ([Option] -> Config)
goFaster' builder =
    \options -> builder (["-O2"] ++ options)
We transformed our option setters into option appenders that leave our builder open just like the object-oriented version. Additionally, we need some sort of extract function that completes the process and returns the final configuration. That's easy since we can flush the builder by supplying it with an empty option list:
extract :: ([Option] -> Config) -> Config
extract builder = builder []
Finally, we would like to emulate the post-fix function application of object-oriented programming, so we define the following infix operator that simulates calling a method on an object:
-- (&) is another popular name for this operator
(#) :: a -> (a -> b) -> b
x # f = f x

infixl 0 #
Using our option appenders and extract, we can make a mockery of object-oriented style:
>>> let builder1 = defaultConfig # profile'
>>> let builder2 = builder1 # goFaster'
>>> builder2 # extract
MakeConfig ["-Wall","-prof","-auto-all","-O2"]
However, going back and upgrading all possible option setters into option appenders is a painstaking and laborious process, especially if we already defined a lot of them. Haskell is a functional language, though, so could we just write a function to automate this upgrade process?

In other words, we already wrote some option setter that closes the builder when done:
setter :: ([Option] -> Config) -> Config
... but we want to extend that setter to behave like an appender that keeps the builder open:
extend setter :: ([Option] -> Config) -> ([Option] -> Config)
This means that our extend function must have the following overall type:
extend :: (([Option] -> Config) ->              Config )
       ->  ([Option] -> Config) -> ([Option] -> Config)
You might expect that this isn't possible. After all, option setters seem to irreversibly use up the builder. However, we can actually cheat by intercepting the setter before applying it to the builder to reserve space for future options:
extend setter builder =
    \opts2 -> setter (\opts1 -> builder (opts1 ++ opts2))
For example, if setter were our goFaster function, then extend would transform it into goFaster':
extend goFaster builder
= \opts2 -> goFaster (\opts1 -> builder (opts1 ++ opts2))
= \opts2 -> builder (["-O2"] ++ opts2)
= goFaster' builder

-- or: extend goFaster = goFaster'
Neat!
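As a sanity check, a hypothetical GHCi session (assuming the definitions above) confirms that extending the original setters reproduces the appender pipeline from before:
>>> extract (extend goFaster (extend profile defaultConfig))
MakeConfig ["-Wall","-prof","-auto-all","-O2"]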

However, all we've proven is that extend works for our goFaster function. How could we prove that it always works in every possible case? We might try to write some tests, perhaps by defining some example option setters, upgrading them to appenders, and simulating a few build processes, but these tests would only prove that extend works in those few cases that we tested. Could we instead prove that extend does the right thing in every conceivable scenario? Is that even possible?

Fortunately, Haskell's enforced purity unlocks a more powerful and more general alternative to testing: equational reasoning. We simply define the expected behavior of our extend function in terms of equations and prove that extend satisfies those equations. But what might that expected behavior be?

Well, we expect that if we extend a setter to become an appender, apply it to a builder, and then immediately extract the result, it should behave as if we had directly used the setter. The equation for that would be:
extract (extend setter builder) = setter builder
In other words, there's no point in extending a setter if we plan to extract the result immediately afterwards.

We can prove our equation because:
extract (extend setter builder)

-- Use definition of extend
= extract (\opts2 -> setter (\opts1 -> builder (opts1 ++ opts2)))

-- Use definition of extract
= (\opts2 -> setter (\opts1 -> builder (opts1 ++ opts2))) []

-- Apply the function
= setter (\opts1 -> builder (opts1 ++ []))

-- (opts1 ++ []) = opts1
= setter (\opts1 -> builder opts1)

-- (\opts1 -> builder opts1) = builder
= setter builder
Perfect!

We might also expect that if we extend the empty option setter (i.e. extract), the resulting appender should not change the builder since extract doesn't supply any options:

Exercise: Prove that:
extend extract builder = builder
There's one last property that's really subtle, but important. If we choose not to extend the second of two setters, we can always go back and extend the second setter later as long as we haven't applied them to the builder yet:

Challenge exercise: Prove that:
s1, s2 :: ([Option] -> Config) -> Config  -- setters
b , b' ::  [Option] -> Config             -- builders

extend (\b' -> s2 (extend s1 b')) b
= extend s2 (extend s1 b)
Interestingly, these three equations suffice to completely describe what it means to convert an option setter into an appender. We can formulate any other property in terms of those three properties. This means that we no longer need to test our function at all because we've proven it correct in all possible circumstances.

For example, we might suppose that if we choose not to extend the third of three setters, we can always go back and extend the third setter later on as long as we haven't applied them to the builder yet:
extend (\b' -> s3 (extend s2 (extend s1 b'))) b
= extend s3 (extend s2 (extend s1 b))
We can derive that property from the proof of the two-setter case:
extend (\b' -> s3 (extend s2 (extend s1 b'))) b

-- Apply proof in reverse to the two inner stages
= extend (\b' -> s3 (extend (\b'' -> s2 (extend s1 b'')) b')) b

-- Apply proof to the two outer stages
= extend s3 (extend (\b'' -> s2 (extend s1 b'')) b)

-- Apply proof to the two inner stages
= extend s3 (extend s2 (extend s1 b))
In fact, the proof for the two-setter case generalizes to any number of stages. In other words, if we choose not to extend the last setter in an arbitrarily long chain, we can always go back and extend the last setter later if we haven't applied them to the builder yet.

You might be surprised to learn that we've inadvertently defined our first comonad in our pursuit of simulating object oriented programming, but it might not be clear what exactly a comonad is or why those three equations are the only equations we need to prove.


The Iterator Pattern


Now let's imagine that we're writing a new terminal shell, complete with command line history. We could model history as an infinite iterator that stores events in reverse chronological order:
data Iterator a = a :< (Iterator a)

infixr 5 :<
The history would start off empty, which we model as an infinite stream of empty commands:
initialHistory :: Iterator String
initialHistory = "" :< initialHistory
... however as the user types in more commands, we expect the history to grow:
exampleHistory :: Iterator String
exampleHistory =
       "^D"
    :< "^C"
    :< "eat flaming death"
    :< "hello?"
    :< "bye"
    :< "exit"
    :< "quit"
    :< "?"
    :< "help"
    :< "ed"
    :< initialHistory
We want a structured way to browse history, so we decide to use the object-oriented iterator pattern. We must first define a function that extracts the command below the current cursor:
extract :: Iterator a -> a
extract (cmd :< _) = cmd
>>> extract exampleHistory
"^D"
Now we need a command to get the next element in the iterator (which is the previous user command, since we're storing commands in reverse chronological order):
next :: Iterator a -> a
next (_ :< (cmd :< _)) = cmd
>>> next exampleHistory
"^C"
But what if we want to advance another step? We can't just write:
>>> next (next exampleHistory)  -- Type error!
Our next command uses up our Iterator, so we cannot proceed to the next step.

Fortunately, we are not stupid. We can define a command which shifts the iterator by one step without returning a value:
next' :: Iterator a -> Iterator a
next' (_ :< iterator) = iterator
Now we can get the value two spaces after the cursor by combining a shift and a retrieval:
next2 :: Iterator a -> a
next2 iterator = next (next' iterator)
... which should also be equivalent to shifting twice and extracting the value below the cursor:
next2 iterator = extract (next' (next' iterator))
The latter form starts to resemble an object-oriented approach. Let's humor ourselves and pretend we are object-oriented programmers for a moment:
>>> let history1 = exampleHistory # next'
>>> let history2 = history1 # next'
>>> history2 # extract
"eat flaming death"
Hmmm. This would greatly resemble object-oriented style if we didn't have to pass around intermediate states manually...

What if we define a new retrieval that looks ahead three values?
next3 :: Iterator a -> a
next3 iterator = next (next' (next' iterator))
Then we'd have to duplicate our function and make a minor change to convert it into a shift:
next3' :: Iterator a -> Iterator a
next3' iterator = next' (next' (next' iterator))
Duplication with minor changes is a code smell. I would like these two functions to remain consistent such that next3' always shifts to the value that next3 points to, but when I duplicate the code I must manually enforce that consistency between them. The "Don't Repeat Yourself" principle says you should not duplicate code precisely to avoid this problem.

Moreover, even the initial duplication could introduce mistakes, such as accidentally deleting one step while modifying the code:
next3' :: Iterator a -> Iterator a
next3' iterator = next' (next' iterator)
-- Oops!  I deleted the next and my boss
-- distracted me in the middle of the process
-- so I forgot to replace it with next'
We'd prefer to automate the conversion process both to avoid introducing errors and to enforce consistency between the two functions. In other words, I need a function that takes some sort of retrieval:
retrieval :: Iterator a -> a
... and extends the retrieval to become the equivalent shift:
extend retrieval :: Iterator a -> Iterator a
... which means that our extend function has type:
extend :: (Iterator a ->          a)
       ->  Iterator a -> Iterator a


Exercise: Write the extend function.


Solution:


extend f (a :< as) = (f (a :< as)) :< (extend f as)

-- or using Haskell "as-patterns"

extend f iter@(_ :< as) = (f iter) :< (extend f as)
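Before reaching for a proof, we can sanity-check the solution in a hypothetical GHCi session, using the definitions above:
>>> extract (extend next (extend next exampleHistory))
"eat flaming death"
>>> next2 exampleHistory
"eat flaming death"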
It's not as straightforward to prove that extend next is equivalent to next'. We must settle for proving that they produce the same stream of values:
extend next (a0 :< as0@(a1 :< _))

= next (a0 :< as0@(a1 :< _)) :< extend next as0

= a1 :< (extend next as0)


-- Now do the same for next'
next' (a0 :< as0@(a1 :< _))

= as0

= a1 :< (next' as0)

-- Both functions are of the form:
f (a0 :< as0@(a1 :< _)) = a1 :< f as0
-- where f is (extend next) or (next')

-- Therefore we tentatively conclude they are the same stream
extend next iterator = next' iterator

-- or just:
extend next = next'
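We can also compare finite prefixes of the two streams directly, using a hypothetical takeIter helper that is not part of the API:
-- takeIter is a hypothetical testing helper
takeIter :: Int -> Iterator a -> [a]
takeIter 0 _         = []
takeIter n (a :< as) = a : takeIter (n - 1) as

>>> takeIter 3 (extend next exampleHistory)
["^C","eat flaming death","hello?"]
>>> takeIter 3 (next' exampleHistory)
["^C","eat flaming death","hello?"]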
Boy, that's a lot of effort to prove that extend upgrades one retrieval correctly. I'd prefer to pick my proofs wisely so that I can prove that extend upgrades all retrievals correctly. But what proofs suffice to nail down what it means to convert a retrieval into a shift?

Well, I expect that if I upgrade a retrieval into a shift, apply the shift, then extract the value under the cursor, I should get the same value as if I had just used the retrieval directly:

Exercise: Prove that:
extract (extend retrieval iterator) = retrieval iterator
This proof is simple and requires no fancy tricks.

Since extract points to the current location, I expect the equivalent shift to not move the iterator at all:
extend extract iterator = iterator
I'll do this one:
extend extract (a :< as)

= (extract (a :< as)) :< (extend extract as)

= a :< (extend extract as)


-- Haskell's 'id' function produces the same stream:
id a = a

id (a :< as)

= a :< (id as)


-- Both functions are of the form:
f (a :< as) = a :< (f as)
-- where f is (extend extract) or (id)

-- Therefore we tentatively conclude they are the same stream:
extend extract iterator = id iterator = iterator

-- or just:
extend extract = id
Again, there's one last non-trivial property. If I choose not to extend the second of two retrievals, I can still extend the second retrieval later if I haven't applied both of them to an iterator yet:

Challenge Exercise: Prove that:
r1, r2 :: Iterator a -> a  -- retrievals
i , i' :: Iterator a       -- iterators

extend (\i' -> r2 (extend r1 i')) i
= extend r2 (extend r1 i)
Notice how we arrived at the exact same three equations, despite solving two completely unrelated problems. Something very deep seems to lie just beneath the surface.


The Command Pattern


Now imagine that we are trying to build a small DSL for manipulating a thermostat. The thermostat keeps an internal floating point temperature in Kelvin, which it presents to the user as a string representation in Celsius. The thermostat comes with a simple API:
  • Increment the temperature by one degree
  • Decrement the temperature by one degree
This is the classic textbook example of object-oriented design, where we have some object that maintains state along with methods on that object. Unfortunately, we're gluttons for punishment and choose to implement this solution in Haskell, one of the most decidedly anti-object-oriented languages in existence.

We choose to model our thermostat as a pair of values:
  • The internal representation of temperature, in Kelvin
  • A function that reads out the temperature into various representations
The Haskell type for that would be:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Use a newtype so we don't accidentally mix different
-- representations of temperature.  Deriving Num lets us add 1
-- to a Kelvin directly:
newtype Kelvin = Kelvin { getKelvin :: Double } deriving (Num)

-- The thermostat type:
(Kelvin, Kelvin -> a)
-- 'a' is the readout type, which may vary
We can define an initial state for our thermostat that begins with a readout that converts the internal temperature in Kelvin to Celsius:
newtype Celsius = Celsius { getCelsius :: Double }
    deriving (Show)

kelvinToCelsius :: Kelvin -> Celsius
kelvinToCelsius (Kelvin t) = Celsius (t - 273.15)

initialThermostat :: (Kelvin, Kelvin -> Celsius)
initialThermostat = (Kelvin 298.15, kelvinToCelsius)
We can always query the readout at any time using a function called extract:
extract :: (Kelvin, Kelvin -> a) -> a
extract (t, f) = f t
>>> extract initialThermostat
Celsius {getCelsius = 25.0}
Our client is not the sharpest crayon in the box, so we have to also provide an API for previewing changes in temperature before we make them:
up :: (Kelvin, Kelvin -> a) -> a
up (t, f) = f (t + 1)

down :: (Kelvin, Kelvin -> a) -> a
down (t, f) = f (t - 1)
>>> up initialThermostat
Celsius {getCelsius = 26.0}
>>> down initialThermostat
Celsius {getCelsius = 24.0}
Similarly we can define a function to convert the current temperature into a user-friendly string:
toString :: (Kelvin, Kelvin -> Celsius) -> String
toString (t, f) = show (getCelsius (f t)) ++ " Celsius"
>>> toString initialThermostat
"25.0 Celsius"
But what if I want to convert the previews into user-friendly strings? This won't type-check:
>>> toString (up initialThermostat)  -- Type error
Instead, we must define adjustment functions that modify the internal temperature of the thermostat:
up' :: (Kelvin, Kelvin -> a) -> (Kelvin, Kelvin -> a)
up' (t, f) = (t + 1, f)

down' :: (Kelvin, Kelvin -> a) -> (Kelvin, Kelvin -> a)
down' (t, f) = (t - 1, f)
Now we can simulate an object-oriented API for interacting with the thermostat:
>>> let thermostat1 = initialThermostat # up'
>>> let thermostat2 = thermostat1 # up'
>>> toString thermostat2
"27.0 Celsius"
However, there's a potential source of bugs in this approach. When a user selects a given preview, nothing requires us to apply an adjustment that matches the preview. We can only enforce this by being sufficiently diligent programmers. Wouldn't it be nice, though, if we could eliminate an entire class of bugs by automatically converting the preview into the correct corresponding adjustment?

In other words, we have a preview function:
preview :: (Kelvin, Kelvin -> a) -> b
... and we want to extend it to behave like the corresponding adjustment function:
extend preview :: (Kelvin, Kelvin -> a) -> (Kelvin, Kelvin -> b)
... which means that extend has type:
extend :: ((Kelvin, Kelvin -> a) ->                    b)
       ->  (Kelvin, Kelvin -> a) -> (Kelvin, Kelvin -> b)


Exercise: Write the extend function.


Solution:


extend preview (t, f) = (t, \t' -> preview (t', f))
If we apply extend to up, it should give us up':
extend up (t, f)

-- Apply definition of 'extend'
= (t, \t' -> up (t', f))

-- Apply definition of 'up'
= (t, \t' -> f (t' + 1))
Wait, what's this? This doesn't match the definition of up' at all:
up' (t, f) = (t + 1, f)
Which one is the "right" version? Well, we can decide which implementation is the correct one by specifying the desired behavior in the form of equations, and then seeing which implementation satisfies the equations.

We first expect that if you extract the result of an adjustment, it should match the corresponding preview.

Exercise: Prove that:
extract (extend preview thermostat) = preview thermostat
Both extend up and up' satisfy this criterion, so we can't yet say which one is correct.

We also expect that if you extend a preview of the current state (i.e. extract) to become an adjustment, then this adjustment should do nothing.

Exercise: Prove that:
extend extract thermostat = thermostat
This criterion doesn't apply to any previews other than extract, so we can't use it to select between extend up and up'.

This leaves us with our final equation: if we combine an adjustment with a preview and extend the combination into a single adjustment, the result should be identical to applying the original adjustment first and then extending the preview alone.

Challenge Exercise: Prove that:
p1, p2 :: (Kelvin, Kelvin -> a) -> b
t , t' :: (Kelvin, Kelvin -> a)

extend (\t' -> p2 (extend p1 t')) t
= extend p2 (extend p1 t)
This proves to be the acid test that eliminates our original implementation of up'.

Challenge Exercise: Prove that our original implementation of up' does not play nice with extend. In other words, find a counter-example that proves the following equation false:
extend (\t' -> p (up' t')) t = extend p (up' t)
Challenge Exercise: What bug does the above counter-example predict may happen when you mix extend with up'?


Comonads


At this point, we've tackled three separate object-oriented problems, and in all three of them we defined an extract and an extend function. These two functions always seem to obey the same set of equations, despite having very different types in each case.

However, we can squint a little bit and realize that the types are not as different as they initially seem. Define:
type Builder a = [Option] -> a

-- Keep 'Iterator' as is

type Thermostat a = (Kelvin, Kelvin -> a)
Using these type synonyms, we can rewrite the type signature for all three extract functions:
extract :: Builder a -> a

extract :: Iterator a -> a

extract :: Thermostat a -> a
Also, it turns out that the type signatures for our extend functions were too specialized. We could actually generalize them to the following more polymorphic types with no change to the code:
extend :: (Builder a -> b) -> Builder a -> Builder b

extend :: (Iterator a -> b) -> Iterator a -> Iterator b

extend :: (Thermostat a -> b) -> Thermostat a -> Thermostat b
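To see that the code really doesn't change, here are the three definitions again at their generalized types. The distinct names are hypothetical, chosen only so the three can coexist in one file:
-- (hypothetical names; the bodies are unchanged from before)
extendBuilder :: (Builder a -> b) -> Builder a -> Builder b
extendBuilder setter builder =
    \opts2 -> setter (\opts1 -> builder (opts1 ++ opts2))

extendIterator :: (Iterator a -> b) -> Iterator a -> Iterator b
extendIterator f iter@(_ :< as) = f iter :< extendIterator f as

extendThermostat :: (Thermostat a -> b) -> Thermostat a -> Thermostat b
extendThermostat preview (t, f) = (t, \t' -> preview (t', f))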
In each case, we defined some type constructor w and two functions:
extract :: w a -> a

extend :: (w a -> b) -> w a -> w b
The combination of w, extract, and extend is a comonad. The equations we've been proving are known as the "comonad laws".
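This interface matches the core of the Comonad type class from the comonad package, which additionally requires a Functor instance. A minimal sketch, with Iterator as an example instance (Builder and Thermostat are type synonyms, so they would need newtype wrappers to get instances):
class Functor w => Comonad w where
    extract :: w a -> a
    extend  :: (w a -> b) -> w a -> w b

instance Functor Iterator where
    fmap f (a :< as) = f a :< fmap f as

instance Comonad Iterator where
    extract (a :< _)        = a
    extend f iter@(_ :< as) = f iter :< extend f as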


Comonad laws


Why do we always seem to come across the same set of equations each time? Well, we can get a clue if we define a derived operator in terms of extend:
(f =>= g) w = g (extend f w)
This operator has the nice property that extract is both its left and right identity:
extract =>= f = f

f =>= extract = f
Also, this operator is associative:
(f =>= g) =>= h = f =>= (g =>= h)
In other words, comonads form a category, specifically a "CoKleisli" category, where (=>=) is the associative composition operator for the category and extract is the identity of this composition. The comonad laws are nothing more than the category laws in disguise.

If you take the last three equations and substitute in the definition of (=>=), you will derive the same three comonad laws we have repeatedly proven.
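Here is that substitution spelled out, using (f =>= g) w = g (extend f w):
-- Right identity:
(f =>= extract) w = extract (extend f w)
-- ... so (f =>= extract) = f is the law:
--     extract (extend f w) = f w

-- Left identity:
(extract =>= f) w = f (extend extract w)
-- ... so (extract =>= f) = f is the law:
--     extend extract w = w

-- Associativity:
((f =>= g) =>= h) w = h (extend (\w' -> g (extend f w')) w)
(f =>= (g =>= h)) w = h (extend g (extend f w))
-- ... so associativity is the law:
--     extend (\w' -> g (extend f w')) w = extend g (extend f w)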


Object-oriented programming


Every time we try to implement object-oriented programming in Haskell we gravitate, inexorably, to modeling objects as comonads. This suggests that object-oriented programming and comonadic programming are one and the same. Haskell programmers have struggled with comonads because we so militantly rejected object-oriented programming.

This means we should approach object-oriented programming with humility instead of disdain and borrow object-oriented insight to devise a suitable syntactic sugar for programming with comonads.


Syntactic sugar for comonads


The object-oriented intuition suggests a syntactic sugar for comonads, which I call method notation.

Given the following method:
method
    wa> expr1
    wb> expr2
    wc> expr3
... this desugars to:
   \wa ->
let wb =      extend (\this -> expr1) wa
    wc =      extend (\this -> expr2) wb
 in extract $ extend (\this -> expr3) wc
We can simplify the last line because of the comonad laws:
   \wa ->
let wb =      extend (\this -> expr1) wa
    wc =      extend (\this -> expr2) wb
 in                  (\this -> expr3) wc
In other words, method notation extends every function except the last one. This is also equivalent to extending all of them and extracting the result.

You can also leave out the bound variables on the left, which simply does not bring them into scope:
method
    expr1
    expr2
    expr3
=
   \_wa ->
let _wb =      extend (\this -> expr1) _wa
    _wc =      extend (\this -> expr2) _wb
in  extract  $ extend (\this -> expr3) _wc
Using the latter form, the configuration file example becomes:
config :: Config
config = defaultConfig # method
    this # profile   -- no apostrophes, these are setters
    this # goFaster
This desugars to:
config = defaultConfig # \_b0 ->
    let _b1 =     extend (\this -> this # profile ) _b0
    in  extract $ extend (\this -> this # goFaster) _b1

-- equivalent to:
config = defaultConfig # \_b0 ->
    let _b1 = extend profile _b0
     in goFaster _b1

-- which reduces to:
config = goFaster (extend profile defaultConfig)
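Evaluating the reduced form in a hypothetical GHCi session recovers exactly the configuration that the primed appenders produced earlier:
>>> config
MakeConfig ["-Wall","-prof","-auto-all","-O2"]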
Notice how it looks like we are declaring an anonymous method and immediately invoking it on the defaultConfig object. Also, we only use setters and yet the method does "the right thing" and does not extend the last setter, as if we were returning the last line directly.

The iterator example works equally well:
next3 :: Iterator a -> a
next3 = method
    this # next  -- Move one step forward
    this # next  -- Move another step forward
    this # next  -- Return the next value

-- desugars to:
next3 =
      \_i0 ->
    let i1 =      extend (\this -> this # next) _i0
        i2 =      extend (\this -> this # next)  i1
     in extract $ extend (\this -> this # next)  i2

-- which reduces to:
next3 = \i -> next (extend next (extend next i))
Here we are declaring next3 as if it were part of a class definition. method notation implicitly makes it a function of the corresponding object and brings this into scope, which always implicitly refers to the current state of the object.

If we want to refer to previous states, we just bring the intermediate steps into scope using the "prompt" notation:
next123 :: Iterator a -> [a]
next123 = method
        this # next
    i1> this # next
    i2> this # next
    i3> [i1 # extract, i2 # extract, i3 # extract]

-- desugars to:
next123 =
      \_i0 ->
    let i1 =      extend (\this -> this # next) _i0
        i2 =      extend (\this -> this # next)  i1
        i3 =      extend (\this -> this # next)  i2
     in extract $ extend (\this ->
            [i1 # extract, i2 # extract, i3 # extract]) i3

-- which reduces to:
next123 = \_i0 ->
    [ next _i0
    , next (extend next _i0)
    , next (extend next (extend next _i0))
    ]
The above method returns the next three elements of the iterator, using the intermediate steps in the traversal. If we want to call it, we use it just like an object-oriented method:
>>> exampleHistory # next123
["^C","eat flaming death","hello?"]
Moreover, methods automatically do "the right thing" if you use them within other methods:
next123456 :: Iterator a -> [a]
next123456 = method
          this # next123
    w123> this # next123
    w456> (w123 # extract) ++ (w456 # extract)

-- desugars to:
next123456 =
       \_w0 ->
    let w123 =    extend (\this -> this # next123) _w0
        w456 =    extend (\this -> this # next123) w123
     in extract $ extend (\this ->
            (w123 # extract) ++ (w456 # extract)) w456

-- which reduces to:
next123456 = \_w0 ->
    next123 _w0 ++ next123 (extend next123 _w0)
>>> exampleHistory # next123456
["^C","eat flaming death","hello?","bye","exit","quit"]
We don't have to carefully keep track of whether or not we should use them as accessors or "mutators". method notation automatically gets that correct for us.

Notice the duality with do notation and monads. With monads, unwrapped variables are scarce since we can irreversibly convert them to wrapped values, so do notation keeps them in scope as long as possible. With comonads, wrapped variables are scarce since we can irreversibly convert them to unwrapped values, so method notation keeps them in scope for as long as possible.

Our thermostat example looks just like the classic object-oriented example using method notation, except we don't need to say return at the end as method notation always implicitly returns the value of the last line:
up2 :: Thermostat Celsius -> String
up2 = method
    this # up
    this # up
    this # toString

-- desugars to:
up2 =  \_th0 ->
    let _th1 =    extend (\this -> this # up      ) _th0
        _th2 =    extend (\this -> this # up      ) _th1
     in extract $ extend (\this -> this # toString) _th2

-- which reduces to:
up2 = \_th0 -> toString (extend up (extend up _th0))
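A hypothetical GHCi run of the reduced form shows the expected two-degree bump:
>>> initialThermostat # up2
"27.0 Celsius"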
You can even return the object itself, just by returning this:
up2' :: Thermostat Celsius -> Thermostat Celsius
up2' = method
    this # up
    this # up
    this

up2 :: Thermostat Celsius -> String
up2 th = toString (th # up2')
If we inline the definition of up2' back into up2, we get:
up2 w = toString (w # method
    this # up
    this # up
    this)
Any function we apply to an invoked method can be converted to a post-fix application:
toString (w # method
    this # up
    this # up
    this)
= (w # method
    this  # up
    this  # up
    this) # toString
Observe how suggestive the notation is. When we use post-fix syntax, we can just remove the parentheses and it is still correct:
(w # method
    this  # up
    this  # up
    this) # toString
= w # method
    this  # up
    this  # up
    this  # toString
The post-fix application style plays incredibly nicely with method syntax, even though I never designed it for that purpose. This suggests that the object-oriented tendency towards post-fix function application style falls out naturally from the comonadic style and is not just a quirk of object-oriented tradition.


Comonad laws, revisited


The acid test of syntactic sugar is that the notation promotes an intuition for the comonad laws. Let's write the comonad laws using method notation:
extract (extend f w) = f w

w # method          = w # method
    this # f              this # f
    this # extract
This law says that there's no need to "re-extract" the current context at the end. method notation already does that for you.
f (extend extract w) = f w

w # method          = w # method
    this # extract        this # f
    this # f
All extract commands are no-ops within a method block since they point to our current location.

This leaves us with the final comonad law:
  h (extend (\w' -> g (extend f w')) w)
= h (extend g (extend f w))
I'll do the intermediate translation steps for this law since they are illuminating. All we have to remember is that:
extend f w

= w # method
    this # f
    this
Using that, we can mechanically derive both sides of the equation:
-- Left-hand side:
h (extend (\w' -> g (extend f w')) w)
= h (w # method
    this # \w' -> g (extend f w')
    this )
= w # method
    this # \w' -> g (extend f w')
    this # h
= w # method
    g (extend f this)
    this # h
= w # method
    g (this # method
        this # f
        this )
    this # h
= w # method
    this # method
        this # f
        this # g
    this # h

-- Right-hand side
h (extend g (extend f w))
= h (extend f w # method
    this # g
    this )
= extend f w # method
    this # g
    this # h
= (w # method
    this   # f
    this ) # method
        this # g
        this # h
= w # method
    this # f
    this # method
        this # g
        this # h
Placing them side by side gives:
w # method         = w # method
    this # method        this # f
        this # f         this # method
        this # g             this # g
    this # h                 this # h
... and like do notation, we can flatten both forms to a single normal form:
= w # method
    this # f
    this # g
    this # h
The notation suggests quite naturally that if you call a method on this, it's equivalent to inlining the method directly. Conversely, you can arbitrarily factor out any group of steps into their own method.


Comonad transformers


do notation has the nice property that it promotes the correct intuition for the monad transformer laws, where lift distributes over the do block:
lift $ do x <- m  = do x <- lift m
          f x          lift (f x)

lift (return x) = return x
method notation also promotes the correct intuition for comonad transformers where lower similarly distributes over method blocks:
w # lower # method = w # method
    this # f           this # lower # f
    this # g           this # lower # g

w # lower # extract = w # extract
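For reference, lower is the method of the ComonadTrans class (Control.Comonad.Trans.Class in the comonad family of packages):
class ComonadTrans t where
    lower :: Comonad w => t w a -> w a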

Conclusion


Object-oriented programming is the long-lost comonadic programming that Haskell programmers have been searching for. Comonads are "object-oriented programming done right".

Monads and do notation transformed the face of Haskell by turning it into the finest imperative programming language. I believe that comonads and method notation may similarly transform Haskell into the finest object-oriented language as well.