Monday, December 13, 2021

Funding isn't the problem with open source

funding

I’m writing this partially in response to the recent log4j vulnerability, where I see several people suggest that funding open source projects could have prevented this security meltdown from happening. In this post I hope to convince you that the biggest problem with open source projects isn’t the lack of funding.

Don’t get me wrong: I do believe that open source contributors should be paid a comfortable wage so that they can contribute full-time to the project. However, there is a greater disease that afflicts open source projects, and funding them in the wrong way will only worsen the affliction.

The true problem with open source projects is catering to the needs of large and exploitative tech companies. Open source projects will continue to compromise on quality, ethics, and fair working conditions so long as they view adoption by these companies as the highest measure of success.

The log4j vulnerability

I don’t mean to dunk on log4j or its maintainers, but the log4j vulnerability perfectly illustrates the problem with putting the needs of big business first.

The maintainers of the log4j project knew that one of the lesser-known features was potentially problematic (although perhaps they underestimated the impact). However, they did not remove the feature out of concern for breaking backwards compatibility. This is covered in more detail here:

As the above post notes, if large tech companies had funded the log4j project, that would have only increased the pressure to not break backwards compatibility. Large companies are notoriously loath to migrate their codebases to accommodate breaking changes, to the point that they will fall significantly behind on upgrades or vendor an open source project. These companies consistently place their priorities ahead of the needs of the wider open source ecosystem.

Exploitation

The log4j incident is a symptom of a larger problem: many large and publicly-traded companies are exploitative and abusive, and open source projects that simp for these large companies aren’t doing themselves any favors.

Not all companies are bad, but we can all tell when a given company has lost all sense of ethics or social responsibility when they do things like:

  • engaging in anti-competitive business practices

  • exposing sensitive user data through willful neglect

  • busting unions unscrupulously

  • doing business with authoritarian regimes

  • not even attempting to morally justify their actions

    “The shareholders made us do it”

Is having an ethically dubious company’s logo on your project’s page really something to be proud of? Or is it actually a red flag? Think about it:

Do you believe that a company that asks their employees to cut corners won’t ask open source projects they sponsor to hack around problems?

Do you believe that a company that colludes with other employers to depress wages will agree to fair working conditions for the open source projects they depend on?

Do you believe that a company that has compromised on its own morals won’t pressure its dependencies to do the same?

Free software vs open source

I’m not exaggerating when I say that businesses encourage open source developers to compromise on their morals, because this has already happened.

The most prominent example is that the predecessor to the open source movement was the free software movement, which was not just about making source code freely available, but also securing certain freedoms for end users, such as the right to inspect, modify and run the software their lives depend on.

The free software movement is fundamentally a moral movement, and whether or not you agree with their goals, their stated purpose is grounded in ethical arguments, not a business justification. Richard Stallman discusses this in Why Open Source Misses the Point of Free Software:

For the free software movement, free software is an ethical imperative, essential respect for the users’ freedom. By contrast, the philosophy of open source considers issues in terms of how to make software “better”—in a practical sense only. It says that nonfree software is an inferior solution to the practical problem at hand.

In contrast, the open source movement originated when businesses pressured developers to compromise on their ethical principles (especially copyleft licenses) in favor of being as easy to commercialize as possible (especially with permissive licenses). The same essay notes:

That is, however, what the leaders of open source decided to do. They figured that by keeping quiet about ethics and freedom, and talking only about the immediate practical benefits of certain free software, they might be able to “sell” the software more effectively to certain users, especially business.

… and they were successful, if you define success as wildly enriching tech companies.

Solutions

I’m not an expert on fixing this problem, but I also don’t believe that open source developers need to accept exploitative working conditions. I’ll briefly mention some alternatives that people have shared with me when I’ve brought up this subject in the past.

First off, funding is still important (despite this post’s title) and anybody interested in doing open source full-time should read:

… and if you do accept sponsorships, try to steer clear of accepting funding from companies with a dubious record. Even if they never explicitly threaten to withdraw their sponsorship, the implicit risk will always loom over you and influence your actions.

If you form a business around your open source software you should prefer more ethical and sustainable business models. For example, consider staying away from venture capital and instead see if something like a co-op might be more appropriate, which is described in more detail here:

This is more common than you think and you can find a list of open source projects backed by co-ops here:

However, open source developers should first reflect upon what success looks like for their project before pursuing funding. If your measure of success is “large companies use my project or fund my project”, you still run the risk of being taken advantage of by those same companies. If your goal is to be used then, well, you’ll be used.

More generally, I believe that open source development should return to its roots as a free software movement guided by moral principles. Doing so would help the open source community set better boundaries, which would in turn improve software quality, funding, and working conditions. Without a moral center to give developers a spine, they’ll continue to race to the bottom to please corporate interests.

Thursday, October 21, 2021

Co-Applicative programming style

coapplicative

This post showcases an upcoming addition to the contravariant package that permits programming in a “co-Applicative” (Divisible) style that greatly resembles Applicative style.

This post assumes that you are already familiar with programming in an Applicative style, but if you don’t know what that is then I recommend reading:

Example

The easiest way to motivate this is through a concrete example:

{-# LANGUAGE NamedFieldPuns #-}

import Data.Functor.Contravariant (Predicate(..), (>$<))
import Data.Functor.Contravariant.Divisible (Divisible, divided)

nonNegative :: Predicate Double
nonNegative = Predicate (0 <=)

data Point = Point { x :: Double, y :: Double, z :: Double }

nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative
  where
    adapt Point{ x, y, z } = (x, (y, z))

-- | This operator will be available in the next `contravariant` release
(>*<) :: Divisible f => f a -> f b -> f (a, b)
(>*<) = divided

infixr 5 >*<

This code takes a nonNegative Predicate on Doubles that returns True if the double is non-negative, and then uses co-Applicative (Divisible) style to create a nonNegativeOctant Predicate on Points that returns True if all three coordinates of a Point are non-negative.

The key part to zoom in on is the nonNegativeOctant Predicate, whose implementation superficially resembles the Applicative style that we know and love:

nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative

The difference is that instead of the <$> and <*> operators we use >$< and >*<, which are their “evil twin” dual operators1. For example, you can probably see the resemblance to the following code that uses Applicative style:

readDouble :: IO Double
readDouble = readLn

readPoint :: IO Point
readPoint = Point <$> readDouble <*> readDouble <*> readDouble

Types

I’ll walk through the types involved to help explain how this style works.

First, we will take this expression:

nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative

… and explicitly parenthesize the expression instead of relying on operator precedence and associativity:

nonNegativeOctant = adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))

So the smallest sub-expression is this one:

nonNegative >*< nonNegative

… and given that the type of nonNegative is:

nonNegative :: Predicate Double

… and the type of the (>*<) operator is:

(>*<) :: Divisible f => f a -> f b -> f (a, b)

… then we can specialize the f in that type to Predicate (since Predicate implements the Divisible class):

(>*<) :: Predicate a -> Predicate b -> Predicate (a, b)

… and further specialize a and b to Double:

(>*<) :: Predicate Double -> Predicate Double -> Predicate (Double, Double)

… and from that we can conclude that the type of our subexpression is:

nonNegative >*< nonNegative
    :: Predicate (Double, Double)

In other words, nonNegative >*< nonNegative is a Predicate whose input is a pair of Doubles.
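We can confirm this with a small standalone sketch. Here the (>*<) operator is specialized to Predicate by hand so that the sketch only depends on base (generically it is divided from the contravariant package):

```haskell
import Data.Functor.Contravariant (Predicate(..))

-- Hand-specialized to Predicate; generically this is `divided` from
-- the contravariant package
(>*<) :: Predicate a -> Predicate b -> Predicate (a, b)
Predicate f >*< Predicate g = Predicate (\(a, b) -> f a && g b)

infixr 5 >*<

nonNegative :: Predicate Double
nonNegative = Predicate (0 <=)

main :: IO ()
main = do
    print (getPredicate (nonNegative >*< nonNegative) (1, 2))   -- True
    print (getPredicate (nonNegative >*< nonNegative) (1, -2))  -- False
```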

We can then repeat the process to infer the type of this larger subexpression:

nonNegative >*< (nonNegative >*< nonNegative)
    :: Predicate (Double, (Double, Double))

In other words, now the input is a nested tuple of three Doubles.

However, we want to work with Points rather than nested tuples, so we pre-process the input using >$<:

adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))
  where
    adapt :: Point -> (Double, (Double, Double))
    adapt Point{ x, y, z } = (x, (y, z))

… and this works because the type of >$< is:

(>$<) :: Contravariant f => (a -> b) -> f b -> f a

… and if we specialize f to Predicate, we get:

(>$<) :: (a -> b) -> Predicate b -> Predicate a

… and we can further specialize a and b to:

(>$<)
    :: (Point -> (Double, (Double, Double)))
    -> Predicate (Double, (Double, Double))
    -> Predicate Point

… which implies that our final type is:

nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))
  where
    adapt Point{ x, y, z } = (x, (y, z))
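Putting it all together, here is a complete, runnable version of the example, with both operators specialized to Predicate so that it only needs base (generically, >$< is contramap and >*< is divided):

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Data.Functor.Contravariant (Predicate(..), (>$<))

-- Hand-specialized to Predicate; generically this is `divided` from
-- the contravariant package
(>*<) :: Predicate a -> Predicate b -> Predicate (a, b)
Predicate f >*< Predicate g = Predicate (\(a, b) -> f a && g b)

infixr 5 >*<

nonNegative :: Predicate Double
nonNegative = Predicate (0 <=)

data Point = Point { x :: Double, y :: Double, z :: Double }

nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))
  where
    adapt Point{ x, y, z } = (x, (y, z))

main :: IO ()
main = do
    print (getPredicate nonNegativeOctant (Point 1 2 3))     -- True
    print (getPredicate nonNegativeOctant (Point 1 (-2) 3))  -- False
```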

Duals

We can better understand the relationship between the two sets of operators by studying their types:

-- | These two operators are dual to one another:
(<$>) :: Functor       f => (a -> b) -> f a -> f b
(>$<) :: Contravariant f => (a -> b) -> f b -> f a

-- | These two operators are similar in spirit, but they are not really dual:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(>*<) :: Divisible   f => f a        -> f b -> f (a, b)

Okay, so (>*<) is not exactly the dual operator of (<*>). (>*<) is actually dual to liftA2 (,)2:

(>*<)      :: Divisible   f => f a -> f b -> f (a, b)
liftA2 (,) :: Applicative f => f a -> f b -> f (a, b)

In fact, if we were to hypothetically redefine (<*>) to be liftA2 (,) then we could write Applicative code that is even more symmetric to the Divisible code (albeit less ergonomic):

import Control.Applicative (liftA2)
import Prelude hiding ((<*>))

(<*>) = liftA2 (,)

infixr 5 <*>

readDouble :: IO Double
readDouble = readLn

readPoint :: IO Point
readPoint = adapt <$> readDouble <*> readDouble <*> readDouble
  where
    adapt (x, (y, z)) = Point{ x, y, z }

-- Compare to:
nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative
  where
    adapt Point{ x, y, z } = (x, (y, z))

It would be nice if we could create a (>*<) operator that was dual to the real (<*>) operator, but I could not figure out a good way to do this.

If you didn’t follow all of that, the main thing you should take away from this going into the next section is:

  • the Contravariant class is the dual of the Functor class
  • the Divisible class is the dual of the Applicative class
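To make the first point concrete: contramap (the function that >$< abbreviates) pre-processes the input of a Predicate, the mirror image of how fmap post-processes the output of a Functor. A small base-only sketch (the isShort name is made up for illustration):

```haskell
import Data.Functor.Contravariant (Predicate(..), contramap)

-- `contramap length` turns a Predicate on Int into a Predicate on
-- String by pre-processing the input, the mirror image of fmap
isShort :: Predicate String
isShort = contramap length (Predicate (<= 5))

main :: IO ()
main = do
    print (getPredicate isShort "haski")    -- True  (length 5)
    print (getPredicate isShort "haskell")  -- False (length 7)
```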

Syntactic sugar

GHC supports the ApplicativeDo extension, which lets you use do notation as syntactic sugar for Applicative operators. For example, we could have written our readPoint function like this:

{-# LANGUAGE ApplicativeDo #-}

readPoint :: IO Point
readPoint = do
    x <- readDouble
    y <- readDouble
    z <- readDouble
    return Point{ x, y, z }

… which behaves in the exact same way. Actually, we didn’t even need the ApplicativeDo extension because IO has a Monad instance and anything that has a Monad instance supports do notation without any extensions.

However, the ApplicativeDo language extension does change how the do notation is desugared. Without the extension the above readPoint function would desugar to:

readPoint =
    readDouble >>= \x ->
    readDouble >>= \y ->
    readDouble >>= \z ->
    return Point{ x, y, z }

… but with the ApplicativeDo extension the function instead desugars to only use Applicative operations instead of Monad operations:

-- I don't know the exact desugaring logic, but I imagine it's similar to this:
readPoint = adapt <$> readDouble <*> readDouble <*> readDouble
  where
    adapt x y z = Point{ x, y, z }

So could there be such a thing as “DivisibleDo” which would introduce syntactic sugar for Divisible operations?

I think there could be such an extension, and there are several ways you could design the user experience.

One approach would be to permit code like this:

{-# LANGUAGE DivisibleFrom #-}

nonNegativeOctant :: Predicate Point
nonNegativeOctant =
    from Point{ x, y, z }
        x -> nonNegative
        y -> nonNegative
        z -> nonNegative

… which would desugar to the original code that we wrote:

nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative
  where
    adapt Point{ x, y, z } = (x, (y, z))

Another approach could be to make the syntax look exactly like do notation, except that information flows in reverse:

{-# LANGUAGE DivisibleDo #-}

nonNegativeOctant :: Predicate Point
nonNegativeOctant = do
    x <- nonNegative
    y <- nonNegative
    z <- nonNegative
    return Point{ x, y, z } -- `return` here would actually be a special keyword

I assume that most people will prefer the from notation, so I’ll stick to that for now.

If we were to implement the former DivisibleFrom notation then the Divisible laws stated using from notation would become:

-- Left identity
  from x
      x -> m
      x -> conquer

= m


-- Right identity
  from x
      x -> conquer
      x -> m

= m

-- Associativity
  from (x, y, z)
      (x, y) -> from (x, y)
                    x -> m
                    y -> n
      z -> o

= from (x, y, z)
      x -> m
      (y, z) -> from (y, z)
                    y -> n
                    z -> o

= from (x, y, z)
      x -> m
      y -> n
      z -> o

This explanation of how DivisibleFrom would work is really hand-wavy, but if people were genuinely interested in such a language feature I might take a stab at making the semantics of DivisibleFrom sufficiently precise.

History

The original motivation for the (>*<) operator and Divisible style was to support compositional RecordEncoders for the dhall package.

Dhall’s Haskell API defines a RecordEncoder type which specifies how to convert a Haskell record to a Dhall syntax tree, and we wanted to be able to use the Divisible operators to combine simpler RecordEncoders into larger RecordEncoders, like this:

data Project = Project
    { name        :: Text
    , description :: Text
    , stars       :: Natural
    }

injectProject :: Encoder Project
injectProject =
  recordEncoder
    ( adapt >$< encodeFieldWith "name"        inject
            >*< encodeFieldWith "description" inject
            >*< encodeFieldWith "stars"       inject
    )
  where
    adapt Project{..} = (name, (description, stars))

The above example illustrates how one can assemble three smaller RecordEncoders (each of the encodeFieldWith functions) into a RecordEncoder for the Project record by using the Divisible operators.

If we had a DivisibleFrom notation, then we could have instead written:

injectProject =
    recordEncoder from Project{..}
        name        -> encodeFieldWith "name"        inject
        description -> encodeFieldWith "description" inject
        stars       -> encodeFieldWith "stars"       inject

If you’d like to view the original discussion that led to this idea you can check out the original pull request.

Conclusion

I upstreamed this (>*<) operator into the contravariant package, which means that you’ll be able to use the trick outlined in this post after the next contravariant release.

Until then, you can define your own (>*<) operator inline within your own project, which is what dhall did while waiting for the operator to be upstreamed.


  1. Alright, they’re not categorically dual in a rigorous sense, but I couldn’t come up with a better term to describe their relationship to the original operators.↩︎

  2. I feel like liftA2 (,) should have already been added to Control.Applicative by now since I believe it’s a pretty fundamental operation from a theoretical standpoint.↩︎

Thursday, October 14, 2021

Advice for aspiring bloggers

writing2

I’m writing this post to summarize blogging advice that I’ve shared with multiple people interested in blogging. My advice (and this post) won’t be very coherent, but I hope people will still find this useful.

Also, this advice is targeted towards blogging and not necessarily writing in general. For example, I have 10 years of experience blogging, but less experience with other forms of writing, such as writing books or academic publications.

Motivation

Motivation is everything when it comes to blogging. I believe you should focus on motivation before working on improving anything else about your writing. In particular, if you always force yourself to set aside time to write then (in my opinion) you’re needlessly making things hard on yourself.

Motivation can be found or cultivated. Many new writers start off by finding motivation; inspiration strikes and they feel compelled to share what they learned with others. However, long-term consistent writers learn how to cultivate motivation so that their writing process doesn’t become “feast or famine”.

There is no one-size-fits-all approach to cultivating motivation, because not everybody shares the same motivation for writing. However, the first step is always reflecting upon what motivates you to write, which could be:

  • sharing exciting new things you learn
  • making money
  • evangelizing a new technology or innovation
  • launching or switching to a new career
  • changing the way people think
  • improving your own understanding by teaching others
  • settling a debate or score
  • sorting out your own thoughts

The above list is not comprehensive, and people can blog for more than one reason. For example, I find that I’m most motivated to blog when I have just finished teaching someone something new or arguing with someone. When I conclude these conversations I feel highly inspired to write.

Once you clue in to what motivates you, use that knowledge to cultivate your motivation. For example, if teaching people inspires me then I’ll put myself in positions where I have more opportunities to mentor others, such as becoming an engineering manager, volunteering for Google Summer of Code, or mentoring friends earlier in their careers. Similarly, if arguing with people inspires me then I could hang out on social media with an axe to grind (although I don’t do that as much these days for obvious reasons…).

When inspiration strikes

That doesn’t mean that you should never write when you’re not motivated. I still sometimes write when it doesn’t strike my fancy. Why? Because inspiration doesn’t always strike at a convenient time.

For example, sometimes I will get “hot” to write something in the middle of my workday (such as right after a 1-on-1 conversation) and I have to put a pin in it until I have more free time later.

One of the hardest things about writing is that inspiration doesn’t always strike at convenient times. There are a few ways to deal with this, all of which are valid:

  • Write anyway, despite the inconvenience

    Sometimes writing entails reneging on your obligations and writing anyway. This can happen when you just know the idea has to come out one way or another and it won’t necessarily happen on a convenient schedule.

  • Write later

    Some topics will always inspire you every time you revisit them, so even if your excitement wears off it will come back the next time you revisit the subject.

    For example, sometimes I will start to write about something that I’m not excited about at the moment but I remember I was excited about it before. Then as I start to write everything comes flooding back and I recapture my original excitement.

  • Abandon the idea

    Sometimes you just have to completely give up on writing something.

    I’ve thrown away a lot of writing ideas that I was really attached to because I knew I would never have the time. It happens, it’s sad when it happens, but it’s a harsh reality of life.

    Sometimes “abandon the idea” can become “write later” if I happen to revisit the subject years later at a more opportune time, but I generally try to abandon ideas completely, otherwise they will keep distracting me and do more harm than good.

I personally have done all of the above in roughly equal measure. There is no right answer to which approach is correct and I treat it as a judgment call.

Quantity over quality

One common pattern I see is that new bloggers tend to “over-produce” some of their initial blog posts, especially for ideas they are exceptionally attached to. This is not necessarily a bad thing, but I usually advise against it. You don’t want to put all of your eggs in one basket and you should focus on writing more frequent and less ambitious posts rather than a few overly ambitious posts, especially when starting out.

One reason why is that people tend to be poor judges of their own work, in my experience. Not only do you not know when inspiration will strike, but you will also not know when inspiration has truly struck. There will be some times when you think something you produce is your masterpiece, your magnum opus, and other people are like “meh”. There will be other times when you put out something that feels half-baked or like a shitpost and other people will tell you that it changed their life.

That’s not to say that you shouldn’t focus on quality at all. Quite the opposite: the quality of your writing will improve more quickly if you write more often instead of polishing a few posts to death. You’ll get more frequent feedback from a wider audience if you keep putting your work out there regularly.

Great writing comes from building empathy for the reader, and you can’t do that if you’re not regularly interacting with your audience. The more they read your work and provide feedback the better your intuition will get for what your audience needs to hear and how your writing will resonate with them. As time goes on your favorite posts will become more likely to succeed, but there will always remain a substantial element of luck to the process.

Constraints

Writing is hard, even for experienced writers like me, because writing is so underconstrained.

Programming is so much easier than writing for me because I get:

  • Tooling support

    … such as an IDE, syntax highlighting or type-checker

  • Fast feedback loop

    For many application domains I can run my code to see if it works or not

  • Clearer demonstration of value

    I can see firsthand that my program actually does what I created it to do

Writing, on the other hand, is orders of magnitude more freeform and nebulous than code. There are so many ways to say or present the exact same idea, because you can vary things like:

  • Choice of words

  • Conceptual approach

  • Sentence / paragraph structure

  • Scope

  • Diagrams / figures

  • Examples

    Oh, don’t get me started on examples. I can spend hours or even days mulling over which example to use that is just right. A LOT of my posts in my drafts have run aground on the choice of example.

There also isn’t a best way to present an idea. One way of explaining things will resonate with some people better than others.

On top of that, the feedback loop is sloooooow. Soliciting reviews from others can take days. Or you can publish blind and hope that your own editing process and intuition are good enough.

The way I cope is to add artificial constraints to my writing, especially when first learning to write. I came up with a very opinionated way of structuring everything and saying everything so that I could focus more on what I wanted to say instead of how to say it.

The constraints I created for myself touched upon many of the above freeform aspects of writing. Here are some examples:

  • Choice of words

    I would use a very limited vocabulary for common writing tasks. In fact, I still do in some ways: for example, I still use “For example,” when introducing an example, a writing habit that lingers to this day.

  • Sentence / paragraph structure

    The Science of Scientific Writing is an excellent resource for how to improve writing structure in order to aid reader comprehension.

  • Diagrams / figures

    I created ASCII diagrams for all of my technical writing. It was extremely low-tech, but it got the job done.

  • Examples

    I had to have three examples. Not two. Not four. Three is the magic number.

In particular, one book stood out as exceptionally helpful in this regard:

The above book provides several useful rules of thumb for writing that new writers can use as constraints to help better focus their writing. You might notice that this post touches only very lightly on the technical aspects of authoring and editing writing, and that’s because all of my advice would boil down to: “go read that book”.

As time went on and I got more comfortable I began to deviate from these rules I had created for myself and then I could more easily find my own “voice” and writing style. However, having those guardrails in place made a big difference to me early on to keep my writing on track.

Stamina

Sometimes you need to write something over an extended period of time, long after you are motivated to do so. Perhaps this is because you are obligated to do so, such as when writing a blog post for work.

My trick to sustaining interest in posts like these is to always begin each writing session by editing what I’ve written so far. This often puts me back in the same frame of mind that I had when I first wrote the post and gives me the momentum I need to continue writing.

Editing

Do not underestimate the power of editing your writing! Editing can easily transform a mediocre post into a great post.

However, it’s hard to edit a post after you’re done writing it. By that point you’re typically eager to publish and get it off your plate, but you should still take the time to edit what you’ve written. My rule of thumb is to sleep on a post at least once and edit in the morning before I publish, but if I have extra stamina then I keep editing each day until I feel like there’s nothing left to edit.

Conclusion

I’d like to conclude this post by acknowledging the blog that inspired me to start blogging:

That blog got me excited about the intersection of mathematics and programming and I’ve been blogging ever since trying to convey the same sense of wonder I got from reading about that.

Wednesday, October 6, 2021

The "return a command" trick

return-command

This post illustrates a trick that I’ve taught a few times to minimize the “change surface” of a Haskell program. By “change surface” I mean the number of places Haskell code needs to be updated when adding a new feature.

The motivation

I’ll motivate the trick through the following example code for a simple REPL:

import Control.Applicative ((<|>))
import Data.Void (Void)
import Text.Megaparsec (Parsec)

import qualified Data.Char            as Char
import qualified System.IO            as IO
import qualified Text.Megaparsec      as Megaparsec
import qualified Text.Megaparsec.Char as Megaparsec

type Parser = Parsec Void String

data Command = Print String | Save FilePath String

parsePrint :: Parser Command
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return (Print string)

parseSave :: Parser Command
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return (Save file string)

parseCommand :: Parser Command
parseCommand = parsePrint <|> parseSave

main :: IO ()
main = do
    putStr "> "

    IO.hFlush IO.stdout  -- flush so the prompt appears before blocking on input

    eof <- IO.isEOF

    if eof
        then do
            putStrLn ""

        else do
            text <- getLine

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right command -> do
                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

            main

This REPL supports two commands: print and save:

> print Hello, world!
Hello, world!
> save number.txt 42

print echoes back whatever string you supply and save writes the given string to a file.

Now suppose that we wanted to add a new load command to read and display the contents of a file. We would need to change our code in four places.

First, we would need to change the Command type to add a new Load constructor:

data Command = Print String | Save FilePath String | Load FilePath

Second, we would need to add a new parser to parse the load command:

parseLoad :: Parser Command
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return (Load file)

Third, we would need to add this new parser to parseCommand:

parseCommand :: Parser Command
parseCommand = parsePrint <|> parseSave <|> parseLoad

Fourth, we would need to add logic for handling our new Load constructor in our main loop:

                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string
                        
                        Load file -> do
                            string <- readFile file 

                            putStrLn string

I’m not a fan of this sort of program structure because the logic for how to handle each command isn’t all in one place. However, we can make a small change to our program structure that will not only simplify the code but also consolidate the logic for each command.

The trick

We can restructure our code by changing the type of all of our parsers from this:

parsePrint :: Parser Command

parseSave :: Parser Command

parseLoad :: Parser Command

parseCommand :: Parser Command

… to this:

parsePrint :: Parser (IO ())

parseSave :: Parser (IO ())

parseLoad :: Parser (IO ())

parseCommand :: Parser (IO ())

In other words, our parsers now return an actual command (i.e. IO ()) instead of returning a Command data structure that still needs to be interpreted.

This entails the following changes to the implementation of our three command parsers:

{-# LANGUAGE BlockArguments #-}

parsePrint :: Parser (IO ())
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        putStrLn string

parseSave :: Parser (IO ())
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        writeFile file string

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file
        
        putStrLn string

Now that each parser returns an IO () action, we no longer need the Command type, so we can delete the following datatype definition:

data Command = Print String | Save FilePath String | Load FilePath

Finally, our main loop gets much simpler, because we no longer need to specify how to handle each command. That means that instead of handling each Command constructor:

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right command -> do
                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

                        Load file -> do
                            string <- readFile file

                            putStrLn string

… we just run whatever IO () command was parsed, like this:

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right io -> do
                    io

Now we only need to make two changes to the code any time we add a new command. For example, all of the logic for the load command is right here:

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

… and here:

parseCommand :: Parser (IO ())
parseCommand = parsePrint <|> parseSave <|> parseLoad
                                         -- ↑

… and that’s it. We no longer need to change our REPL loop or add a new constructor to our Command datatype (because there is no Command datatype any longer).

What’s neat about this trick is that the IO () command we return has direct access to variables extracted by the corresponding Parser. For example:

parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    -- The `file` variable that we parse here …
    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        -- … can be referenced by the corresponding `IO` action here
        string <- readFile file

        putStrLn string

There’s no need to pack our variables into a data structure and then unpack them again later on when we need to use them. This technique promotes tight and “vertically integrated” code where all of the logic is in one place.

Final encodings

This trick is a special case of a more general technique known as a “final encoding” and the following post does a good job of explaining what “initial encoding” and “final encoding” mean:

To briefly explain initial and final encodings in my own words:

  • An “initial encoding” is one where you preserve as much information as possible in a data structure

    This keeps your options as open as possible since you haven’t specified what to do with the data yet

  • A “final encoding” is one where you encode information by how you intend to use it

    This tends to simplify your program if you know in advance how the information will be used

The initial example from this post was an initial encoding because each Parser returned a Command type which preserved as much information as possible without specifying what to do with it. The final example from this post was a final encoding because we encoded our commands by directly specifying what we planned to do with them.
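To make the distinction concrete, here is a minimal sketch (a hypothetical shape example, not code from this post) showing the same information in both encodings:

```haskell
-- Initial encoding: preserve the information in a data structure,
-- deferring the decision of what to do with it
data Shape = Circle Double | Square Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s

-- Final encoding: if computing the area is the only way we will
-- ever use a shape, we can represent a shape *as* its area
type Area = Double

circle :: Double -> Area
circle r = pi * r * r

square :: Double -> Area
square s = s * s
```

The trade-off is the usual one: the initial encoding makes it easy to add new operations over shapes later (e.g. a perimeter function), while the final encoding makes it easy to add new shapes without touching any existing code, just like adding a new REPL command above only required adding a new parser.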

Conclusion

This trick is not limited to returning IO actions from Parsers. For example, the following post illustrates a similar trick in the context of implementing configuration “wizards”:

… where a wizard has type IO (IO ()) (a command that returns another command).
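As a rough sketch of that idea (hypothetical names, not code from the linked post): the outer IO action gathers input interactively now, and the inner IO () it returns can be run later, possibly many times or after all questions have been asked:

```haskell
-- Run a wizard: first the interactive phase, then the gathered plan
runWizard :: IO (IO ()) -> IO ()
runWizard wizard = do
    action <- wizard  -- interactive phase: ask the questions
    action            -- execution phase: run the resulting command

-- A wizard that asks for a name now and returns a greeting to
-- print later
greetingWizard :: IO (IO ())
greetingWizard = do
    putStr "What is your name? "
    name <- getLine
    return (putStrLn ("Hello, " ++ name ++ "!"))
```

Just as with the parsers above, the inner action closes over the variables gathered by the outer one (here, `name`), so there is no intermediate data structure to pack and unpack.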

More generally, you will naturally rediscover this trick if you stick to the principle of “keep each component’s logic all in one place”. In the above example the “components” were REPL commands, but this consolidation principle is useful for any sort of plugin-like system.

Appendix

Here is the complete code for the final version of the running example if you would like to test it out yourself:

{-# LANGUAGE BlockArguments #-}

import Control.Applicative ((<|>))
import Data.Void (Void)
import Text.Megaparsec (Parsec)

import qualified Data.Char            as Char
import qualified System.IO            as IO
import qualified Text.Megaparsec      as Megaparsec
import qualified Text.Megaparsec.Char as Megaparsec

type Parser = Parsec Void String

parsePrint :: Parser (IO ())
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        putStrLn string

parseSave :: Parser (IO ())
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        writeFile file string

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

parseCommand :: Parser (IO ())
parseCommand = parsePrint <|> parseSave <|> parseLoad

main :: IO ()
main = do
    putStr "> "

    eof <- IO.isEOF

    if eof
        then do
            putStrLn ""

        else do
            text <- getLine

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right io -> do
                    io

            main