Thursday, October 14, 2021

Advice for aspiring bloggers

I’m writing this post to summarize blogging advice that I’ve shared with multiple people interested in blogging. My advice (and this post) won’t be very coherent, but I hope people will still find this useful.

Also, this advice is targeted towards blogging and not necessarily writing in general. For example, I have 10 years of experience blogging, but less experience with other forms of writing, such as writing books or academic publications.

Motivation

Motivation is everything when it comes to blogging. I believe you should focus on motivation before working on improving anything else about your writing. In particular, if you always force yourself to set aside time to write then (in my opinion) you’re needlessly making things hard on yourself.

Motivation can be found or cultivated. Many new writers start off by finding motivation; inspiration strikes and they feel compelled to share what they learned with others. However, long-term consistent writers learn how to cultivate motivation so that their writing process doesn’t become “feast or famine”.

There is no one-size-fits-all approach to cultivating motivation, because not everybody shares the same motivation for writing. However, the first step is always reflecting upon what motivates you to write, which could be:

  • sharing exciting new things you learn
  • making money
  • evangelizing a new technology or innovation
  • launching or switching to a new career
  • changing the way people think
  • improving your own understanding by teaching others
  • settling a debate or score
  • sorting out your own thoughts

The above list is not comprehensive, and people can blog for more than one reason. For example, I find that I’m most motivated to blog when I have just finished teaching someone something new or arguing with someone. When I conclude these conversations I feel highly inspired to write.

Once you clue in to what motivates you, use that knowledge to cultivate your motivation. For example, teaching people inspires me, so I put myself in positions where I have more opportunities to mentor others, such as becoming an engineering manager, volunteering for Google Summer of Code, or mentoring friends earlier in their careers. Similarly, arguing with people inspires me, so I could hang out on social media with an axe to grind (although I don’t do that as much these days, for obvious reasons…).

When inspiration strikes

That doesn’t mean that you should never write when you’re unmotivated. I still sometimes write when I don’t feel like it. Why? Because inspiration doesn’t always strike at a convenient time.

For example, sometimes I will get “hot” to write something in the middle of my workday (such as right after a 1-on-1 conversation) and I have to put a pin in it until I have more free time later.

One of the hardest things about writing is this mismatch between when inspiration strikes and when you have time to write. There are a few ways to deal with it, all of which are valid:

  • Write anyway, despite the inconvenience

    Sometimes this means reneging on other obligations so that you can write anyway. This can happen when you just know the idea has to come out one way or another, and it won’t necessarily happen on a convenient schedule.

  • Write later

    Some topics will always inspire you every time you revisit them, so even if your excitement wears off it will come back the next time you revisit the subject.

    For example, sometimes I will start to write about something that I’m not excited about at the moment but I remember I was excited about it before. Then as I start to write everything comes flooding back and I recapture my original excitement.

  • Abandon the idea

    Sometimes you just have to completely give up on writing something.

    I’ve thrown away a lot of writing ideas that I was really attached to because I knew I would never have the time. It happens, and it’s sad when it does, but it’s a harsh reality of life.

    Sometimes “abandon the idea” can become “write later” if I happen to revisit the subject years later at a more opportune time, but I generally try to abandon ideas completely, otherwise they will keep distracting me and do more harm than good.

I personally have done all of the above in roughly equal measure. There is no universally correct approach; I treat it as a judgment call.

Quantity over quality

One common pattern I see is that new bloggers tend to “over-produce” some of their initial blog posts, especially for ideas they are exceptionally attached to. This is not necessarily a bad thing, but I usually advise against it. You don’t want to put all of your eggs in one basket: especially when starting out, focus on writing more frequent, less ambitious posts rather than a few overly ambitious ones.

One reason why is that people tend to be poor judges of their own work, in my experience. Not only do you not know when inspiration will strike, but you will also not know when inspiration has truly struck. There will be some times when you think something you produce is your masterpiece, your magnum opus, and other people are like “meh”. There will be other times when you put out something that feels half-baked or like a shitpost and other people will tell you that it changed their life.

That’s not to say that you shouldn’t focus on quality at all. Quite the opposite: the quality of your writing will improve more quickly if you write more often instead of polishing a few posts to death. You’ll get more frequent feedback from a wider audience if you keep putting your work out there regularly.

Great writing requires building empathy for the reader, and you can’t do that if you’re not regularly interacting with your audience. The more they read your work and provide feedback, the better your intuition will get for what your audience needs to hear and how your writing will resonate with them. As time goes on, your favorite posts will become more likely to succeed, but there will always remain a substantial element of luck to the process.

Constraints

Writing is hard, even for experienced writers like me, because writing is so underconstrained.

For me, programming is so much easier than writing because I get:

  • Tooling support

    … such as an IDE, syntax highlighting, or a type-checker

  • Fast feedback loop

    For many application domains I can run my code to see if it works or not

  • Clearer demonstration of value

    I can see firsthand that my program actually does what I created it to do

Writing, on the other hand, is orders of magnitude more freeform and nebulous than code. There are so many ways to say or present the exact same idea, because you can vary things like:

  • Choice of words

  • Conceptual approach

  • Sentence / paragraph structure

  • Scope

  • Diagrams / figures

  • Examples

    Oh, don’t get me started on examples. I can spend hours or even days mulling over which example is just right. A lot of the posts in my drafts have run aground on the choice of example.

There also isn’t a single best way to present an idea: any given explanation will resonate with some people more than with others.

On top of that, the feedback loop is sloooooow. Soliciting reviews from others can take days. Or you can publish blind and hope that your own editing process and intuition are good enough.

The way I cope is to add artificial constraints to my writing, which helped especially when I was first learning to write. I came up with a very opinionated way of structuring everything and saying everything so that I could focus more on what I wanted to say instead of how to say it.

The constraints I created for myself touched upon many of the above freeform aspects of writing. Here are some examples:

  • Choice of words

    I would use a very limited vocabulary for common writing tasks, and in some ways I still do. For example, I still use “For example,” when introducing an example, a habit which lingers to this day.

  • Sentence / paragraph structure

    The Science of Scientific Writing is an excellent resource for how to improve writing structure in order to aid reader comprehension.

  • Diagrams / figures

    I created ASCII diagrams for all of my technical writing. It was extremely low-tech, but it got the job done.

  • Examples

    I had to have three examples. Not two. Not four. Three is the magic number.

In particular, one book stood out as exceptionally helpful in this regard:

The above book provides several useful rules of thumb for writing that new writers can use as constraints to help better focus their writing. You might notice that this post touches only very lightly on the technical aspects of authoring and editing writing, and that’s because all of my advice would boil down to: “go read that book”.

As time went on and I got more comfortable I began to deviate from these rules I had created for myself and then I could more easily find my own “voice” and writing style. However, having those guardrails in place made a big difference to me early on to keep my writing on track.

Stamina

Sometimes you need to write something over an extended period of time, long after the motivation has worn off. Perhaps this is because you are obligated to do so, such as writing a blog post for work.

My trick to sustaining interest in posts like these is to always begin each writing session by editing what I’ve written so far. This often puts me back in the same frame of mind that I had when I first wrote the post and gives me the momentum I need to continue writing.

Editing

Do not underestimate the power of editing your writing! Editing can easily transform a mediocre post into a great post.

However, it’s hard to edit a post after you’re done writing it. By that point you’re typically eager to publish and get it off your plate, but you should still take the time to edit what you’ve written. My rule of thumb is to sleep on a post at least once and edit it in the morning before publishing, and if I have extra stamina I keep editing each day until I feel like there’s nothing left to edit.

Conclusion

I’d like to conclude this post by acknowledging the blog that inspired me to start blogging:

That blog got me excited about the intersection of mathematics and programming, and I’ve been blogging ever since, trying to convey the same sense of wonder that I got from reading it.

Wednesday, October 6, 2021

The "return a command" trick

This post illustrates a trick that I’ve taught a few times to minimize the “change surface” of a Haskell program. By “change surface” I mean the number of places Haskell code needs to be updated when adding a new feature.

The motivation

I’ll motivate the trick through the following example code for a simple REPL:

import Control.Applicative ((<|>))
import Data.Void (Void)
import Text.Megaparsec (Parsec)

import qualified Data.Char            as Char
import qualified System.IO            as IO
import qualified Text.Megaparsec      as Megaparsec
import qualified Text.Megaparsec.Char as Megaparsec

type Parser = Parsec Void String

data Command = Print String | Save FilePath String

parsePrint :: Parser Command
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return (Print string)

parseSave :: Parser Command
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return (Save file string)

parseCommand :: Parser Command
parseCommand = parsePrint <|> parseSave

main :: IO ()
main = do
    putStr "> "

    IO.hFlush IO.stdout  -- flush so the prompt appears before we block on input


    eof <- IO.isEOF

    if eof
        then do
            putStrLn ""

        else do
            text <- getLine

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right command -> do
                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

            main

This REPL supports two commands, print and save:

> print Hello, world!
Hello, world!
> save number.txt 42

print echoes back whatever string you supply and save writes the given string to a file.

Now suppose that we wanted to add a new load command to read and display the contents of a file. We would need to change our code in four places.

First, we would need to change the Command type to add a new Load constructor:

data Command = Print String | Save FilePath String | Load FilePath

Second, we would need to add a new parser to parse the load command:

parseLoad :: Parser Command
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return (Load file)

Third, we would need to add this new parser to parseCommand:

parseCommand :: Parser Command
parseCommand = parsePrint <|> parseSave <|> parseLoad

Fourth, we would need to add logic for handling our new Load constructor in our main loop:

                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string
                        
                        Load file -> do
                            string <- readFile file 

                            putStrLn string

I’m not a fan of this sort of program structure because the logic for how to handle each command isn’t all in one place. However, we can make a small change to our program structure that will not only simplify the code but also consolidate the logic for each command.

The trick

We can restructure our code by changing the type of all of our parsers from this:

parsePrint :: Parser Command

parseSave :: Parser Command

parseLoad :: Parser Command

parseCommand :: Parser Command

… to this:

parsePrint :: Parser (IO ())

parseSave :: Parser (IO ())

parseLoad :: Parser (IO ())

parseCommand :: Parser (IO ())

In other words, our parsers now return an actual command (i.e. IO ()) instead of returning a Command data structure that still needs to be interpreted.

This entails the following changes to the implementation of our three command parsers:

{-# LANGUAGE BlockArguments #-}

parsePrint :: Parser (IO ())
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        putStrLn string

parseSave :: Parser (IO ())
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        writeFile file string

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file
        
        putStrLn string

Now that each parser returns an IO () action, we no longer need the Command type, so we can delete the following datatype definition:

data Command = Print String | Save FilePath String | Load FilePath

Finally, our main loop gets much simpler, because we no longer need to specify how to handle each command. That means that instead of handling each Command constructor:

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right command -> do
                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

                        Load file -> do
                            string <- readFile file

                            putStrLn string

… we just run whatever IO () command was parsed, like this:

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right io -> do
                    io

Now we only need to make two changes to the code any time we add a new command. For example, all of the logic for the load command is right here:

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

… and here:

parseCommand :: Parser (IO ())
parseCommand = parsePrint <|> parseSave <|> parseLoad
                                         -- ↑

… and that’s it. We no longer need to change our REPL loop or add a new constructor to our Command datatype (because there is no Command datatype any longer).

What’s neat about this trick is that the IO () command we return has direct access to variables extracted by the corresponding Parser. For example:

parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    -- The `file` variable that we parse here …
    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        -- … can be referenced by the corresponding `IO` action here
        string <- readFile file

        putStrLn string

There’s no need to pack our variables into a data structure and then unpack them again later on when we need to use them. This technique promotes tight and “vertically integrated” code where all of the logic is in one place.

Final encodings

This trick is a special case of a more general trick known as a “final encoding” and the following post does a good job of explaining what “initial encoding” and “final encoding” mean:

To briefly explain initial and final encodings in my own words:

  • An “initial encoding” is one where you preserve as much information as possible in a data structure

    This keeps your options as open as possible since you haven’t specified what to do with the data yet

  • A “final encoding” is one where you encode information by how you intend to use it

    This tends to simplify your program if you know in advance how the information will be used

The first example from this post used an initial encoding, because each Parser returned a Command value that preserved as much information as possible without specifying what to do with it. The final example used a final encoding, because we encoded our commands by directly specifying what we planned to do with them.
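To make the distinction concrete, here is a minimal, self-contained sketch of the same trade-off applied to a toy greeting type (the names here are hypothetical and not from the example above):

{-# LANGUAGE Haskell2010 #-}

-- Initial encoding: preserve the information as data,
-- and interpret it later
data Greeting = Hello String | Goodbye String

interpret :: Greeting -> IO ()
interpret (Hello   name) = putStrLn ("Hello, "   ++ name)
interpret (Goodbye name) = putStrLn ("Goodbye, " ++ name)

-- Final encoding: encode each greeting directly as the
-- action we intend to run, so no interpreter is needed
hello :: String -> IO ()
hello name = putStrLn ("Hello, " ++ name)

goodbye :: String -> IO ()
goodbye name = putStrLn ("Goodbye, " ++ name)

The initial encoding keeps your options open (you could later pretty-print or log a Greeting instead of running it), while the final encoding commits to one use up front, which is why it simplifies programs whose use of the data is already known.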

Conclusion

This trick is not limited to returning IO actions from Parsers. For example, the following post illustrates a similar trick in the context of implementing configuration “wizards”:

… where a wizard has type IO (IO ()) (a command that returns another command).
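As a rough sketch of that idea (hypothetical code, not taken from the linked post), the outer IO action gathers input now, and the IO () it returns runs later:

-- A wizard asks its questions up front …
nameWizard :: IO (IO ())
nameWizard = do
    putStrLn "What is your name?"

    name <- getLine

    -- … and returns a deferred command that closes over the
    -- answers, just like the Parser (IO ()) example above
    return (putStrLn ("Hello, " ++ name))

main :: IO ()
main = do
    action <- nameWizard  -- run all of the prompts first

    action                -- … then run the configured command later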

More generally, you will naturally rediscover this trick if you stick to the principle of “keep each component’s logic all in one place”. In the above example the “components” were REPL commands, but this consolidation principle is useful for any sort of plugin-like system.
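For instance, here is a hypothetical sketch of that principle applied to a minimal plugin registry, where each plugin’s name and behavior live side by side in a single list entry:

-- Each plugin pairs its name with the command to run; adding a
-- plugin means adding one entry to this list and nothing else
plugins :: [(String, IO ())]
plugins =
    [ ("version", putStrLn "1.0.0")
    , ("help"   , putStrLn "Commands: version, help")
    ]

-- The dispatcher never needs to change when plugins are added
dispatch :: String -> IO ()
dispatch name = case lookup name plugins of
    Just io -> io
    Nothing -> putStrLn ("Unknown command: " ++ name)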

Appendix

Here is the complete code for the final version of the running example if you would like to test it out yourself:

{-# LANGUAGE BlockArguments #-}

import Control.Applicative ((<|>))
import Data.Void (Void)
import Text.Megaparsec (Parsec)

import qualified Data.Char            as Char
import qualified System.IO            as IO
import qualified Text.Megaparsec      as Megaparsec
import qualified Text.Megaparsec.Char as Megaparsec

type Parser = Parsec Void String

parsePrint :: Parser (IO ())
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        putStrLn string

parseSave :: Parser (IO ())
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        writeFile file string

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

parseCommand :: Parser (IO ())
parseCommand = parsePrint <|> parseSave <|> parseLoad

main :: IO ()
main = do
    putStr "> "

    IO.hFlush IO.stdout  -- flush so the prompt appears before we block on input


    eof <- IO.isEOF

    if eof
        then do
            putStrLn ""

        else do
            text <- getLine

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right io -> do
                    io

            main

Wednesday, September 29, 2021

Fall-from-Grace: A ready-to-fork functional programming language

I’m publishing a repository containing a programming language implementation named Fall-from-Grace (or “Grace” for short). The goal of this language is to improve the quality of domain-specific languages by providing a useful starting point for implementers to fork and build upon.

You can visit the project repository if you want to cut to the chase. The README already provides most of the information that you will need to begin using the language. This announcement post focuses more upon the history and motivation behind this project. Specifically, I imagine some people will want to understand how this work relates to my work on Dhall.

TL;DR: I created a functional language that is a superset of JSON, sort of like Jsonnet but with types and bidirectional type inference.

The motivation

The original motivation for this project was very different from the goal I finally settled upon. My original goal was to build a type checker with type inference for Nix, since that is something a lot of people (myself included) wanted. However, I bit off more than I could chew, because Nix permits all sorts of stuff that is difficult to type-check (like computed record fields or the callPackages idiom in Nixpkgs).

I lost steam on my Nix type-checker (for now), but then I realized that I had still built a fairly sophisticated interpreter implementation along the way that other people might find useful. In particular, this was my first new interpreter project since I created Dhall years ago, and I had learned quite a few lessons from Dhall about interpreter best practices.

“So”, I thought, “why not publish what I had created so far?”

Well, I can think of one very good reason why not: I want to continue to support and improve upon Dhall because people are using Dhall in production and I have neither the time nor the inclination to build yet another ecosystem around a new programming language. But somebody else might!

So I took the project in a totally different direction: publish an instructive implementation of a programming language that others could fork and build upon.

Unlike Dhall, this new language would not be encumbered by the need to support a language standard or multiple independent implementations, so I could go nuts with adding features that I omitted from Dhall. Also, this language would not be published as a package, so that I could keep the implementation clear, opinionated, and maintainable. In other words, this language would be like an “executable tutorial”.

The starting point

I designed the initial feature set of this new language based on feedback I had received from Dhall’s skeptics. The rough language these people had in mind went something like this:

  • The language had to have bidirectional type inference

    The most common criticism I received about Dhall was about Dhall’s limited type inference.

  • The language had to have JSON-compatible syntax

    One of the things that had prevented people from using Dhall was that the migration story was not as smooth as, say, CUE or Jsonnet (both of which are supersets of JSON), because Dhall was not a superset of any existing file format.

    Also, many people indicated that JSON syntax would be easier for beginners to pick up, since they would likely be comfortable with JSON if they had prior experience working with JavaScript or Python.

  • The language had to have JSON-compatible types

    JSON permits all sorts of silliness (like [ 1, [] ]), and people wanted a type system that can cope with that stuff, while still getting most of the benefits of working with types. Basically, they wanted something sort of like TypeScript’s type system.

  • The language had to forbid running arbitrary code

    TypeScript already checks off most of the above points, but these people were looking for alternatives because they didn’t want to permit users to run arbitrary JavaScript. In other words, they didn’t want a language that is “Pac-Man complete”; instead, they wanted something more limited as a clean slate for building their own domain-specific languages.

What I actually built

I created the Fall-from-Grace language, which is my best attempt to approximate the Dhall alternative that most people were looking for.

Fall-from-Grace has:

  • JSON-compatible syntax

  • Bidirectional type-inference and type-checking

  • A JSON-compatible type system

  • Dhall-style filepath and URL imports

  • Fast interpretation

  • Open records and open unions

    a.k.a. row polymorphism and polymorphic variants

  • Universal quantification and existential quantification

    a.k.a. “generics” and “typed holes”

One way to think of Grace is like “Dhall but with better type inference and JSON syntax”. For example, here is the Grace equivalent of the tutorial example from dhall-lang.org:

# ./users.ffg
let makeUser = \user ->
      let home       = "/home/" + user
      let privateKey = home + "/.ssh/id_ed25519"
      let publicKey  = privateKey + ".pub"

      in  { home: home, privateKey: privateKey, publicKey: publicKey }

in  [ makeUser "bill"
    , makeUser "jane"
    ]

… which produces this result:

$ grace interpret ./users.ffg
[ { "home": "/home/bill"
  , "privateKey": "/home/bill/.ssh/id_ed25519"
  , "publicKey": "/home/bill/.ssh/id_ed25519.pub"
  }
, { "home": "/home/jane"
  , "privateKey": "/home/jane/.ssh/id_ed25519"
  , "publicKey": "/home/jane/.ssh/id_ed25519.pub"
  }
]

Just like Dhall, Grace supports let expressions and anonymous functions for simplifying repetitive expressions. However, there are two differences here compared to Dhall:

  • Grace uses JSON-like syntax for records (i.e. : instead of = to separate key-value pairs)

    Because Grace is a superset of JSON, you don’t need a separate Grace-to-JSON tool the way Dhall does. The result of interpreting Grace code is already valid JSON.

  • Grace has better type inference and doesn’t require annotating the type of the user function argument

    The interpreter can work backwards to infer that the user function argument must have type Text based on how user is used.

Another way to think of Grace is as “Jsonnet + types + type inference”. Grace and Jsonnet are both programmable configuration languages and they’re both supersets of JSON, but Grace has a type system whereas Jsonnet does not.

The following example Grace code illustrates this by fetching all post titles from the Haskell subreddit:

# ./reddit-haskell.ffg
let input
      = https://www.reddit.com/r/haskell.json
      : { data: { children: List { data: { title: Text, ? }, ? }, ? }, ? }

in  List/map (\child -> child.data.title) input.data.children

… which at the time of this writing produces the following result:

$ grace interpret ./reddit-haskell.ffg
[ "Monthly Hask Anything (September 2021)"
, "Big problems at the timezone database"
, "How can Haskell programmers tolerate Space Leaks?"
, "async graph traversal in haskell"
, "[ANN] Call For Volunteers: Join The New \"Our Foundation Task Force\""
, "Math lesson to be learned here?"
, "In search a functional job"
, "Integer coordinate from String"
, "New to Haskell"
, "Variable not in scope error"
, "HF Technical Track Elections - Announcements"
, "Learning Haskell by building a static blog generator"
, "Haskell Foundation Board meeting minutes 2021-09-23"
, "Haskell extension 1.7.0 VS Code crashing"
, "[question] Nix/Obelisk with cabal packages intended for hackage"
, "George Wilson - Cultivating an Engineering Dialect (YOW! Lambda Jam 2021)"
, "A new programming game, Swarm by Brent Yorgey"
, "Scheme in 48 hours, First chapter issue"
, "Recursively delete JSON keys"
, "Issue 282 :: Haskell Weekly newsletter"
, "Haskell"
, "Yaml parsing with preserved line numbers"
, "I would like some input on my code if anybody have time. I recently discovered that i can use variables in Haskell (thought one could not use them for some reason). Would just like some input on how i have done it."
, "Diehl's comments on Haskell numbers confuse..."
, "Please explain this syntax"
, "Can Haskell automatically fuse folds?"
]

Here we can see:

  • Dhall-style URL imports

    We can import JSON (or any Grace code) from a URL just by pasting the URL into our code

  • Any JSON expression (like haskell.json) is also a valid Grace expression that we can transform

  • We can give a partial type signature with holes (i.e. ?) specifying which parts we care about and which parts to ignore

Grace’s type system is even more sophisticated than that example lets on. For example, if you ask grace for the type of this anonymous function from the example:

\child -> child.data.title

… then the interpreter will infer the most general type possible without any assistance from the programmer:

forall (a : Type) .
forall (b : Fields) .
forall (c : Fields) .
  { data: { title: a, b }, c } -> a

This is an example of row-polymorphism (what Grace calls “open records”).

Grace also supports polymorphic variants (what Grace calls “open unions”), so you can wrap values of different types in constructors without having to declare any union type in advance:

[ GitHub
    { repository: "https://github.com/Gabriel439/Haskell-Turtle-Library.git"
    , revision: "ae5edf227b515b34c1cb6c89d9c58ea0eece12d5"
    }
, Local { path: "~/proj/optparse-applicative" }
, Local { path: "~/proj/discrimination" }
, Hackage { package: "lens", version: "4.15.4" }
, GitHub
    { repository: "https://github.com/haskell/text.git"
    , revision: "ccbfabedea1cf5b38ff19f37549feaf01225e537"
    }
, Local { path: "~/proj/servant-swagger" }
, Hackage { package: "aeson", version: "1.2.3.0" }
]

… and the interpreter automatically infers the most general type for that, too:

forall (a : Alternatives) .
  List
    < GitHub: { repository: Text, revision: Text }
    | Local: { path: Text }
    | Hackage: { package: Text, version: Text }
    | a
    >

Open records + open unions + type inference mean that the language does not require any data declarations to process input. The interpreter infers the shape of the data from how the data is used.

Conclusion

If you want to learn more about Grace then you should check out the README, which goes into more detail about the language features, and also includes a brief tutorial.

I also want to conclude by briefly mentioning some other secondary motivations for this project:

  • I want to cement Haskell’s status as the language of choice for implementing interpreters

    I gave a longer talk on this subject where I argue that Haskell can go mainstream by cornering the “market” for interpreted languages:

  • I hope to promote certain Haskell best practices for implementing interpreters

    Grace provides a model project for implementing an interpreted language in Haskell, including project organization, choice of dependencies, and choice of algorithms. Even if people choose to not fork Grace I hope that this project will provide some useful opinionated decisions to help get them going.

  • I mentor several people learning programming language theory and I wanted instructive example code to reference when teaching them

    One of the hardest parts about teaching programming language theory is that it’s hard to explain how to combine language features from more than one paper. Grace provides the complete picture by showing how to mix together multiple advanced features.

  • I wanted to prototype a few language features for Dhall’s language standard

    For example, I wanted to see how realistic it would be to standardize bidirectional type-checking for Dhall. I may write a follow-up post on what I learned regarding that.

  • I also wanted to prototype a few implementation techniques for the Haskell implementation of Dhall

    The Haskell implementation of Dhall is complicated enough that it’s hard to test drive some implementation improvements I had in mind, but Grace was small enough that I could learn and iterate on some ideas more quickly.

I don’t plan on building an ecosystem around Grace, although anybody who is interested in doing so can freely fork the language. Now that Grace is complete I plan on using what I learned from Grace to continue improving Dhall.

I also don’t plan on publishing Grace as a package to Hackage nor do I plan to provide an API for customizing the language behavior (other than forking the language). I view the project as an educational resource and not a package that people should depend on directly, because I don’t commit to supporting this project in production.

I do believe Grace can be forked to create a “Dhall 2.0”, but if that happens, such an effort will need to be led by somebody other than me. The main reason is that I’d rather preserve Grace as a teaching tool, and I don’t have the time to productionize Grace (productionizing Dhall was already a big lift for me).

However, I still might fork Grace in the future for other reasons (including a second attempt at implementing a type-checker for Nix).