## Friday, June 4, 2021

### Probability for Slay the Spire fanatics


I’m a huge fan of Slay the Spire and I wanted to share a small probability trick I’ve learned that has helped me improve my game by calculating the odds of “greedy” plays succeeding. This trick will likely benefit other card- or probability-based games, too, but the examples in this post are specific to Slay the Spire.

The general problem I was trying to solve is:

I have a deck containing `N` desirable cards out of `D` total cards. If I draw `H` cards from that deck, what is the chance that I draw (exactly / at least / at most) `M` desirable cards?

This sort of question comes up often in card games, including complex ones (like Slay the Spire) or simpler ones (like poker). Here are some concrete examples of how this relates to Slay the Spire:

• “Next turn I draw 5 cards. What is the likelihood that I draw at least 3 Strikes if there are 7 cards left in the deck, 4 of which are Strikes?”

(Answer: 5 / 7 ≈ 71% chance)

• “One Neow bonus lets me lose 7 max HP to select from 1 of 3 random rare cards. Right now I’m only interested in 6 of the 17 possible rare cards, so what is the chance that I random at least 1 of those 6?”

(Answer: 103 / 136 ≈ 76% chance)

The nice thing about Slay the Spire is that it’s a turn-based single-player game, so (unlike poker) you can take all the time you want when deciding your turn and nobody will rush you to hurry up. This means that I can safely geek out on these sorts of probability calculations to increase my likelihood of winning.

In this post I’ll first present the solution to the above probability question (both as a mathematical formula and as code) and then explain why the formula works.

#### The formula

Before presenting the solution to the above problem, I’d first like to reframe the problem using more precise set notation to avoid ambiguity:

Define `U` to be the universal set of all cards in the deck and then define two (potentially overlapping) subsets of those cards:

• `A`: The set of cards that are drawn
• `B`: The set of cards that are desirable

Let’s also define `¬` to mean set complement, such that:

• `¬A` means the set of cards that are not drawn
• `¬B` means the set of cards that are undesirable

Finally, we’ll use `∩` to denote set intersection, which we will use to define four non-overlapping (i.e. disjoint) sets:

```
            ¬B │       B
     ┌─────────┼─────────┐
  ¬A │ ¬A ∩ ¬B │ ¬A ∩  B │
 ────┼─────────┼─────────┤
   A │  A ∩ ¬B │  A ∩  B │
     └─────────┴─────────┘
```

… where:

• `A ∩ B` means the set of cards that are drawn and desirable
• `A ∩ ¬B` means the set of cards that are drawn, but undesirable
• `¬A ∩ B` means the set of cards that are desirable, but not drawn
• `¬A ∩ ¬B` means the set of cards that are neither desirable nor drawn

Then the problem becomes:

If the size of the sets `A` and `B` are fixed, what is the probability that their overlap (i.e. the set `A ∩ B`) has a given size?

I use this presentation to highlight that:

• The problem isn’t really specific to cards. It will work for arbitrary sets
• The problem isn’t specific to drawing cards or desirable cards. It works for any two subsets of cards
• The problem is symmetric with respect to A and B. If we swap the size of the two subsets of cards we expect to get the same answer

Using `|S|` to denote the size (i.e. cardinality) of a set S and using `!` to denote factorial, then the probability of two subsets of fixed sizes overlapping by a given size is:

```
                     |A|! × |¬A|! × |B|! × |¬B|!
p = ────────────────────────────────────────────────────
    |A ∩ B|! × |A ∩ ¬B|! × |¬A ∩ B|! × |¬A ∩ ¬B|! × |U|!
```

With this solution in hand, we can solve our original problem by just renaming the variables:

I have a deck containing `|B|` desirable cards out of `|U|` total cards. If I draw `|A|` cards from that deck, what is the chance that I draw (exactly / at least / at most) `|A ∩ B|` desirable cards?

We already have the solution for the case where we draw exactly `|A ∩ B|` desirable cards. From that we can compute the solutions for drawing at most or at least `|A ∩ B|` desirable cards.
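To make the formula concrete, here is a quick sanity check against the Strike scenario from the introduction (a standalone sketch, separate from the code in the next section): drawing at least 3 Strikes means drawing exactly 3 or exactly 4 Strikes, and plugging the disjoint set sizes into the formula for each case recovers the quoted 5 / 7:

```haskell
import Data.Ratio ((%))

factorial :: Integer -> Integer
factorial n = product [1..n]

-- Exactly 3 Strikes: |A ∩ B| = 3, |A ∩ ¬B| = 2, |¬A ∩ B| = 1, |¬A ∩ ¬B| = 1
-- … so |A| = 5, |¬A| = 2, |B| = 4, |¬B| = 3, and |U| = 7
p3 :: Rational
p3 = product (map factorial [5, 2, 4, 3]) % product (map factorial [3, 2, 1, 1, 7])

-- Exactly 4 Strikes: |A ∩ B| = 4, |A ∩ ¬B| = 1, |¬A ∩ B| = 0, |¬A ∩ ¬B| = 2
p4 :: Rational
p4 = product (map factorial [5, 2, 4, 3]) % product (map factorial [4, 1, 0, 2, 7])

main :: IO ()
main = print (p3 + p4)  -- prints 5 % 7
```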

#### The code

In case the formula alone is hard to follow, this section translates the above solution into Haskell code.

Note that even though the previous formula depends on the sizes of nine sets:

• `|U|`
• `|A|`
• `|¬A|`
• `|B|`
• `|¬B|`
• `|A ∩ B|`
• `|A ∩ ¬B|`
• `|¬A ∩ B|`
• `|¬A ∩ ¬B|`

… there are really only four degrees of freedom, because the sizes of the following four disjoint sets:

• `|A ∩ B|`
• `|A ∩ ¬B|`
• `|¬A ∩ B|`
• `|¬A ∩ ¬B|`

… uniquely determine the sizes of the other sets, because:

```
| A| = | A ∩  B| + | A ∩ ¬B|
|¬A| = |¬A ∩  B| + |¬A ∩ ¬B|
| B| = | A ∩  B| + |¬A ∩  B|
|¬B| = | A ∩ ¬B| + |¬A ∩ ¬B|
| U| = | A ∩  B| + | A ∩ ¬B| + |¬A ∩  B| + |¬A ∩ ¬B|
```

… so each function in our API will only take four function arguments, corresponding to the size of those four disjoint sets. There are other ways we could define the API that use a different four degrees of freedom, but I find this interface to be the simplest and most symmetric.

```haskell
module Probability where

import Numeric.Natural (Natural)

-- Use exact Rational arithmetic by default instead of floating point arithmetic
default (Natural, Rational)

-- Obligatory: http://www.willamette.edu/~fruehr/haskell/evolution.html
factorial :: (Enum n, Num n) => n -> n
factorial n = product [1..n]

{-| The probability that two sets of sizes @|A|@ and @|B|@ overlap by a set of
    exactly size @|A ∩ B|@

prop> exactly 1 0 0 a === 1 % (a + 1)
prop> exactly a b 0 0 === 1
prop> exactly a 0 b 0 === 1
prop> exactly a b c d === exactly a c b d
-}
exactly
    :: (Enum n, Fractional n)
    => Natural
    -- ^ The size of @|A ∩ B|@
    -> Natural
    -- ^ The size of @|A ∩ ¬B|@
    -> Natural
    -- ^ The size of @|¬A ∩ B|@
    -> Natural
    -- ^ The size of @|¬A ∩ ¬B|@
    -> n
exactly _AB _AnotB _BnotA notAB =
    fromIntegral numerator / fromIntegral denominator
  where
    _A = _AB + _AnotB
    _B = _AB + _BnotA

    notB = _AnotB + notAB
    notA = _BnotA + notAB

    _U = _AB + _AnotB + _BnotA + notAB

    numerator = product (map factorial [ _A, notA, _B, notB ])

    denominator = product (map factorial [ _AB, _AnotB, _BnotA, notAB, _U ])

{-| The probability that two sets of sizes @|A|@ and @|B|@ overlap by a set of
    at least size @|A ∩ B|@

prop> atLeast 0 a b c === 1
prop> atLeast a b c d === atLeast a c b d
-}
atLeast
    :: (Enum n, Fractional n)
    => Natural
    -- ^ The minimum size of @|A ∩ B|@
    -> Natural
    -- ^ The maximum size of @|A ∩ ¬B|@
    -> Natural
    -- ^ The maximum size of @|¬A ∩ B|@
    -> Natural
    -- ^ The minimum size of @|¬A ∩ ¬B|@
    -> n
atLeast _AB _AnotB _BnotA notAB = sum (map overshootBy [0..(min _AnotB _BnotA)])
  where
    overshootBy x = exactly (_AB + x) (_AnotB - x) (_BnotA - x) (notAB + x)

{-| The probability that two sets of sizes @|A|@ and @|B|@ overlap by a set of
    at most size @|A ∩ B|@

prop> atMost 0 a b c === exactly 0 a b c
prop> atMost a 0 b c === 1
prop> atMost a b 0 c === 1
prop> atMost a b c d === atMost a c b d
prop> atMost a b c d + atLeast a b c d === 1 + exactly a b c d
-}
atMost
    :: (Enum n, Fractional n)
    => Natural
    -- ^ The maximum size of @|A ∩ B|@
    -> Natural
    -- ^ The minimum size of @|A ∩ ¬B|@
    -> Natural
    -- ^ The minimum size of @|¬A ∩ B|@
    -> Natural
    -- ^ The maximum size of @|¬A ∩ ¬B|@
    -> n
atMost _AB _AnotB _BnotA notAB = sum (map undershootBy [0..(min _AB notAB)])
  where
    undershootBy x = exactly (_AB - x) (_AnotB + x) (_BnotA + x) (notAB - x)
```

Exercise: If you don’t use Haskell, try to port the above functions to your favorite programming language.

#### Examples

Let’s test drive the above utilities on the example scenarios from Slay the Spire.

• “Next turn I draw 5 cards. What is the likelihood that I draw at least 3 Strikes if there are 7 cards left in the deck, 4 of which are Strikes?”

Here, we will be using the `atLeast` function with the following input sizes for each disjoint set:

• `| A ∩ B| = 3` - We wish to draw at least 3 Strike cards
• `| A ∩ ¬B| = 2` - We wish to draw at most 2 non-Strike cards
• `|¬A ∩ B| = 1` - We wish to leave at most 1 Strike card in the deck
• `|¬A ∩ ¬B| = 1` - We wish to leave at least 1 non-Strike card in the deck

… which gives us a 5 in 7 chance:

```
>>> atLeast 3 2 1 1
5 % 7
```

Exercise: Use the `atMost` function to compute the chance of falling short by drawing at most 2 Strikes.

• “One Neow bonus lets me lose 7 max HP to select from 1 of 3 random rare cards. Right now I’m only interested in 6 of the 17 possible rare cards, so what is the chance that I random at least 1 of those 6?”

This will also use the `atLeast` function with the following input sizes for each disjoint set:

• `| A ∩ B| = 1` - We wish to draw at least 1 desired rare card
• `| A ∩ ¬B| = 2` - We wish to draw at most 2 undesirable rare cards
• `|¬A ∩ B| = 5` - We wish to leave at most 5 desirable cards in the pool
• `|¬A ∩ ¬B| = 9` - We wish to leave at least 9 undesirable cards in the pool

… which gives us a 103 in 136 chance:

```
>>> atLeast 1 2 5 9
103 % 136
```

Exercise: Use the `exactly` function to compute the chance of randoming 0 desirable cards.

#### Explanation

This section provides a semi-rigorous explanation of why the formula works, although probably not rigorous enough to be called a proof.

The probability of drawing a correct hand is:

```
    {The number of correct hands}
p = ──────────────────────────────
    {The number of possible hands}
```

The number of possible hands is the number of ways that we can draw a hand of size `|A|` (without replacement) from a pool of `|U|` cards (our deck):

``{The number of possible hands} = |U| choose |A|``

Similarly, the number of correct hands is the product of:

• the number of ways to draw `|A ∩ B|` desirable cards from a pool of `|B|` desirable cards and
• the number of ways to draw `|A ∩ ¬B|` undesirable cards from a pool of `|¬B|` undesirable cards

``{The number of correct hands} = (|B| choose |A ∩ B|) × (|¬B| choose |A ∩ ¬B|)``

Putting that together gives:

```
    (|B| choose |A ∩ B|) × (|¬B| choose |A ∩ ¬B|)
p = ─────────────────────────────────────────────
                  |U| choose |A|
```

Then we can use the `n` choose `k` formula for computing the number of ways to select `k` cards from a pool of `n` cards without replacement, which gives us:

```
            |B|!                   |¬B|!
    ──────────────────── × ──────────────────────
    |A ∩ B|! × |¬A ∩ B|!   |A ∩ ¬B|! × |¬A ∩ ¬B|!
p = ──────────────────────────────────────────────
                        |U|!
                    ────────────
                    |A|! × |¬A|!
```

… and then if we simplify that we get our original formula:

```
                     |A|! × |¬A|! × |B|! × |¬B|!
p = ────────────────────────────────────────────────────
    |A ∩ B|! × |A ∩ ¬B|! × |¬A ∩ B|! × |¬A ∩ ¬B|! × |U|!
```
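As a consistency check on that simplification, the `choose`-based expression and the factored formula can be compared numerically (a standalone sketch using the Strike example from earlier, with `choose` defined from factorials):

```haskell
import Data.Ratio ((%))

factorial :: Integer -> Integer
factorial n = product [1..n]

-- The number of ways to select k items from a pool of n, without replacement
choose :: Integer -> Integer -> Integer
choose n k = factorial n `div` (factorial k * factorial (n - k))

-- Strike example: |U| = 7, |A| = 5, |B| = 4, |A ∩ B| = 3, |A ∩ ¬B| = 2
viaChoose, viaFactorials :: Rational
viaChoose     = ((4 `choose` 3) * (3 `choose` 2)) % (7 `choose` 5)
viaFactorials = product (map factorial [5, 2, 4, 3]) % product (map factorial [3, 2, 1, 1, 7])

main :: IO ()
main = print (viaChoose == viaFactorials)  -- prints True
```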

#### Conclusion

Unfortunately, this formula is quite difficult to compute in one’s head, so I usually load these utility functions into a Haskell REPL to do the math on tricky turns. I haven’t yet figured out an easy heuristic for getting a quick and approximate answer for this sort of probability calculation.

However, if you’re willing to take the time to compute the odds, these calculations can meaningfully improve your chances of winning a Slay the Spire run. I often use this trick to compute the expected value of greedy plays, which can save quite a bit of health or potions over the course of a run.

## Wednesday, May 19, 2021

### Module organization guidelines for Haskell projects


This post collects a random assortment of guidelines I commonly share for how to organize Haskell projects.

#### Organize modules “vertically”, not “horizontally”

The glib summary of this rule is: don’t create a “Types” or “Constants” module.

“Vertically” organized modules are modules that group related functionality within the same module. For example, vertically-oriented modules for a simple interpreter might be:

• A `Syntax` module

… which contains the concrete syntax tree type and utilities for traversing, viewing, or editing that tree.

• A `Parsing` module

… which contains (or imports/re-exports) the parser type, a parser for the syntax tree, and error messages specific to parsing.

• An `Infer` module

… which contains the type inference logic, exception types, and error messages specific to type-checking.

• An `Evaluation` module

… which contains the logic for evaluating an expression, including possibly a separate data structure for a fully-evaluated abstract syntax tree.

• A `Pretty` module

… which specifies how to pretty-print or otherwise format expressions for display.
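As a concrete sketch, a source tree for a vertically-organized interpreter along these lines might look like this (the module names come from the list above; the layout itself is hypothetical):

```
src/
├── Syntax.hs      -- concrete syntax tree and traversals
├── Parsing.hs     -- parser and parse error messages
├── Infer.hs       -- type inference and type errors
├── Evaluation.hs  -- evaluator
└── Pretty.hs      -- pretty-printing
```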

“Horizontally” organized modules mean that you organize code into modules based on which language features or imports the code relies upon. For example, horizontally-oriented modules for the same interpreter might be:

• A `Types` module

… which contains almost all types, including the concrete syntax tree, abstract syntax tree, parsing-related types, and exception types.

• A `Lib` module

… which contains almost all functions, including the parsers, type inference code, evaluator, and prettyprinter.

• A `Constants` module

… which contains almost all constants (including all error messages, timeouts, and help text).

• An `App` module

… which contains the `main` entry point for the program.

There are a few reasons I prefer vertical module organization over horizontal module organization:

• Vertically-organized modules are easier to split into smaller packages

For example, in a vertically-organized project I could separate out the `Syntax`, `Parsing`, and `Pretty` modules into a separate standalone package, which could be used by other people to create an automatic code formatter for my language without having to depend on the type-checking or evaluation logic.

Conversely, there would be little benefit in separating out a `Types` module from the equivalent horizontally-organized package. Typically, horizontal modules are so tightly coupled to one another that no subset of the modules is useful in isolation.

The separability of vertical modules is an even bigger feature for proprietary projects that aspire to eventually open source their work. Vertically-organized projects are easier to open source a few modules at a time while keeping the proprietary pieces internal.

• Vertically-organized modules tend to promote more granular and incremental build graphs

In a horizontally-organized project, each new type you add to the `Types` module forces a rebuild of the entire package. In a vertically-organized project, if I completely rewrite the type-checking logic then only other modules that depend on type-checking will rebuild (and typically very few depend on type-checking).

• Vertically-organized modules tend to group related changes

A common issue in a horizontally-organized project is that every change touches almost every module, making new contributions harder and making related functionality more difficult to discover. In a vertically-organized project related changes tend to fall within the same module.

#### Naming conventions

I like to use the convention that the default module to import is the same as the package name, except replacing `-` with `.` and capitalizing words.

For example, if your package name is `foo-bar-baz`, then the default module the user imports to use your package is `Foo.Bar.Baz`.

Packages following this module naming convention typically do not have module names beginning with `Control.` or `Data.` prefixes (unless the package name happens to begin with a `control-` or `data-` prefix).

There are a few reasons I suggest this convention:

• Users can easily infer which module to import from the package name

• It tends to lead to shorter module names

For example, the `prettyprinter` package recently switched to this idiom, which changed the default import from `Data.Text.Prettyprint.Doc` to `Prettyprinter`.

• It reduces module naming clashes between packages

The reasoning is that you are already claiming global namespace when naming a package, so why not also globally reserve the module of the same name, too?

However, this won’t completely eliminate the potential for name clashes for other non-default modules that your package exports.

#### The “God” library stanza

This is a tip for proprietary projects only: put all of your project’s code into one giant library stanza in your `.cabal` file, including the entrypoint logic (like your command-line interface), tests, and benchmarks. Then every other stanza in the `.cabal` file (i.e. the executables, test suites, and benchmarks) should just be a thin wrapper around something exported from one of your “library” modules.

For example, suppose that your package is named `foo` which builds an executable named `bar`. Your `foo.cabal` file would look like this (with only the relevant parts shown):

```
name: foo
…

library
    hs-source-dirs: src
    exposed-modules:
        Foo.Bar
    …

executable bar
    hs-source-dirs: bar
    main-is: Main.hs
    …
```

… where `src/Foo/Bar.hs` would look like this:

```haskell
-- ./src/Foo/Bar.hs
module Foo.Bar where

…

main :: IO ()
main = … -- Your real `main` goes here
```

… and `bar/Main.hs` is a trivial wrapper around `Foo.Bar.main`:

```haskell
-- ./bar/Main.hs

module Main where

import qualified Foo.Bar

main :: IO ()
main = Foo.Bar.main
```

This tip specifically works around `cabal repl`’s poor support for handling changes that span multiple project stanzas (both `cabal v1-repl` and `cabal v2-repl` appear to have the problem).

To illustrate the issue, suppose that you use `cabal repl` to load the executable logic for the project like this:

```
$ cabal repl exe:bar
…
*Main>
```

Now if you change the `Foo.Bar` module and `:reload` the REPL, the REPL will not reflect the changes you made. This is a pain whenever you need to test changes that span your library and an executable, your library and a test suite, or your library and a benchmark.

Also, if you load an executable / test suite / benchmark into the REPL that depends on a separate library stanza then `cabal repl` will force you to generate object code for the library stanza, which is slow. Contrast that with using `cabal repl` to only load the library stanza, which will be faster because it won’t generate object code.

Moreover, `ghcid` uses `cabal repl` to power its fast type-checking loop, which means that `ghcid` also does not work well if you need to quickly switch between changes to the library stanza and other project stanzas.

The fix to all of these problems is: put all of your project’s logic into the library stanza and use only the library stanza as the basis for your interactive development. Everything else (your executables, your test suites, and your benchmarks) is just a trivial wrapper around something exported from the library.

I don’t recommend this solution for open source projects, though. If you do this for a public package then your end users will hate you because your package’s library section will depend on test packages or benchmarking packages that can’t be disabled. In contrast, proprietary codebases rarely care about gratuitous dependencies (in my experience).

#### Conclusion

Those are all of the tips I can think of at the moment. Leave a comment if you think I missed another common practice.

## Wednesday, May 5, 2021

### The trick to avoid deeply-nested error-handling code


This post documents a small trick that I use to avoid deeply-nested error-handling code. This trick is a common piece of Haskell folklore that many people either learn from others or figure out on their own, but I’m not sure what the official name of this trick is (so I had difficulty searching for prior art explaining this trick). However, I’ve taught this trick to others enough times that I think it merits a blog post of its own.

This post assumes some familiarity with Haskell’s `Either` type and `do` notation, but the Appendix at the end of the post will walk through all of the details using equational reasoning if you’re having trouble following along with how things work.

#### The motivating example

Let’s begin from the following contrived `Either`-based example that uses deeply nested error-handling:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

example :: String -> String -> Either String Person
example ageString aliveString = do
    case readMaybe ageString of
        Nothing -> do
            Left "Invalid age string"

        Just age -> do
            if age < 0
                then do
                    Left "Negative age"

                else do
                    case readMaybe aliveString of
                        Nothing -> do
                            Left "Invalid alive string"

                        Just alive -> do
                            return Person{ age, alive }
```

… which we can use like this:

```
>>> example "24" "True"
Right (Person {age = 24, alive = True})

>>> example "24" "true"
Left "Invalid alive string"

>>> example "" "True"
Left "Invalid age string"

>>> example "-5" "True"
Left "Negative age"
```

Notice how the above coding style increases the nesting / indentation level every time we parse or validate the input in some way. We could conserve indentation by using 2-space indents or indenting only once for each level of nesting:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

example :: String -> String -> Either String Person
example ageString aliveString = case readMaybe ageString of
    Nothing  -> Left "Invalid age string"
    Just age -> if age < 0
        then Left "Negative age"
        else case readMaybe aliveString of
            Nothing    -> Left "Invalid alive string"
            Just alive -> return Person{ age, alive }
```

However, I think most people writing code like that would prefer to keep the indentation level stable, no matter how many validations the code requires.

#### The trick

Fortunately, we can avoid nesting the code with the following change:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

example :: String -> String -> Either String Person
example ageString aliveString = do
    age <- case readMaybe ageString of
        Nothing  -> Left "Invalid age string"
        Just age -> return age

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe aliveString of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }
```

Here we make use of several useful properties:

• `return` in Haskell does not short-circuit or exit from the surrounding code

In fact, some Haskell programmers prefer to use `pure` (a synonym for `return`) to better convey the absence of short-circuiting behavior:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

example :: String -> String -> Either String Person
example ageString aliveString = do
    age <- case readMaybe ageString of
        Nothing  -> Left "Invalid age string"
        Just age -> pure age

    if age < 0
        then Left "Negative age"
        else pure ()

    alive <- case readMaybe aliveString of
        Nothing    -> Left "Invalid alive string"
        Just alive -> pure alive

    pure Person{ age, alive }
```
• `Left` does short-circuit from the surrounding code

In fact, we can define a synonym for `Left` named `throw` if we want to better convey the presence of short-circuiting behavior:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

example :: String -> String -> Either String Person
example ageString aliveString = do
    age <- case readMaybe ageString of
        Nothing  -> throw "Invalid age string"
        Just age -> pure age

    if age < 0
        then throw "Negative age"
        else pure ()

    alive <- case readMaybe aliveString of
        Nothing    -> throw "Invalid alive string"
        Just alive -> pure alive

    pure Person{ age, alive }

throw :: String -> Either String a
throw = Left
```
• `Left` / `throw` can “return” any type of value

If you look at the type of `throw`, the “return” type is a polymorphic type `a` because `throw` short-circuits (and therefore doesn’t need to return a real value of that type).

This is why the type checker doesn’t complain when we do this:

```haskell
age <- case readMaybe ageString of
    Nothing  -> throw "Invalid age string"
    Just age -> pure age
```

… because both branches of the `case` expression type-check as an expression that “return”s an `Int`. The left branch type-checks as a branch that returns an `Int` because it cheats and never needs to return anything and the right branch returns a bona-fide `Int` (the `age`).

We can make this explicit by giving both branches of the `case` expression an explicit type annotation:

```haskell
age <- case readMaybe ageString of
    Nothing  -> throw "Invalid age string" :: Either String Int
    Just age -> pure age                   :: Either String Int
```

This means that both branches of a `case` expression will always share the same return type if at least one branch is a `Left` / `throw`. Similarly, both branches of an `if` expression will share the same return type if at least one branch is a `Left` / `throw`:

```haskell
if age < 0
    then throw "Negative age" :: Either String ()
    else pure ()              :: Either String ()
```
• We can return a value from a `case` expression

New Haskell programmers might not realize that `case` expressions can return a value (just like any other expression), which means that their result can be stored as a new value within the surrounding scope:

```haskell
age <- case readMaybe ageString of
    Nothing  -> throw "Invalid age string" :: Either String Int
    Just age -> pure age                   :: Either String Int
```

The type-checker doesn’t mind that only the second branch returns a valid `age` because it knows that the outer `age` is unreachable if the first branch short-circuits. This understanding is not built into the compiler, but is rather an emergent property of how `do` notation works for `Either`. See the Appendix for a fully-worked equational reasoning example showing why this works.

#### Combinators

You can build upon this trick by defining helpful utility functions to simplify things further. For example, I sometimes like to define the following helper function:

```haskell
orDie :: Maybe a -> String -> Either String a
Just a  `orDie` _      = return a
Nothing `orDie` string = Left string

{- Equivalent, more explicit, implementation:

maybe `orDie` string =
    case maybe of
        Nothing -> Left string
        Just a  -> return a
-}
```

… which you can use like this:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

orDie :: Maybe a -> String -> Either String a
Just a  `orDie` _      = Right a
Nothing `orDie` string = Left string

example :: String -> String -> Either String Person
example ageString aliveString = do
    age <- readMaybe ageString `orDie` "Invalid age string"

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- readMaybe aliveString `orDie` "Invalid alive string"

    return Person{ age, alive }
```

… which is even more clear (in my opinion).

#### Conclusion

That is the entirety of the trick. You can return values from `case` expressions to avoid deeply-nesting your `Either` code, or you can define utility functions (such as `orDie`) which do essentially the same thing.

This trick applies equally well to any other `Monad` that supports some notion of short-circuiting on failure, such as `ExceptT` (from `transformers` / `mtl`). The only essential ingredient is some `throw`-like function that short-circuits and therefore has a polymorphic return type.
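For example, here is a sketch of the same `example` function ported to `ExceptT String IO` (this assumes the `transformers` package, where `throwE` plays the role of `throw`):

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Text.Read (readMaybe)

data Person = Person { age :: Int, alive :: Bool } deriving (Show)

example :: String -> String -> ExceptT String IO Person
example ageString aliveString = do
    age <- case readMaybe ageString of
        Nothing  -> throwE "Invalid age string"  -- short-circuits, like `Left`
        Just age -> pure age

    if age < 0
        then throwE "Negative age"
        else pure ()

    alive <- case readMaybe aliveString of
        Nothing    -> throwE "Invalid alive string"
        Just alive -> pure alive

    pure Person{ age, alive }

main :: IO ()
main = do
    result <- runExceptT (example "24" "True")
    print result  -- prints Right (Person {age = 24, alive = True})
```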

#### Appendix

You can build a better intuition for why this works using equational reasoning. We’ll begin from an example usage of our function and at each step of the process we will either desugar the code or substitute one subexpression with another equal subexpression.

In all of the examples, we’ll begin from this definition for the `example` function:

```haskell
example :: String -> String -> Either String Person
example ageString aliveString = do
    age <- case readMaybe ageString of
        Nothing  -> Left "Invalid age string"
        Just age -> return age

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe aliveString of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }
```

Let’s first begin with the example that fails the most quickly:

```haskell
example "" "True"

-- Substitute `example` with its definition:
--
-- … replacing `ageString` with `""`
--
-- … replacing `aliveString` with `"True"`
do  age <- case readMaybe "" of
        Nothing  -> Left "Invalid age string"
        Just age -> return age

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- Definition of `readMaybe` (not shown):
--
-- (readMaybe "" :: Maybe Int) = Nothing
do  age <- case Nothing of
        Nothing  -> Left "Invalid age string"
        Just age -> return age

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- Simplify the first `case` expression
do  age <- Left "Invalid age string"

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- Desugar `do` notation one step
--
-- (do age <- m; n) = (m >>= \age -> n)
Left "Invalid age string" >>= \age ->
    do  if age < 0
            then Left "Negative age"
            else return ()

        alive <- case readMaybe "True" of
            Nothing    -> Left "Invalid alive string"
            Just alive -> return alive

        return Person{ age, alive }

-- Definition of `>>=` for `Either`
--
-- (Left x >>= _) = (Left x)
Left "Invalid age string"
```

… and we’re done! The key step is the last bit where we simplify `(Left … >>= _)` to `(Left …)`, which is how `Either` short-circuits on failure. Because this simplification does not use the downstream code everything type-checks just fine even though we never supply a valid `age`.

For completeness, let’s also walk through the example where everything succeeds:

```haskell
example "24" "True"

-- Substitute `example` with its definition:
--
-- … replacing `ageString` with `"24"`
--
-- … replacing `aliveString` with `"True"`
do  age <- case readMaybe "24" of
        Nothing  -> Left "Invalid age string"
        Just age -> return age

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- Definition of `readMaybe` (not shown):
--
-- (readMaybe "24" :: Maybe Int) = (Just 24)
do  age <- case Just 24 of
        Nothing  -> Left "Invalid age string"
        Just age -> return age

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- Simplify the first `case` expression
do  age <- return 24

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- return = Right
do  age <- Right 24

    if age < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age, alive }

-- Desugar `do` notation one step
--
-- (do age <- m; n) = (m >>= \age -> n)
Right 24 >>= \age ->
    do  if age < 0
            then Left "Negative age"
            else return ()

        alive <- case readMaybe "True" of
            Nothing    -> Left "Invalid alive string"
            Just alive -> return alive

        return Person{ age, alive }

-- Definition of `>>=` for `Either`
--
-- (Right x >>= f) = (f x)
--
-- This means that we substitute `age` with `24`
do  if 24 < 0
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- (24 < 0) = False
do  if False
        then Left "Negative age"
        else return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- (if False then l else r) = r
do  return ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- return = Right
do  Right ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- (do m; n) = (do _ <- m; n)
do  _ <- Right ()

    alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- Desugar `do` notation one step
--
-- (do _ <- m; n) = (m >>= \_ -> n)
Right () >>= \_ ->
    do  alive <- case readMaybe "True" of
            Nothing    -> Left "Invalid alive string"
            Just alive -> return alive

        return Person{ age = 24, alive }

-- Definition of `>>=` for `Either`
--
-- (Right x >>= f) = (f x)
--
-- This means that we substitute `_` (unused) with `()`
do  alive <- case readMaybe "True" of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- Definition of `readMaybe` (not shown):
--
-- (readMaybe "True" :: Maybe Bool) = (Just True)
do  alive <- case Just True of
        Nothing    -> Left "Invalid alive string"
        Just alive -> return alive

    return Person{ age = 24, alive }

-- Simplify the `case` expression
do  alive <- return True

    return Person{ age = 24, alive }

-- return = Right
do  alive <- Right True

    return Person{ age = 24, alive }

-- Desugar `do` notation one step
--
-- (do alive <- m; n) = (m >>= \alive -> n)
Right True >>= \alive ->
    do  return Person{ age = 24, alive }

-- Definition of `>>=` for `Either`
--
-- (Right x >>= f) = (f x)
--
-- This means that we substitute `alive` with `True`
do  return Person{ age = 24, alive = True }

-- Desugar `do` notation
--
-- do m = m
return Person{ age = 24, alive = True }

-- return = Right
Right Person{ age = 24, alive = True }
```

As an exercise, you can try to interpolate between those two walkthroughs to reason through what happens when we evaluate the remaining two examples, which fail somewhere in between:

```
>>> example "24" "true"

>>> example "-5" "True"
```