Thursday, August 16, 2018

NixOS in production


This is a short post summarizing what I wish I had known when I first started using NixOS in production. Hopefully other people will find this helpful to avoid the pitfalls that I ran into.

The main issue with NixOS is that the manual recommends a workflow that is not suitable for deployment to production. Specifically, the manual encourages you to:

  • keep Nix source code on the destination machine
    • i.e. /etc/nixos/{hardware-,}configuration.nix
  • build on the destination machine
  • use Nix’s channel system to obtain nixpkgs code

This post will describe how you can instead:

  • build a source-free binary closure on a build machine
  • transfer and deploy that binary closure to a separate destination machine

This guide overlaps substantially with what the NixOps tool does; you can think of it as a way to transplant a limited subset of what NixOps does to work with other provisioning tools (such as Terraform).

You might also find this guide useful even when using NixOS as a desktop operating system for handling more exotic scenarios not covered by the nixos-rebuild command-line tool.

Building the closure

We’ll build up to the final solution by slowly changing the workflow recommended in the NixOS manual.

Suppose that you already have an /etc/nixos/configuration.nix file and you use nixos-rebuild switch to deploy your system. You can wean yourself off of nixos-rebuild by building the binary closure for the system yourself. In other words, you can reimplement the nixos-rebuild build command from scratch.

To do so, create the following file anywhere on your system:
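
A minimal nixos.nix along these lines should work (a sketch, assuming an x86_64-linux machine and the conventional /etc/nixos/configuration.nix location; exposing config as well will come in handy later for querying options):

let
  nixos = import <nixpkgs/nixos> {
    system = "x86_64-linux";

    configuration = import /etc/nixos/configuration.nix;
  };

in
  { inherit (nixos) system config;
  }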

Then run:
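
Assuming you saved the sketch above as nixos.nix, something like:

$ nix-build --attr system nixos.nix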

Congratulations, you’ve just built a binary closure for your system!

Deploying the closure

nix-build deposits a result symlink in the current directory pointing to the built binary closure. To deploy the current system to match the built closure, run:
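
Something along these lines (the switch subcommand is explained below):

$ sudo ./result/bin/switch-to-configuration switch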

Congratulations, you’ve just done the equivalent of nixos-rebuild switch!

As the above command suggests, the closure contains a ./bin/switch-to-configuration script which understands a subset of the commands that nixos-rebuild does. In particular, the switch-to-configuration script accepts these commands:

$ ./result/bin/switch-to-configuration --help
Usage: ../../result/bin/switch-to-configuration [switch|boot|test]

switch:       make the configuration the boot default and activate now
boot:         make the configuration the boot default
test:         activate the configuration, but don't make it the boot default
dry-activate: show what would be done if this configuration were activated

Adding a profile

The nixos-rebuild command actually does one more thing in addition to building the binary closure and deploying the system: it also creates a symlink pointing to the current system configuration so that you can roll back to that configuration later. The symlink also acts as a garbage collection root, preventing the system from being garbage collected until you remove the symlink (either directly using rm or via a higher-level utility such as nix-collect-garbage).

You can record the system configuration in the same way as nixos-rebuild using the nix-env command:
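
For example (using the same system profile path that nixos-rebuild uses):

$ sudo nix-env --profile /nix/var/nix/profiles/system --set ./result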

Querying system options

You can use the same nixos.nix file to query what options you’ve set for your system, just like the nixos-option utility. For example, if you want to compute the final value of the networking.firewall.allowedTCPPorts option then you run this command:
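
For example, with the nixos.nix sketch above (which exposes config), something along these lines should work:

$ nix-instantiate --eval --strict \
    --expr '(import ./nixos.nix).config.networking.firewall.allowedTCPPorts'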

Pinning nixpkgs

Now that you’ve taken control of the build you can do fancier things, like pinning nixpkgs to a specific revision so that you no longer need to use channels or the NIX_PATH:
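
For example, here is a sketch using builtins.fetchTarball, where <revision> and <sha256> are placeholders for the nixpkgs commit you want to pin and the hash of its tarball:

let
  nixpkgs = builtins.fetchTarball {
    url    = "https://github.com/NixOS/nixpkgs/archive/<revision>.tar.gz";
    sha256 = "<sha256>";
  };

  nixos = import "${nixpkgs}/nixos" {
    system = "x86_64-linux";

    configuration = import /etc/nixos/configuration.nix;
  };

in
  { inherit (nixos) system config;
  }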

In fact, this makes your build completely insensitive to the NIX_PATH, eliminating a potential source of non-determinism from the build.

Building remotely

Now that you’ve removed nixos-rebuild from the equation you can build the binary closure on a separate machine from the one that you deploy to. You can check your nixos.nix, configuration.nix and hardware-configuration.nix files into version control and nix-build the system on any machine that can check out your version controlled Nix configuration. All you have to do is change the import path to be a relative path to the configuration.nix file within the same repository instead of an absolute path:
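
In terms of the nixos.nix sketch above, this is a one-line change (assuming configuration.nix lives alongside nixos.nix in the repository):

    configuration = import ./configuration.nix;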

Then all your build machine has to do is:

$ git clone "git@github.com:${OWNER}/${REPOSITORY}.git"
$ nix-build --attr system "./${REPOSITORY}/path/to/nixos.nix"

To deploy the built binary closure to another machine, use nix copy. If you have SSH access to the destination machine then you can nix copy directly to that machine:
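
Something like this should work, where ${USER} and ${HOSTNAME} are placeholders for your SSH login to the destination machine:

$ nix copy --to ssh://"${USER}@${HOSTNAME}" ./result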

If you do not have SSH access to the destination machine you can instead use nix copy to create a binary archive containing the binary closure of your system:
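
For example (strictly speaking, the file:// store at /tmp/system is a binary cache directory rather than a single file, so you may want to tar it before uploading):

$ nix copy --to file:///tmp/system ./result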

… and upload the binary archive located at /tmp/system to the destination machine using your upload method of choice. Then import the binary archive into the /nix/store on the destination machine using nix copy:
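
Something along these lines should work (you can find the /nix/store path of your closure by running readlink ./result on the build machine):

$ nix copy --from file:///tmp/system /nix/store/...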

Once the binary closure is on the machine, you install the closure the same way as before:
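
That is, something like:

$ sudo nix-env --profile /nix/var/nix/profiles/system --set /nix/store/...
$ sudo /nix/store/.../bin/switch-to-configuration switch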

… replacing /nix/store/... with the /nix/store path of your closure (since there is no result symlink on the destination machine).

Conclusion

That’s it! Now you should be able to store your NixOS configuration in version control, build a binary closure as part of continuous integration, and deploy that binary closure to a separate destination machine. You can also now pin your build to a specific revision of Nixpkgs so that your build is more deterministic.

I wanted to credit my teammate Parnell Springmeyer who taught me the ./result/bin/switch-to-configuration trick for deploying a NixOS system and who codified the trick into the nix-deploy command-line tool. I also wanted to credit Remy Goldschmidt who interned on our team over the previous summer and taught me how to reliably pin Nixpkgs.

Monday, May 21, 2018

How I evaluate Haskell packages

This post summarizes the rough heuristics that I use for evaluating Haskell packages. Usually I use these rules of thumb when:

  • choosing between multiple similar packages
  • deciding whether to depend on a package at all

Some of these guidelines work for other programming languages, too, but some of them are unique to Haskell. Even if you are a veteran programmer in another language you still might find some useful tidbits here when approaching the Haskell ecosystem for the first time.

I've organized the metrics I use into five categories:

  • "Large positive" - Things that impress me
  • "Small positive" - Positive features that make me more likely to adopt
  • "Irrelevant" - Things I never pay attention to when judging package quality
  • "Small negative" - Things that turn me off
  • "Large negative" - Huge red flags

Large positive

  • High quality documentation

    Excellent documentation is one of the strongest signals of maintainer commitment to a package. Extensive documentation is very expensive to keep up to date with a package because documentation is difficult to machine check in general.

    You can definitely have the build check that type signatures or doctests are correct, but anything more elaborate usually requires human intervention to keep up to date.

    Disclaimer: I'm biased here because I'm a huge proponent of documenting things well.

  • Impossible to use incorrectly

    One of the nice features of Haskell is that you can use the type system to make impossible states unrepresentable or enforce invariants to guarantee proper use. I strongly prefer libraries that take advantage of this to rule out errors at compile time so that I can sleep soundly at night knowing that my work built on top of a solid foundation.

  • On Stackage

    All packages on Stackage are guaranteed to build and have a known good build plan (called a Stackage "resolver") that both stack and cabal can reuse. All packages on Stackage "play nice" with each other, which greatly reduces the total cost of ownership for depending on these Haskell packages.

    Being on Stackage is also a good signal that the package is maintained by somebody because packages require some upkeep to ensure that they continue to build against the latest version of other packages.

  • Competitive Benchmarks

    I strongly prefer packages that post benchmarks comparing their package to their competitors. Such a comparison implies that the author went to the trouble of understanding alternative packages well enough to use them idiomatically. That perspective often improves the quality and design of their own package.

    Also, competitive benchmarks are rare and require significant upkeep, so they double as a strong signal of maintainer commitment.

  • 100+ reverse dependencies

    You can check the packages that depend on a given package (a.k.a. "reverse dependencies") using the following URL:

    https://packdeps.haskellers.com/reverse/${package-name}

    Packages that have 100 or more reverse dependencies have substantial buy-in from the Haskell ecosystem and are safe to depend on. You're also likely to depend on these packages anyway through your transitive dependencies.

    (Suggested by: Tim Humphries)

Small positive

  • One complete example

    This means that the package has one "end-to-end" example showing how all the pieces fit together. I prefer packages that do this because they don't force me to waste a large amount of time reinventing what the author could have communicated in a much shorter amount of time.

    You would be surprised how many Haskell packages have extensive Haddock coverage but still lack that one overarching example showing how to use the package.

  • 100+ downloads in last 30 days

    Download statistics are not a very strong signal of activity on Hackage since most Haskell build tools (i.e. cabal, stack, Nix) will cache dependencies. These tools also don't require frequent cache invalidations like some other build tools which shall not be named.

    However, having 100+ downloads in the last 30 days is a reasonable signal that enough people were recently able to get the package working. That means that if anything goes wrong then I assume that there must be some solution or workaround.

  • Maintained/used/endorsed by a company

    I think this is a pretty obvious signal. Similar to download metrics, this implies that somebody was able to extract something commercially valuable from this package.

  • Pure or mostly pure API

    Packages that provide pure APIs are easier to test and easier to integrate into a code base. This reduces the total cost of ownership for that dependency.

  • Upper bounds on dependencies that are up to date

    I'm not going to take a strong stance on the "great PVP war" in this post. All I will note is that if a package has upper bounds that are up to date then that is a good sign that the package is actively maintained (whether or not you agree that upper bounds are a good idea).

    In other words, upper bounds are one way that a package author can "virtue signal" that they have the time and resources to maintain a package.

Irrelevant

  • On Hackage

    Many people new to the Haskell ecosystem assume that there is some sort of quality bar for being on Hackage just because Hackage is not GitHub. There is no such quality bar; anybody can obtain a Hackage account and upload their packages (even joke packages).

  • Stability field

    The stability field of packages fell out of fashion years ago. Also, in my experience people are poor judges of whether or not their own packages will be stable. I prefer to look at the frequency of releases and how often they are breaking changes as empirical evidence of a package's stability.

  • Version number

    First off, make sure you understand PVP. The key bit is that the first two components of a version number represent the major version of a package. That means that if the package goes from version 1.0 to 1.1 then that signals just as much of a breaking change to the API as if the package went from version 1.0 to 2.0. The usual rule of thumb for interpreting version numbers is:

    X.Y.Z.A
    ↑ ↑ ↑ ↑
    | | | |
    | | | Non-breaking change that is invisible (i.e. documentation or refactor)
    | | | Note: Not all packages use a fourth number in their versions
    | | |
    | | Non-breaking change
    | |
    | Breaking change without fanfare
    |
    Breaking change with great fanfare

    The absolute version number basically does not matter. There are many widely used and mature packages on Hackage that are still version 0.*. This can be jarring coming from another language ecosystem where people expect a mature package to be at least version 1.0.

  • Votes

    Hackage 2 supports voting on packages but I've never consulted a package's votes when deciding whether or not to adopt. I prefer quality signals that cannot be easily faked or gamed.

  • Tests

    I've never checked whether or not I'm using a well-tested package. Maybe that's not a good idea, but I'm being honest about how I evaluate packages.

    This probably comes as a surprise to a person coming to Haskell from another language ecosystem, especially a dynamically typed language that relies heavily on tests to ensure quality. Haskell programmers lean much more on the type system to enforce and audit quality. For example, type signatures show me at a glance if a package overuses side effects or deals with too many states or corner cases that I need to account for.

Small negative

  • Upper bounds that are not up to date

    The other nice thing about adding upper bounds to a package is that they also accurately signal when a package is no longer maintained. They require constant "gardening" to keep up to date so when an author abandons a package they naturally transform from a positive signal into a negative signal simply by virtue of inaction.

  • Large dependency footprint

    This used to be a much bigger issue before the advent of Stackage. People would aggressively trim their dependency trees out of fear that one bad dependency would cause them to waste hours finding a valid build plan.

    I still spend some time these days trying to minimize my dependency footprint, mainly so that I don't need to respond as often to breaking changes in my immediate dependencies. However, this is a relatively minor concern in comparison to how bad things used to be.

  • Uses linked lists inappropriately

    The #1 source of Haskell performance degradation is keeping too many objects on the heap, and linked lists trash your heap if you materialize the entire list into memory (as opposed to streaming over the list so that you only materialize at most one element at a time).

    I try to avoid dependencies that use linked lists unless they are the optimal data structure because I will be paying the price for those dependencies every time my program runs a garbage collection.

  • Frequent breaking API changes

    Breaking API changes are not really a big deal in Haskell thanks to the strong type system. They are mainly a minor maintenance burden: if you depend on a library that releases breaking changes all the time you usually only need to spend a few minutes updating your package to build against the latest API.

    This matters more if you maintain a large number of packages or if you have a huge number of dependencies, but if you don't then this is a relatively small concern.

  • Idiosyncratic behavior

    "Idiosyncratic" means that the package author does something weird unique to them. Some examples include using a private utility library or custom Prelude that nobody else uses or naming all of their classes C and all their types T (you know who you are).

    When I see this I assume that the package author has no interest in building consensus or being a "well-behaved" member of the Haskell ecosystem. That sort of package is less likely to respond or adapt to user feedback.

Large negative

  • Uses String inappropriately

    This is a special case of "Uses linked lists inappropriately" since the default String type in Haskell is stored as a linked list of characters.

    Strings tend to be rather large linked lists of characters that are commonly fully materialized. This implies that even modest use of them will add lots of objects to the heap and tax the garbage collector (see the sketch at the end of this section).

  • No documentation

    If I see no documentation on a package I "nope" right out of there. I assume that if a maintainer does not have time to document the package then they do not have time to address user feedback.

  • Not on Hackage

    I assume that if a package is not on Hackage that the maintainer has no interest in supporting or maintaining the package. The bar for adding a package to Hackage is so low that the absence of a package on Hackage is a red flag.

    Note that this does not imply that packages on Hackage will be supported. There are packages on Hackage where the author does not intend to support the package.

    Note: a reader pointed out that this is confusing given that I treat "on Hackage" as "Irrelevant". Perhaps another way to phrase this is that being on Hackage is a necessary but not sufficient signal of maintainer support.

  • Maintainer ignoring pull requests

    This is obviously the clearest sign that a maintainer does not respond to user feedback. There are few things that upset me more than a maintainer who completely ignores volunteers submitting pull requests to fix bugs.
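
To illustrate the earlier point about String: here is a minimal sketch of the kind of API I prefer, using Text instead of String (the greet function is made up for illustration):

{-# LANGUAGE OverloadedStrings #-}

import Data.Monoid ((<>))
import Data.Text (Text)

import qualified Data.Text    as Text
import qualified Data.Text.IO as Text.IO

-- `Text` packs characters into a contiguous buffer, whereas a `String`
-- allocates one cons cell and one boxed `Char` per character, so `Text`
-- adds far fewer objects to the heap
greet :: Text -> Text
greet name = "Hello, " <> Text.toUpper name

main :: IO ()
main = Text.IO.putStrLn (greet "world")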

Conclusion

Hopefully that gives you some quick and dirty heuristics you can use when evaluating Haskell packages for the first time. If you think that I missed something important just let me know and I'll update this post.

Monday, February 5, 2018

The wizard monoid

GHC 8.0 and later provide a Monoid instance for IO, and this post gives a motivating example of why this instance is useful by building combinable "wizards".

Wizards

I'll define a "wizard" as a program that prompts a user "up front" for multiple inputs and then performs several actions after all input has been collected.

Here is an example of a simple wizard:

main :: IO ()
main = do
    -- First, we request all inputs:
    putStrLn "What is your name?"
    name <- getLine

    putStrLn "What is your age?"
    age <- getLine

    -- Then, we perform all actions:
    putStrLn ("Your name is: " ++ name)
    putStrLn ("Your age is: " ++ age)

... which produces the following interaction:

What is your name?
Gabriel<Enter>
What is your age?
31<Enter>
Your name is: Gabriel
Your age is: 31

... and here is an example of a slightly more complex wizard:

import qualified System.Directory

main :: IO ()
main = do
    -- First, we request all inputs:
    files <- System.Directory.listDirectory "."
    let askFile file = do
            putStrLn ("Would you like to delete " ++ file ++ "?")
            response <- getLine
            case response of
                "y" -> return [file]
                _   -> return []

    listOfListOfFilesToRemove <- mapM askFile files
    let listOfFilesToRemove = concat listOfListOfFilesToRemove

    -- Then, we perform all actions:
    let removeFile file = do
            putStrLn ("Removing " ++ file)
            System.Directory.removeFile file
    mapM_ removeFile listOfFilesToRemove

... which produces the following interaction:

Would you like to delete file1.txt?
y<Enter>
Would you like to delete file2.txt?
n<Enter>
Would you like to delete file3.txt?
y<Enter>
Removing file1.txt
Removing file3.txt

In each example, we want to avoid performing any irreversible action before the user has completed entering all requested input.

Modularity

Let's revisit our first example:

main :: IO ()
main = do
    -- First, we request all inputs:
    putStrLn "What is your name?"
    name <- getLine

    putStrLn "What is your age?"
    age <- getLine

    -- Then, we perform all actions:
    putStrLn ("Your name is: " ++ name)
    putStrLn ("Your age is: " ++ age)

This example is really combining two separate wizards:

  • The first wizard requests and displays the user's name
  • The second wizard requests and displays the user's age

However, we had to interleave the logic for these two wizards because we needed to request all inputs before performing any action.

What if there were a way to define these two wizards separately and then combine them into a larger wizard? We can do so by taking advantage of the Monoid instance for IO, like this:

import Data.Monoid ((<>))

name :: IO (IO ())
name = do
    putStrLn "What is your name?"
    x <- getLine
    return (putStrLn ("Your name is: " ++ x))

age :: IO (IO ())
age = do
    putStrLn "What is your age?"
    x <- getLine
    return (putStrLn ("Your age is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (name <> age)

This program produces the exact same behavior as before, but now all the logic for dealing with the user's name is totally separate from the logic for dealing with the user's age.

The way this works is that we split each wizard into two parts:

  • the "request" (i.e. prompting the user for input)
  • the "response" (i.e. performing an action based on that input)

... and we do so at the type-level by giving each wizard the type IO (IO ()):

name :: IO (IO ())

age :: IO (IO ())

The outer IO action is the "request". When the request is done the outer IO action returns an inner IO action which is the "response". For example:

--      ↓ The request
name :: IO (IO ())
--          ↑ The response
name = do
    putStrLn "What is your name?"
    x <- getLine
    -- ↑ Everything above is part of the outer `IO` action (i.e. the "request")

    --      ↓ This return value is the inner `IO` action (i.e. the "response")
    return (putStrLn ("Your name is: " ++ x))

We combine wizards using the (<>) operator, which has the following behavior when specialized to IO actions:

ioLeft <> ioRight

= do resultLeft  <- ioLeft
     resultRight <- ioRight
     return (resultLeft <> resultRight)

In other words, if you combine two IO actions you just run each IO action and then combine their results. This in turn implies that if we nest two IO actions then we repeat this process twice:

requestLeft <> requestRight

= do respondLeft  <- requestLeft
     respondRight <- requestRight
     return (respondLeft <> respondRight)

= do respondLeft  <- requestLeft
     respondRight <- requestRight
     return (do
         unitLeft  <- respondLeft
         unitRight <- respondRight
         return (unitLeft <> unitRight) )

-- Both `unitLeft` and `unitRight` are `()` and `() <> () = ()`, so we can
-- simplify this further to:
= do respondLeft  <- requestLeft
     respondRight <- requestRight
     return (do
         respondLeft
         respondRight )

In other words, when we combine two wizards we combine their requests and then combine their responses.

This works for more than two wizards. For example:

request0 <> request1 <> request2 <> request3

= do respond0 <- request0
     respond1 <- request1
     respond2 <- request2
     respond3 <- request3
     return (do
         respond0
         respond1
         respond2
         respond3 )

To show this in action, let's revisit our original example once again:

import Data.Monoid ((<>))

name :: IO (IO ())
name = do
    putStrLn "What is your name?"
    x <- getLine
    return (putStrLn ("Your name is: " ++ x))

age :: IO (IO ())
age = do
    putStrLn "What is your age?"
    x <- getLine
    return (putStrLn ("Your age is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (name <> age)

... and this time note that name and age are awfully similar, so we can factor them out into a shared function:

import Data.Monoid ((<>))

prompt :: String -> IO (IO ())
prompt attribute = do
    putStrLn ("What is your " ++ attribute ++ "?")
    x <- getLine
    return (putStrLn ("Your " ++ attribute ++ " is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (prompt "name" <> prompt "age")

We were not able to factor out this shared logic back when the logic for the two wizards was manually interleaved. Once we split them into separate logical wizards we can begin to exploit shared structure to compress our program.

This program compression lets us easily add new wizards:

import Data.Monoid ((<>))

prompt :: String -> IO (IO ())
prompt attribute = do
    putStrLn ("What is your " ++ attribute ++ "?")
    x <- getLine
    return (putStrLn ("Your " ++ attribute ++ " is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (prompt "name" <> prompt "age" <> prompt "favorite color")

... and take advantage of standard library functions that work on Monoids, like foldMap so that we can mass-produce wizards:

import Data.Monoid ((<>))

prompt :: String -> IO (IO ())
prompt attribute = do
    putStrLn ("What is your " ++ attribute ++ "?")
    x <- getLine
    return (putStrLn ("Your " ++ attribute ++ " is: " ++ x))

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

main :: IO ()
main = runWizard (foldMap prompt [ "name", "age", "favorite color", "sign" ])

More importantly, we can now easily see at a glance what our program does and ease of reading is a greater virtue than ease of writing.

Final example

Now let's revisit the file removal example through the same lens:

import qualified System.Directory

main :: IO ()
main = do
    -- First, we request all inputs:
    files <- System.Directory.listDirectory "."
    let askFile file = do
            putStrLn ("Would you like to delete " ++ file ++ "?")
            response <- getLine
            case response of
                "y" -> return [file]
                _   -> return []

    listOfListOfFilesToRemove <- mapM askFile files
    let listOfFilesToRemove = concat listOfListOfFilesToRemove

    -- Then, we perform all actions:
    let removeFile file = do
            putStrLn ("Removing " ++ file)
            System.Directory.removeFile file
    mapM_ removeFile listOfFilesToRemove

We can simplify this using the same pattern:

import qualified System.Directory

main :: IO ()
main = do
    files <- System.Directory.listDirectory "."
    runWizard (foldMap prompt files)

prompt :: FilePath -> IO (IO ())
prompt file = do
    putStrLn ("Would you like to delete " ++ file ++ "?")
    response <- getLine
    case response of
        "y" -> return (do
            putStrLn ("Removing " ++ file)
            System.Directory.removeFile file )
        _   -> return (return ())

runWizard :: IO (IO a) -> IO a
runWizard request = do
    respond <- request
    respond

All we have to do is define a wizard for processing a single file, mass-produce the wizard using foldMap and the Monoid instance for IO takes care of bundling all the requests up front and threading the selected files to be removed afterwards.

Conclusion

This pattern does not subsume all possible wizards that users might want to write. For example, if the wizards depend on one another then this pattern breaks down pretty quickly. However, hopefully this provides an example of how you can chain the Monoid instance for IO with other Monoid instances (even itself!) to generate emergent behavior.

Sunday, January 28, 2018

Dhall Survey Results (2017-2018)

I advertised a survey in my prior post to collect feedback on the Dhall configuration language. You can find the raw results here:

18 people completed the survey at the time of this writing and this post discusses their feedback.

This post assumes that you are familiar with the Dhall configuration language. If you are not familiar, then you might want to check out the project website to learn more:

"Which option describes how often you use Dhall"

  • "Never used it" - 7 (38.9%)
  • "Briefly tried it out" - 7 (38.9%)
  • "Use it for my personal projects" - 1 (5.6%)
  • "Use it at work" - 3 (16.7%)

The survey size was small and the selection process is biased towards people who follow me on Twitter or read my blog and are interested enough in Dhall to answer the survey. So I don't read too much into the percentages but I am interested in the absolute count of people who benefit from using Dhall (i.e. the last two categories).

That said, I was pretty happy that four people who took the survey benefit from the use of Dhall. These users provided great insight into how Dhall currently provides value in the wild.

I was also surprised by how many people took the survey that had never used Dhall. These people still gave valuable feedback on their first impressions of the language and use cases that still need to be addressed.

"Would anything encourage you to use Dhall more often?"

This was a freeform question designed to guide future improvements and to assess if there were technical issues holding people back from adopting the language.

Since there were only 18 responses, I can directly respond to most of them:


Mainly, a formal spec (also an ATS parsing implementation)

Backends for more languages would be great.

More compilation outputs, e.g. Java

The main thing is more time on my part (adoption as JSON-preprocessor and via Haskell high on my TODO), but now that you ask, a Python binding

Stable spec, ...

Julia bindings

Scala integration, ...

I totally agree with this feedback! Standardizing the language and providing more backends are the highest priorities for this year.

No language was mentioned more than once, but I will most likely target Java and Scala first for new backends.


Some way to find functions would be immensely helpful. The way to currently find functions is by word of mouth.

This is a great idea, so I created an issue on the issue tracker to record this suggestion:

I probably won't have time myself to implement this (at least over the next year), but if somebody is looking for a way to contribute to the Dhall ecosystem this would be an excellent way to do so.


I would use Dhall more often if it had better type synonym support.

...; ability to describe the whole config in one file

Fortunately, I recently standardized and implemented type synonyms in the latest release of the language. For example, this is legal now:

    let Age = Natural

in  let Name = Text

in  let Person = { age : Age, name : Name }

in  let John : Person = { age = +23, name = "John" }

in  let Mary : Person = { age = +30, name = "Mary" }

in  [ John, Mary ]

I assume the feedback on "ability to describe the whole config in one file" is referring to a common work-around of importing repeated types from another file. The new type synonym support means you should now be able to describe the whole config in one file without any issues.


Verbosity for writing values in sum types was a no-go for our use case

...; more concise sum types.

The most recent release added support for the constructors keyword to simplify sum types. Here is an example of this new keyword in action:

    let Project =
          constructors
          < GitHub  : { repository : Text, revision : Text }
          | Hackage : { package : Text, version : Text }
          | Local   : { relativePath : Text }
          >

in  [ Project.GitHub
      { repository = "https://github.com/Gabriel439/Haskell-Turtle-Library.git"
      , revision   = "ae5edf227b515b34c1cb6c89d9c58ea0eece12d5"
      }
    , Project.Local { relativePath = "~/proj/optparse-applicative" }
    , Project.Local { relativePath = "~/proj/discrimination" }
    , Project.Hackage { package = "lens", version = "4.15.4" }
    , Project.GitHub
      { repository = "https://github.com/haskell/text.git"
      , revision   = "ccbfabedea1cf5b38ff19f37549feaf01225e537"
      }
    , Project.Local { relativePath = "~/proj/servant-swagger" }
    , Project.Hackage { package = "aeson", version = "1.2.3.0" }
    ]

Missing a linter usable in editors for in-buffer error highlighting.

..., editor support, ...

Dhall does provide a Haskell API including support for structured error messages. These store source spans which can be used for highlighting errors, so this should be straightforward to implement. These source spans power Dhall's error messages, such as:

$ dhall <<< 'let x = 1 in Natural/even x'

Use "dhall --explain" for detailed errors

Error: Wrong type of function argument

Natural/even x

(stdin):1:14

... the source span is how Dhall knows to highlight the Natural/even x subexpression at the bottom of the error message and that information can be programmatically retrieved from the Haskell API.


Sometimes have to work around record or Turing completeness limitations to generate desired json or yaml: often need record keys that are referenced from elsewhere as strings.

I will probably never allow Dhall to be Turing complete. Similarly, I will probably never permit treating text as anything other than an opaque value.

Usually the solution here is to upstream the fix (i.e. don't rely on weakly typed inputs). For example, instead of retrieving the field's name as text, retrieve a Dhall function to access the desired field, since functions are first-class values in Dhall.
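
For example, here is a small sketch of the idea in Dhall (the record and field names are hypothetical):

    let config = { port = +8080, host = "localhost" }

in  let getPort = λ(r : { port : Natural, host : Text }) → r.port

in  getPort config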

If you have questions about how to integrate Dhall into your workflow, the current best place to ask is the issue tracker for the language. I'm not sure if this is the best place to host such questions and I'm open to suggestions for other forums for support.


Would like to be able to compute the sha1 or similar of a file or expression.

The Dhall package does provide a dhall-hash utility exactly for this purpose, which computes the semantic hash of any file or expression. For example:

$ dhall-hash <<< 'λ(x : Integer) → [1, x, 3]'
sha256:2ea7142f6bd0a1e70a5843174379a72e4066835fc035a5983a263681c728c5ae

Better syntax for optional values. It is abysymmal now. But perhaps I'm using dhall wrong. My colleague suggests merging into default values.

Dropping the lets

simpler syntax; ...

There were several complaints about the syntax. The best place to discuss and/or propose syntax improvements is the issue tracker for the language standard.

For example:


Better error messages; ...

The issue tracker for the Haskell implementation is the appropriate place to request improvements to error messages. Here are some recent examples of reported issues in error messages:


I worry that contributors/friends who'd like to work on a project with me would be intimidated because they don't know the lambda calculus

Dhall will always support anonymous functions, so I doubt the language will be able to serve those who shy away from lambda calculus. This is probably where something like JSON would be a more appropriate choice.


..., type inference

..., full type inference, row polymorphism for records

This is probably referring to Haskell/PureScript/Elm-style inference where the types of anonymous functions can be inferred (i.e. using unification). I won't rule out supporting type inference in the future but I doubt it will happen this year. However, I welcome any formal proposal to add type inference to the language now that the type-checking semantics have been standardized.


It's cool, but I'm yet to be convinced it will make my life easier

No problem! Dhall does not have a mission to "take over the world" or displace other configuration file formats.

Dhall's only mission is to serve people who want programmable configuration files without sacrificing safety or ease of maintenance.

"What do you use Dhall for?"

"Why do you use Dhall?"

This was the part of the survey that I look forward to the most: seeing how people benefit from Dhall. These were two freeform questions that I'm combining in one section:


WHAT: HTML templating, providing better manifest files (elm-package.json)

WHY: Dhall as a template language allows us to express all of the inputs to a template in one place. We don't have to hunt-and-peck to find out what a template is parameterized by. And if we want to understand what some variable is in the template, we can just look at the top of the file where its type is declared. We want exact dependencies for our elm code. There is no specific syntax for exact versions in elm-package.json. There is a hack around that limitation which requires enforcement by convention (or CI scripts or some such). We use Dhall to generate exact versions by construction. The JSON that's generated uses the hack around the limitation, but we don't have to think about that accidental complexity.


WHAT: Ad hoc command-line tools


WHAT: Generating docker swarm and other configs from common templates with specific variables inserted safely.

WHY: JSON and yaml suck, want to know about errors at compile time.


WHAT: Kubernetes

WHY: Schema validation. Kubernetes is known for blowing up in your face at runtime by doing evaluation and parsing at the same time


WHAT: Config

WHY: Strong typing & schema


WHY: Rich functinality and concise syntax


WHAT: JSON preprocessor

WHY: JSON is universal but a bit dumb


WHAT: configuration - use haskell integration and spit out json/yaml on some projects (to kubernetes configs for instance)

WHY: I love pure fp and dealing with configs wastes a lot of my time and is often error prone


Interestingly, ops-related applications like Docker Swarm and Kubernetes were mentioned in three separate responses. I've also received feedback (outside the survey) from people interested in using Dhall to generate Terraform and Cloudformation templates, too.

This makes sense because configuration files for operational management tend to be large and repetitive, which benefits from Dhall's support for functions and let expressions to reduce repetition.

Also, reliability matters for operations engineers, so being able to validate the correctness of a configuration file ahead of time using a strong type system can help reduce outages. As Dan Luu notes:

Configuration bugs, not code bugs, are the most common cause I’ve seen of really bad outages. When I looked at publicly available postmortems, searching for “global outage postmortem” returned about 50% outages caused by configuration changes. Publicly available postmortems aren’t a representative sample of all outages, but a random sampling of postmortem databases also reveals that config changes are responsible for a disproportionate fraction of extremely bad outages. As with error handling, I’m often told that it’s obvious that config changes are scary, but it’s not so obvious that most companies test and stage config changes like they do code changes.

Also, type-checking a Dhall configuration file is a faster and more convenient way to validate correctness ahead of time than an integration test.

Functions and types were the same two justifications for my company (Awake Security) using Dhall internally. We had large and repetitive configurations for managing a cluster of appliances that we wanted to simplify and validate ahead of time to reduce failures in the field.

Another thing that stood out was how many users rely on Dhall's JSON integration. I already suspected this but this survey solidified that.

I also learned that one user was using Dhall not just to reduce repetition, but to enforce internal policy (i.e. a version pinning convention for elm-package.json files). This was a use case I hadn't thought of marketing before.

Other feedback

We tried it as an option at work for defining some APIs but describing values in sum types was way too verbose (though I think I understand why it might be needed). We really like the promise of a strongly typed config language but it was pretty much unusable for us without building other tooling to potentially improve ergonomics.

As mentioned above, this is fixed in the latest release with the addition of the constructors keyword.


Last time I checked, I found syntax a bit awkward. Hey, Wadler's law :) Anyway, great job! I'm not using Dhall, but I try to follow its development, because it's different from everything else I've seen.

Syntax concerns strike again! This makes me suspect that this will be a recurring complaint (similar to Haskell and Rust).

I also appreciate that people are following the language and hopefully getting ideas for their own languages and tools. The one thing I'd really like more tools to steal is Dhall's import system.


You rock. Thanks for all your work for the community.

Dhall is awesome!

KOTGW

I love the project and am grateful for all the work that's been done to make it accessible and well documented. Keep up the good work!

Thank you all for the support, and don't forget to thank other people who have contributed to the project by reporting issues and implementing new features.

Tuesday, January 2, 2018

Dhall - Year in review (2017-2018)

The Dhall programmable configuration language is now one year old and this post will review progress over the last year and the future direction of the language in 2018.

If you're not familiar with Dhall, you might want to visit the official GitHub project which is the recommended starting point. This post assumes familiarity with the Dhall language.

Also, I want to use this post to advertise a short survey that you can take if you are interested in the language and would like to provide feedback:

Standardization

Standardization is currently the highest priority for the language since Dhall cannot feasibly be ported to other languages until the standard is complete. In the absence of a standard, parallel implementations in other languages are chasing a moving and poorly-defined target.

Fortunately, standardization made good progress this year. The completed parts are:

The one missing piece is the standard semantics for import resolution. Once that is complete then alternative implementations should have a well-defined and reasonably stable foundation to build upon.

Standardizing the language should help stabilize it and increase the barrier to making changes. The goal is that implementations of Dhall in other languages can review and approve/reject proposed changes to the standard. I've already adopted this approach myself by submitting pull requests to change the standard before changing the Haskell implementation. For example:

This process gives interested parties an official place to propose and review changes to the language.

New integrations

One of the higher priorities for the language is integrations with other languages and configuration file formats in order to promote adoption. The integrations added over the past year were:

Also, an initial Dhall binding to JavaScript was prototyped, but has not yet been polished and published:

  • JavaScript (using GHCJS)
    • Supports everything except the import system

All of these integrations have one thing in common: they all build on top of the Haskell implementation of the Dhall configuration language. This is due to necessity: the Dhall language is still in the process of being standardized so depending on the Haskell API is the only realistic way to stay up to date with the language.

Dhall implementations in other languages are incomplete or abandoned, most likely due to the absence of a stable and well-defined specification to target:

The JSON integration has been the most successful integration by far. In fact, many users have been using the JSON integration as a backdoor to integrating Dhall into their favorite programming language until a language-native Dhall integration is available.

Some users have proposed that Dhall should emphasize the JSON integration and de-emphasize language-native Dhall bindings. In other words, they believe that Dhall should primarily market itself as an alternative to Jsonnet or HCL.

So far I've struck a middle ground, which is to market language-native bindings (currently only Haskell and Nix) when they exist and recommend going through JSON as a fallback. I don't want to lean for too long on the JSON integration because:

  • Going through JSON means certain Dhall features (like sum types and functions) are lost in translation.
  • I believe Dhall's future is healthier if the language integrates with a diverse range of configuration file formats and programming languages instead of relying on the JSON integration.
  • I want to avoid a "founder effect" problem in the Dhall community where JSON-specific concerns dwarf other concerns
  • I would like Dhall to eventually be a widely supported configuration format in its own right instead of just a preprocessor for JSON

New language features

Several new language features were introduced in 2017. All of these features were added based on user feedback:

Don't expect as many new language features in 2018. Some evolution is natural in the first year based on user feedback, but I will start rejecting more feature requests unless they have a very high power-to-weight ratio. My goal for 2018 is to slow down language evolution to give alternative implementations a stable target to aim for.

The one likely exception to this rule in 2018 is the widespread user request for "type synonym" support. However, I probably won't get around to fixing this until after the first complete draft of the language standard.

Tooling

The main tooling features that landed this year were:

Tooling is one of the easier things to build for Dhall since command-line tools can easily depend on the Haskell API, which is maintained, well-documented, and up-to-date with the language.

However, excessive dependence on tooling can sometimes indicate a flaw in the language. For example, the recent constructors keyword began as something that several people were implementing with their own tooling before it was elevated to a language feature.

Haskell-specific improvements

Dhall also provides a Haskell API that you can build upon, which also landed some notable improvements:

The language core is simple enough that some people have begun using Dhall as a starting point for implementing their own programming languages. I personally don't have any plans to market or use Dhall for this purpose but I still attempt to support this use case as long as such requests:

  • don't require changes to the language standard
  • don't significantly complicate the implementation
  • don't deteriorate performance

Documentation

The language's official documentation over the last year has been the Haskell tutorial, but I've been slowly working on porting the tutorial to language-agnostic documentation on a GitHub wiki:

The most notable addition to the wiki is a self-contained JSON-centric tutorial for getting started with Dhall since that's been the most popular integration so far.

The wiki will eventually become the recommended starting point for new users after I flesh the documentation out more.

Robustness improvements

I've also made several "invisible" improvements under the hood:

I've been slowly whittling away at type-checking bugs both based on user reports and also from reasoning about the type-checking semantics as part of the standardization process.

The next step is to use proof assistants to verify the safety of the type-checking semantics but I doubt I will do that in 2018. The current level of assurance is probably good enough for most Dhall users for now. However, anybody interested in doing formal verification can take a stab at it now that the type-checking and normalization semantics have been formalized.

I also plan to turn the test suite into a standard conformance test that other implementations can use to verify that they implemented the standard correctly.

Infrastructure

This section covers all the miscellaneous improvements to supporting infrastructure:

Acknowledgments

I'd also like to thank everybody who helped out over the past year, including:

... and also everybody who filed issues, reported bugs, requested features, and submitted pull requests! These all help improve the language and no contribution is too small.

Conclusion

In 2017 the focus was on responding to user feedback as people started to use Dhall "in anger". In 2018 the initial plan is to focus on standardization, language stabilization, and creating at least one complete alternative implementation native to another programming language.

Also, don't forget to take the language survey if you have time. I will review the feedback from the survey in a separate follow-up post and update the plan for 2018 accordingly.