Here at Altometrics, we often use Amazon Web Services for deploying web applications. Out of the AWS suite of tools, we are very fond of Amazon CloudFormation. CloudFormation provides a way to specify your application’s entire infrastructure in a single, declarative template. The associated tools handle creating and connecting all of the necessary resources (auto scaling groups, load balancers, etc.) at once, as well as updating your stack when the template changes.
But we have one grievance with CloudFormation: the configuration file must be written in JSON.
What's the problem with JSON? I thought JSON was great?
In the last decade, JSON has largely replaced XML as the de facto method of transmitting data between applications. The most common use case is sending messages from a browser to a web server or vice versa. This works well in most cases, because the browser has built-in support for JSON and open source JSON parsing libraries exist for every language commonly used on the server.
As we progress through this “Age of JSON”, more development tools are using JSON for a new purpose: configuration files. For example, NPM, Bower, Amazon CloudFormation, and Packer by HashiCorp all require configuration files written in JSON. But when we are required to write and maintain large JSON config files, we soon realize that JSON is not an optimal format for humans to author.
Tyrannical Commas

JSON happens to be a tyrant about commas. This is a valid map in JSON:
```json
{
  "max": 100,
  "min": 1
}
```
But if the comma separating the two entries is missing, the result is an unparseable syntax error. Invalid JSON:
```json
{
  "max": 100
  "min": 1
}
```
Even more distressingly, trailing commas are syntax errors as well. Invalid JSON:
```json
{
  "max": 100,
  "min": 1,
}
```
A common scenario in configuration files is reordering lines for human readability. If you wanted the `"min"` key before the `"max"` key, you might do a simple cut and paste in your text editor to move the `"min"` line up. But if you do that, the result is two separate syntax errors: a missing comma and an extraneous one. Invalid JSON:
```json
{
  "min": 1
  "max": 100,
}
```
Multi-line Strings

If you want to store a long string with line breaks in JSON, you must use `\n` as a line separator. As an example:
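A reconstructed illustration of that pattern (the key name and text here are illustrative):

```json
{
  "description": "Line one of the message.\nLine two of the message."
}
```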
However, you can’t wrap the string to the next line for readability. This is Invalid JSON:
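A reconstructed illustration (again with an illustrative key and text); the literal line break inside the string is what makes it invalid:

```json
{
  "description": "This string tries to wrap
                  onto a second line"
}
```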
So long strings have to stay on the same line, becoming longer and more unwieldy to edit.
Lack of Comments
Per the JSON spec, comments are explicitly forbidden in JSON. JSON’s creator, Douglas Crockford, has explained why he removed comments. And while it’s true that comments are superfluous when JSON is used purely to communicate between machines, they serve an extremely important role in human-readable configuration files: comments clarify intent and help prevent misunderstandings by future readers. In that post, Crockford does offer a workaround: write your JSON with comments and pass it through JSMin, which strips out the comments, before handing it to the JSON parser. That will certainly work, but now that we have introduced a build step into our JSON configuration workflow, let’s think a bit bigger.
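To make the workaround concrete, here is a minimal sketch of the comment-stripping idea. This is not JSMin itself, just a naive regex that removes full-line `//` comments before parsing (the config keys and values are illustrative):

```python
import json
import re

commented = """
{
    // smallest value we will accept (illustrative comment)
    "min": 1,
    "max": 100
}
"""

# Strip full-line // comments, then parse the remainder as ordinary JSON.
stripped = re.sub(r"^\s*//.*$", "", commented, flags=re.MULTILINE)
config = json.loads(stripped)
print(config)  # {'min': 1, 'max': 100}
```

A real build step would use JSMin (or a similar minifier), which also handles inline and block comments; the point is simply that the stripping happens before the JSON parser ever sees the file.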
EDN: An Alternative
EDN is a data notation similar to JSON. It was developed by Rich Hickey when he created Clojure; in fact, Clojure programs are written entirely as nested EDN forms. EDN has answers for all of the JSON problems mentioned above.
EDN solves the comma problem by not requiring commas at all: commas are treated as whitespace in EDN. A map written with commas and the same map written without them are read identically, so elements can be moved around with impunity.
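A reconstructed pair of examples (the keys mirror the JSON map above; the values are illustrative). Both of these forms parse to the same map:

```clojure
{:max 100, :min 1}

{:max 100 :min 1}
```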
EDN strings can span multiple lines
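A reconstructed sketch (illustrative key and text); the literal newline inside the string is perfectly legal EDN, no `\n` escapes required:

```clojure
{:description "This string continues
               onto a second line"}
```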
Comments are delimited by semicolons
```clojure
{;; smallest allowed value (illustrative)
 :min 1
 ;; largest allowed value (illustrative)
 :max 100}
```
Fine, But CloudFormation Still Requires JSON!
That’s true! No matter how many words I put into a blog post recommending EDN over JSON, Amazon is not going to change how CloudFormation templates are written. So in order to use EDN and CloudFormation together, we wrote a command line tool that translates an EDN file into a JSON file. You can install it simply with `npm install edn-to-json` and then use it with `edn-to-json sample.edn > sample.json`. For our CloudFormation use case, we have a shell script that first runs `edn-to-json prod-template.edn > prod-template.json` and then calls the `aws` CLI tool with the newly created `prod-template.json` as the input parameter. We added `prod-template.json` to our `.gitignore` file so we don’t have to keep both the EDN and JSON versions of the template up to date in version control, and we can treat the JSON file as a build artifact. We are very pleased with this workflow and are planning to use it anywhere JSON configuration files are required in future work.
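As a sketch, the wrapper script looks something like this (the stack name and the choice of `aws cloudformation deploy` as the subcommand are illustrative; adapt them to your own deployment):

```shell
#!/bin/sh
set -e

# Regenerate the JSON build artifact from the EDN source.
edn-to-json prod-template.edn > prod-template.json

# Hand the generated template to the AWS CLI.
aws cloudformation deploy \
  --template-file prod-template.json \
  --stack-name prod-stack
```

Because `prod-template.json` is regenerated on every run, only the EDN file ever needs to be edited or reviewed.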