My First Elm App

I recently wrote my first application in Elm, which is promoted as a “delightful language for reliable webapps”.

The application is an accessible color palette builder, which builds on the excellent design of the 18F Visual Identity Guide to provide visual designers with real-time feedback on the accessibility of their palettes:

Screenshot of the accessible color palette builder application

Elm is elegantly simple

What strikes me most about Elm is how easy it is to learn, and how quickly one can start using it to build useful things. This can be done through Elm’s official guide, though I personally used Elm: The Pragmatic Way by The Pragmatic Studio, which I highly recommend.

When I first learned Elm in March of 2016, I actually wouldn’t have been so quick to recommend it: at that time, building anything useful required learning about something called signals, which was a significant conceptual hurdle; but after signals were dropped in Elm 0.17, things got a lot simpler. This illustrates one of my favorite things about the design of Elm: its creator and community are constantly trying to make it easier to learn without losing any of its power.

Even if one doesn’t end up using Elm in their production code, I think Elm might actually make it easier to learn concepts that can still be used in JavaScript. For any JavaScript programmers who might currently be overwhelmed by the sheer cognitive load introduced by React, Redux, TypeScript, and concepts like immutability and functional programming, I highly recommend dabbling in Elm. It’s a lot simpler than the analogous set of JS tooling, and it’s often just as (if not more) powerful. And even if you decide to ultimately go with the JS tooling instead, it should be easier because you’ll already have learned the fundamental concepts in Elm.

All that said, though, writing my first app in Elm wasn’t without its share of difficulties.

Interacting with DOM-based JavaScript

Easily the biggest frustration of using Elm was that, while it does include robust support for interoperating with legacy JavaScript code, its virtual DOM implementation currently has no analogue to React’s component lifecycle. This feature of React has been critical for me when it comes to integrating third-party JS widgets with my code, as it essentially serves as an “escape hatch” from the virtual DOM. The lack of such a feature in Elm made it difficult to integrate jscolor into my app.

A nice standards-driven alternative to component lifecycle methods might have been the use of HTML5 Custom Elements; however, Elm’s virtual DOM has no support for this either–though I suspect it wouldn’t be hard to add, and I’d be interested in contributing support for it, if it’s something the community agrees with.

No runtime exceptions

One of the most promoted advantages of Elm is that it actually has no runtime exceptions. That is, unless your code explicitly calls out to JavaScript for some of its functionality, it is impossible for a line of code to cause your program to “crash” and become unusable. This is something that even TypeScript has a hard time guaranteeing (though it does still reduce runtime exceptions significantly).

However, this isn’t to say that one’s code will be free of bugs: accidentally using a < instead of a > when comparing two numbers will still result in unintentional behavior that no static analyzer could detect. And due to Elm’s nature as a functional programming language, I had some trouble figuring out how to debug my program in the few instances when it went awry. I had no access to a conventional debugger, and threading logging statements through my code sometimes felt like an engineering feat in itself, though that could easily be due to my unfamiliarity with debugging functional code. Not being able to write statements takes some getting used to!

Outside of that, though, not having to worry about–and constantly guard against–every line of my code potentially throwing an exception was still an enormous weight off my back. And Elm’s ridiculously friendly error messages made conversing with the compiler delightful.

Some skepticism

While Elm was enjoyable to learn and use, I have to admit that I’m not fully sold on the functional notion of “immutability at any cost”. While it’s a great idea in theory, in practice I’m not certain I’ve run into a lot of situations where mutability was a significant source of bugs–as long as I was disciplined about keeping things immutable to a reasonable degree.

I’m also not sure whether the limitations imposed by being forced into immutability at all times outweigh the advantages of, say, a language that encourages immutability but allows mutation when needed. An example of the latter is Rust, where data is immutable by default but can be made mutable through the use of a mut keyword. Rust shares Elm’s focus on rock-solid reliability, but achieves it without restricting the programmer to a purely functional paradigm.

In any case, I’m looking forward to working more with Elm and seeing the language continue to evolve.

Discovering Accessibility

My final project working at the Mozilla Foundation was, which was the first content-based website I’ve helped create in quite some time. During the site’s development, I finally gave myself the time to learn about a practice I’d been putting off for an embarrassingly long time: accessibility.

One of the problems I’ve had with a lot of guides on accessibility is that they focus on standards instead of people. As a design-driven engineer, I find standards necessary but not sufficient to create compelling user experiences. What I really wanted to know about was not the ARIA markup to use for my code, but how to empathize with the way “extreme users”–people with disabilities–use the Web.

I finally found a book with such a holistic approach to accessibility called A Web For Everyone by Sarah Horton and Whitney Quesenbery. I’m still not done reading it, but I highly recommend it.

Stage 1: Accessibility Is Awesome!

The first thing I did in an attempt to empathize with users of screen readers was to learn to use one myself. The first screen reader I learned was NVDA, an open-source screen reader for Windows. Learning how to use it reminded me a bit of learning vi and emacs for the first time: for example, because I couldn’t visually scan through a page to see its headings, I had to learn special keyboard commands to advance to the next and previous heading.

Obviously, however, I am a very particular kind of user when I use a screen reader: because I don’t rely on auditory information the way a blind person does, I can’t follow a screen reader’s narration at high speed. And because I’m a highly technical user, remembering lots of keyboard shortcuts comes easily to me. So it was useful to compare my own use of screen readers against Ginny Redish’s paper on Observing Users Who Work With Screen Readers (PDF).

After learning the basics of NVDA, I found Terrill Thompson’s blog post on Good Examples of Accessible Web Sites and tried visiting some of them with my shiny new screen reader. Doing this gave me lots of inspiration on how to make my own sites more accessible.

The web service was also quite helpful in educating me on best practices my existing websites lacked, and The Paciello Group’s Web Components Punch List was helpful when I needed to create or evaluate custom UI widgets.

All of this has constituted what I’ve begun to call my “honeymoon” with accessibility. It was quite satisfying to empathize with the needs of extreme users, and I was excited about creating sites that were delightful to use with NVDA.

Stage 2: Accessibility Is Hard!

What ended up being much harder, though, was actually building a delightful experience for users who might be using any screen reader.

The second screen reader I learned how to use was Apple’s excellent VoiceOver, which comes built into all OS X and iOS devices. And much like the early days of the Web, when a site that was delightful on one browser was completely unusable in another, I found that my hard work to improve the site’s usability on NVDA often made it less usable on VoiceOver. For example, as Steve Faulkner has documented, the behavior of the ARIA role="alert" varies immensely across different browser and screen reader combinations, which led to some frustrating trade-offs on the Teach site.
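
For reference, the troublesome pattern is a live region along these lines (a minimal sketch; the message text is just an example):

```html
<!-- role="alert" asks screen readers to announce changes to this
     region immediately; how (and whether) they actually do so varies
     widely across browser and screen reader combinations. -->
<div role="alert">
  Your password must be at least 8 characters long.
</div>
```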

One potential short-term solution to this might be for sites to have slightly different code depending on the particular browser/screen-reader combination being used. Aside from being a bad idea for a number of reasons, though, it’s also technically impossible–the current screen reader isn’t reflected in navigator.userAgent or anything else.

So, that’s the current situation I find myself in with respect to accessibility: creating accessible static content is easy and helps extreme users, but creating accessible rich internet applications is quite difficult because screen readers implement the standards so differently. I’m eagerly hoping that this situation improves over the coming years.

Clarifying Coding

With the upcoming Hour of Code, there’s been a lot of confusion about what “coding” actually is and why it’s useful, so I thought I’d contribute my thoughts.

Rather than talking about “coding”, I prefer to think of “communicating with computers”. Coding, depending on its definition, is one of many ways that a human can communicate with a computer; but I feel that the word “communicating” is more powerful than “coding” because it gets to the heart of why we use computers in the first place.

We communicate with computers for many different reasons: to express ourselves, to create solutions to problems, to reuse solutions that others have created. At a minimum, this requires basic explorational literacy: knowing how to use a mouse and keyboard, using them to navigate an operating system and the Web, and so forth. Nouns in this language of interaction include terms like application, browser tab and URL; verbs include click, search, and paste.

These sorts of activities aren’t purely consumptive: we express ourselves every time we write a Facebook post, use a word processor, or take a photo and upload it to Instagram. Just because someone’s literacies are limited to this baseline doesn’t mean they can’t do incredibly creative things with them.

And yet communicating with computers at this level may still prevent us from doing what we want. Many of our nouns, like application, are difficult to create or modify using the baseline literacies alone. Sometimes we need to learn the more advanced skills that were used to create the kinds of things that we want to build or modify.

This is usually how coders learn how to code: they see the digital world around them and ask, “how was that made?” Repeatedly asking this question of everything one sees eventually leads to something one might call “coding”.

This is, however, a situation where the journey may be more important than the destination: taking something you really care about and asking how it’s made–or conversely, taking something imaginary you’d like to build and asking how it might be built–is both more useful and edifying than learning “coding” in the abstract. Indeed, learning “coding” without a context could easily make it the next Algebra II, which is a terrifying prospect.

So, my recommendation: don’t embark on a journey to “learn to code”. Just ask “how was that made?” of things that interest you, and ask “how might one build that?” of things you’d like to create. You may or may not end up learning how to code; you might actually end up learning how to knit. Or cook. Or use Popcorn Maker. Regardless of where your interests lead you, you’ll have a better understanding of the world around you, and you’ll be better able to express yourself in ways that matter.

An HTML Microformat for Open Badges

Sometimes a person wanders by the #badges IRC channel and asks us how to issue a badge.

The response usually involves asking the user what kind of technical expertise they have; if they’re a programmer, we point them at the specification. If they’re not, well, we usually point them to a place like or credly.

One of the problems with pointing people at the specification is that it’s highly technical. JSON, the format in which a badge’s metadata is expressed, is unfamiliar to non-programmers and doesn’t support code comments that might make things a bit easier to grasp. Once a badge is hosted as JSON, the URL to the JSON file either needs to be opaquely “baked” into a PNG file, or it needs to be given to the Open Badges Issuer API behind the scenes, which requires additional programming. Furthermore, the JSON file needs to specify at least a criteria URL, which necessitates the creation of a human-readable HTML page.
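
For context, a hosted badge is a JSON file roughly along these lines. This is a loose sketch with simplified, illustrative field names and example URLs; the real schema lives in the specification:

```json
{
  "recipient": "recipient@example.org",
  "badge": {
    "name": "HTML Basics",
    "description": "Can structure a simple web page.",
    "image": "",
    "criteria": "",
    "issuer": {
      "name": "Example School",
      "origin": ""
    }
  }
}
```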

That’s a lot of parts.

But what if a badge were just a Web page, formatted in a consistent way that made it easy for machines to read? What if issuing a badge was as easy as filling out a form, copying out a resulting HTML snippet and pasting it into your blog or website?

Microformats can help us do this, because they were designed precisely for this kind of purpose.
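
To make that concrete, a badge-as-webpage might look something like this. The class names here are hypothetical, not the actual proposed vocabulary:

```html
<!-- Machine-readable via class names, human-readable as-is. -->
<div class="open-badge">
  <img class="badge-image" src="html-basics.png" alt="HTML Basics badge">
  <a class="badge-criteria" href="criteria.html">
    <span class="badge-name">HTML Basics</span>
  </a>
  <span class="badge-recipient">recipient@example.org</span>
</div>
```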

Now let’s look at the other solution: hosting one’s badges through third-party services like or credly. While incredibly easy, this approach means the badge metadata is hosted on, and therefore issued by, a domain that the badge’s creator doesn’t actually own. This can be particularly confusing for recipients and verifiers who discover that a badge was issued by a domain they may never have heard of.

When badges can be represented as HTML, however, we make it really easy for people to host badges on domains they already own. If someone’s presence on the internet is already represented by their blog or website, shouldn’t we make it as easy as possible for them to issue badges from there, rather than an unrelated domain?

I made a proof-of-concept Web service that allows you to play around with this idea at Just fill out the form, paste the resulting HTML snippet into your blog or website, and you’re good to go. The snippet even includes a “push to backpack” button that lets the recipient send the badge to their backpack.

One of the limitations with my service is that it’s really just a “bridge”, or hack, that translates between the Badge microformat I’m proposing and the JSON specification that Open Badge tools currently support. As a result, the issuer of the badge will appear to be rather than your actual blog or website. If we add an HTML microformat to the Open Badges specification, however, we won’t need a bridge, so this problem will go away.

For more information on the technical details of the microformat, including potential security concerns, see the README for the Github project.

On Enforcing Mandatory Code Review

Many software projects enforce mandatory code reviews, even for their most senior developers. While I’ve mentioned before that code reviews can be very useful, I also think that mandatory code reviews among trusted members of a software team can have a number of downsides.

First and foremost, developers have no common consensus on what code review actually means. How much time should it take? Does it mean painstakingly processing every line of code like a human computer? Does it mean evaluating the high-level architecture of a patch, or finding formatting errors? Does it mean just skimming the code and understanding it vaguely, but well enough to take care of it if the original author gets hit by a bus? Does it mean evaluating the big-O complexity of an algorithm? Does it mean all of these things?

Many people who ask for mandatory code reviews have no idea what they’re asking for—because it’s mandatory, so they just have to—and the people who do the code reviews are in a similar position. As a result, while code reviews often improve software quality, in some environments a mandatory review policy can amount to an ill-defined bureaucratic ritual of unknown value.

Because of this, I’ve seen a lot of reviewers—myself included—offer nothing but so-called “nitpicks” in their code review comments, as a way of appearing to perform a useful act while in fact slowing a project down, destroying morale, and optimizing for their own minimal time investment by engaging in days or weeks of asynchronous pedantry. Other times, because reviewers have been asked to do something extremely vague, they often procrastinate, which causes code to bit-rot and sets a project back even further.

But what happens when we make code reviews voluntary, instead of mandatory? Well, then it’s called asking for advice.

Many of the most useful “code reviews” I’ve experienced came not from asking someone’s permission to land code, but from simply being uncertain of very specific aspects of my own code, and asking my peers for help. Sometimes this has come in the form of a github pull request; other times it’s been in the form of pair programming; other times it’s just involved me dumping some source code into a webpage and asking someone over IRC about it.

There are a number of things I like about this practice. The first is that it’s my choice to ask for advice, which is far more empowering than asking for permission, which is what a mandatory code review policy implies. Even if I had the exact same conversations through mandatory code reviews that I would through voluntary code reviews, I would still enjoy the latter more, because they’re my decision rather than my obligation.

Another advantage of voluntary code reviews is that I know exactly what I’m asking for. If I feel insecure about my own code, I can introspect and understand why I’m feeling that way, which leads me to specific questions. Often different questions are best answered by different people, some of whom may even work on different projects; when I ask those people to review my code, I’m requesting very specific things that are highly relevant to their expertise. I’m also targeting my questions in a way that ensures that I don’t take up too much of their time. And because it’s viewed by them as a well-defined, time-boxed favor rather than a vague obligation, they’re typically much more responsive and excited about helping me than they would be if it were a mandatory code review.

In conclusion, rather than decreasing software quality, I believe that the social incentives inherent in voluntary code review policies encourage developers to take ownership of the code they write by paying close attention to its needs and valuing the time of others who may need to take a look at it.

Building Bridges Between GUIs and Code With Markup APIs

Recently the Twitter Bootstrap documentation gave a name to something that I’ve been excited about for a pretty long time: Markup API.

Markup APIs give superpowers to HTML. Through the use of class attributes, data attributes, X-Tags, or other conventions, they effectively extend the behavior of HTML, turning it into a kind of magic ink. Favorite examples of mine include Twitter Bootstrap, Wowhead Tooltips, and my own Instapoppin.

The advantages of a markup API over a JavaScript API are numerous:

  • They mean that an author only needs to know HTML, whose syntax is very easy to learn, rather than JavaScript, whose syntax is comparatively difficult to learn.
  • Because the API is in HTML rather than JavaScript, it’s declarative rather than imperative. This makes it much easier for development tools to intuit what a user is trying to do—by virtue of a user specifying what they want rather than how to do it. And when a development tool has a clearer idea of what the user wants, it can offer more useful context-sensitive help or error messaging.
  • Because of HTML’s simple and declarative structure, it’s easy for tools to modify hand-written HTML, especially with a library like Slowparse, which helps ensure that whitespace and other formatting is preserved. Doing the same with JavaScript, while possible with libraries like esprima, can be difficult because the language is so complex and dynamic.
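
To make the declarative flavor concrete, here’s a minimal sketch of how a markup API might be wired up. The data-tooltip attribute and all names here are hypothetical, and plain objects stand in for DOM elements so the sketch runs outside a browser:

```javascript
// A sketch of the declarative idea behind a markup API: behavior is
// attached to elements based on the attributes they declare, rather
// than through imperative JavaScript calls.
function activateTooltips(elements) {
  return elements
    .filter((el) => el.dataset && 'tooltip' in el.dataset)
    .map((el) => ({
      target: el.id,
      show: () => 'showing: ' + el.dataset.tooltip,
    }));
}

// The page author writes only markup like:
//   <button id="save" data-tooltip="Saves your work">Save</button>
const page = [
  { id: 'save', dataset: { tooltip: 'Saves your work' } },
  { id: 'logo', dataset: {} },
];
const tooltips = activateTooltips(page);
console.log(tooltips[0].show()); // → "showing: Saves your work"
```

In a real page, the same loop would run over something like document.querySelectorAll('[data-tooltip]'), and the author would never touch JavaScript at all.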

These advantages make it possible to create GUI affordances atop hand-coded HTML that make it much easier to write. As an example of this, I hacked up a prototype slideshow demo and a physics demo in July of last year. Dragging an element with the class thimble-movable in the preview pane changes (or adds) CSS absolute positioning properties in the source code pane in real time, and holding down the shift key modifies width and height. This allows users to size and position elements in a way that even a professional developer would find far preferable to the usual “guess a number and see how it looks” method. Yet this mechanism still places primacy on the original source code; the GUI is simply a humane interface for changing it.

This is the reverse of most authoring tools with an “export to HTML” feature, whereby an opaque internal data model is compiled into a blob of HTML, CSS, and JavaScript that can’t be re-imported into the authoring tool. Pedagogically, this is unfortunate because it means that there’s a high cost to ever leaving the authoring tool—effectively making the authoring tool its own kind of “walled garden”. Such applications could greatly facilitate the learning of HTML and CSS by defining a markup API for their content and allowing end-users to effortlessly switch between hand-coding HTML/CSS and using a graphical user interface that does it for them.

Building Experiences That Work Like The Web

Much has been said about the greatness of the Web, yet most websites don’t actually work like the Web does. And some experiences that aren’t even on the web can still embody its spirit better than the average site.

Here are three webbish characteristics that I want to see in every site I use, and which I try my best to implement in anything I build.

  • “View Source” for every piece of user-generated content. Many sites that support user comments allow users to use some kind of markup language to format their responses. Flickr allows some HTML with shortcuts for embedding other photos, user avatars, and photo sets; Github permits a delicious smorgasbord of HTML and Markdown.

    The more powerful a site’s language for content creation, the more likely it is that one user will see another’s content and ask, “how did they do that?”. If sites like Flickr and Github added a tiny “view source” button next to every comment, it would become much easier for users to create great things and learn from one another.

    I should note that by “source” I don’t necessarily mean plain-text source code: content created by Popcorn Maker, for instance, supports a non-textual view-source by making it easy for any user to transition from viewing a video to deconstructing and remixing it.

  • Outbound Linkability. Every piece of user-generated content should be capable of “pointing at” other things in the world, preferably in a variety of ways that support multiple modes of expression. For instance, a commenting system should at the very least make it trivially easy to insert a clickable hyperlink into a comment; one step better is to allow a user to link particular words to a URL, as with the <a> tag in HTML. Even better is to allow users to embed the content directly into their own content, as with the <img> and <iframe> tags.

  • Inbound Linkability. Conversely, any piece of user-generated content should be capable of being “pointed at” from anywhere else in the world. At the very least, this means permalinks for every piece of content, such as a user comment. Even better is making every piece of content embeddable, so that other places in the world can frame your content in different contexts.

As far as I know, the primary reason most sites don’t implement some of these features is security concerns. For example, a naïve implementation of outbound linkability would leave a site open to link farming, while allowing anyone to embed any page on your site in an <iframe> could make you vulnerable to clickjacking. Most sites “play it safe” by simply disallowing such things; while this is perfectly understandable, it is also unfortunate, as they forfeit much of what makes the Web such a generative medium.
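
For example, one common clickjacking mitigation is an HTTP response header that tells browsers not to render the page inside a frame on other sites:

```
X-Frame-Options: SAMEORIGIN
```

The trade-off, of course, is that this blanket policy also blocks the legitimate embedding described above.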

I’ve learned a lot about how to mitigate some of these attacks while working through the security model for Thimble, and I’m beginning to think that it might be useful to document some of this thinking so it’s easier for people to create things that work more like the Web. If you think this is a good (or bad) idea, feel free to tweet @toolness.

Questions: Designing for Accessibility on the Web

Marco Zehe recently wrote a good, sobering blog post comparing the accessibility of Web apps to those of native ones.

Much of what I’ve seen on supporting accessibility on the Web has to do with using the right standards: always providing alt attributes for images, for example, or adding semantic ARIA metadata to one’s markup.

As a designer, however, I don’t have much interest in these standards because they don’t seem to address human factors. What’s more interesting to me is understanding how screen readers present the user interface to vision-impaired people and how usable that interface is to them. This would parallel my own experience of designing for non-impaired users, where I use my understanding of human-computer interaction to create interfaces from first principles.

I’ve been meaning to actually get a screen reader and try browsing the Web for a few days to get a better idea of how to build usable interfaces, but I haven’t gotten around to it yet, and I’m also not sure if it’s the best way to empathize with vision-impaired users. In any case, though, my general concern is that there seems to be a distinct lack of material on “how to build truly usable web applications for the vision impaired.” Instead, I only see articles on how to be ARIA standards-compliant, which tells me nothing about the actual human factors involved in designing for accessibility.

So, I’ll be spending some time looking for such resources, and trying to get a better idea of what it’s like to use the internet as someone who is vision-impaired. If you know of any good pointers, please feel free to tweet at me. Thanks!

Learning and Grammatical Forgiveness

HTML is a very interesting computer language because, like human languages, most things that interpret it are very forgiving.

For instance, did you know that the following HTML is technically invalid?

  <source src="movie.mp4"></source>

It’s invalid because <source> is a so-called void element: since it can’t have any content inside it, it takes no closing tag at all. The <img> tag works the same way. The technically correct way to write the above HTML snippet is as follows:

  <source src="movie.mp4">

However, in practice, all Web browsers will interpret both of these snippets the exact same way. When a browser sees the closing </source> tag on the first snippet, it realizes the “mistake” the author has made, and simply pretends it isn’t there.
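
Incidentally, the full set of void elements is small and fixed. Here’s a sketch of how a forgiving parser might decide to ignore a stray closing tag; the element list below reflects the HTML5 spec:

```javascript
// The HTML spec defines a fixed set of "void elements" that never
// take a closing tag. A forgiving parser can consult a set like this
// to decide that a stray </source> should simply be ignored.
const VOID_ELEMENTS = new Set([
  'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input',
  'link', 'meta', 'param', 'source', 'track', 'wbr',
]);

function needsClosingTag(tagName) {
  return !VOID_ELEMENTS.has(tagName.toLowerCase());
}

console.log(needsClosingTag('source')); // → false
console.log(needsClosingTag('video'));  // → true
```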

What’s interesting to me is the way this mirrors human languages, and what it means for teaching. For instance, the following sentence is grammatically incorrect:

The dog loves it's owner.

However, no one who knows English will actually be confused by the meaning of the statement.

When I was trained as an adult literacy tutor several years ago, one of the most important principles we were taught was that fostering a love for writing was vastly more important than grammatical correctness. The “red pen” commonly used by school teachers for correcting grammatical errors was seen as anathema to this: when we found a grammatical error in a novice writer’s work, we were encouraged to ignore it unless it actually made the piece confusing or ambiguous for readers in a way that the author didn’t intend. Otherwise, the novice writer would become quickly distracted and discouraged by all their “mistakes” and view writing as a minefield rather than a way to communicate their thoughts and ideas.

We’re running into similar issues in the design of the Webpage Maker. On one hand, the fact that Web browsers are so forgiving when interpreting HTML enables us to follow a philosophy similar to that of progressive adult literacy tutors.

But sometimes, the forgiving nature of Web browsers backfires: they actually render a document that is vastly different from the author’s intent, which is just as frustrating as a pedantic nitpicker. We’ve created a library called Slowparse—soon to be renamed—which attempts to assist with this, providing the logic needed for a user interface to gently inform users of potential ways their HTML and CSS code might be misinterpreted by machines. A full specification of errors and warnings is also available, as is an interactive demo that uses the library to provide real-time feedback to users.

It’s been interesting to see how different Slowparse is from an HTML/CSS validator, whose goal is not one of learning, but of ensuring conformance to a specification. From a learning perspective, a validator is like the pedantic teacher who loves their red pen: some of its feedback is quite useful, but the rest is likely to confuse and intimidate a newcomer.

Partly as a result of its learning goals, Slowparse actually “warns” the user about things that are technically valid HTML/CSS but likely don’t reflect the intent of the author. One current example of this involves the use of unquoted attributes in HTML5, though that particular example is still subject to change.
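
A drastically simplified sketch of this kind of warning might look like the following. This is not Slowparse’s actual implementation or API, just an illustration of the idea:

```javascript
// Naively flag attribute values that aren't quoted: valid HTML5, but
// easy for a beginner to get wrong once a value contains a space.
// (This regex assumes the input is markup; a real checker would use
// a proper tokenizer to avoid false positives in text content.)
function findUnquotedAttributes(html) {
  const warnings = [];
  // Matches name=value pairs whose value isn't wrapped in quotes.
  const attrPattern = /(\w+)=([^\s"'>]+)/g;
  let match;
  while ((match = attrPattern.exec(html)) !== null) {
    warnings.push({ attribute: match[1], value: match[2] });
  }
  return warnings;
}

console.log(findUnquotedAttributes('<a href=index.html>hi</a>'));
// → [ { attribute: 'href', value: 'index.html' } ]
```

The tokenizing work needed to make this robust is exactly the kind of thing Slowparse takes care of.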

At this point, I think the challenge will be to work with our learning team and user test our interface to the point that we achieve a good balance between being a pedantic nitpicker and providing useful feedback that helps users as quickly as possible. In my opinion, if we do things right, we’ll help people develop a love for HTML and CSS—even if what they write may technically be “grammatically incorrect.”

Prototyping Presentations

Presentations take a long time to make. Particularly when I’m just conceptualizing my presentation, it takes a lot of work to record myself talking, use a tool to sync it with the proper visuals, and then repeat the recording and syncing process as I iterate on the content.

I recently made a simple tool called Quickpreso to make the process of “prototyping” a presentation quicker, and more like writing a simple HTML page.

A presentation in Quickpreso is just an HTML file with a series of alternating lines for visuals and voice-overs, like this:

<img src="slide-one.jpg">

This text will be spoken for slide one.

<a href="">I am slide two.</a>

This text will be spoken for slide two.

The visuals can contain any HTML markup. Each section of voice-over text is rendered by the OS X say command; the sections are then concatenated into a single audio file by ffmpeg. Finally, the visuals are synced to the audio in a Web page using popcorn.js.
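
The alternating-line format is simple enough that parsing it can be sketched in a few lines. The real tool is written in Python, and the function and field names below are illustrative rather than Quickpreso’s actual code:

```javascript
// A sketch of parsing an alternating-line deck: even entries are
// visuals (HTML), odd entries are voice-over text. Blank separator
// lines are discarded.
function parseDeck(source) {
  const lines = source
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
  const slides = [];
  for (let i = 0; i < lines.length; i += 2) {
    slides.push({ visual: lines[i], voiceover: lines[i + 1] || '' });
  }
  return slides;
}

const deck = parseDeck([
  '<img src="slide-one.jpg">',
  'This text will be spoken for slide one.',
  '<h1>I am slide two.</h1>',
  'This text will be spoken for slide two.',
].join('\n'));
console.log(deck.length); // → 2
```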

Quick iteration is facilitated by a simple Python web server that regenerates the audio file when it detects changes to the voice over text. The final product is all static content that can be served from any web server.

I used this tool to create a Webmaking for Knitters presentation in January. The result is quite robotic, obviously, though it can be made a little more natural-sounding if newer voices from OS X Lion are used (I’m still on Snow Leopard).

One particular advantage of this approach, however, is that you get subtitles/closed-captioning for free. There’s also nothing preventing you from re-recording the final audio in your own voice once you’re happy with your prototype.

The source code is available on Github at toolness/quickpreso. The code is in an alpha state, so your mileage may vary; fortunately, though, the source code is minuscule, so understanding and changing it shouldn’t be hard.