My First Elm App

I recently wrote my first application in Elm, which is promoted as a “delightful language for reliable webapps”.

The application is an accessible color palette builder, which builds on the excellent design of the 18F Visual Identity Guide to provide visual designers with real-time feedback on the accessibility of their palettes:

Screenshot of the accessible color palette builder application

Elm is elegantly simple

What strikes me most about Elm is how easy it is to learn, and how quickly one can start using it to build useful things. One can learn it through Elm’s official guide, though I personally used Elm: The Pragmatic Way by The Pragmatic Studio, which I highly recommend.

When I first learned Elm in March of 2016, I actually wouldn’t have been so quick to recommend it: at that time, building anything useful required learning about something called signals, which was a significant conceptual hurdle; but after signals were dropped in Elm 0.17, things got a lot simpler. This illustrates one of my favorite things about the design of Elm: its creator and community are constantly trying to make it easier to learn without losing any of its power.

Even if one doesn’t end up using Elm in their production code, I think Elm might actually make it easier to learn concepts that can still be used in JavaScript. For any JavaScript programmers who might currently be overwhelmed by the sheer cognitive load introduced by React, Redux, TypeScript, and concepts like immutability and functional programming, I highly recommend dabbling in Elm. It’s a lot simpler than the analogous set of JS tooling, and it’s often just as (if not more) powerful. And even if you decide to ultimately go with the JS tooling instead, it should be easier because you’ll already have learned the fundamental concepts in Elm.
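For JavaScript programmers curious what those fundamentals feel like, the core idea of the Elm architecture can be sketched in plain JavaScript: a pure update function that never mutates state, only returns a new one. The message names and state shape here are purely illustrative:

```javascript
// A sketch of Elm-style state management in plain JavaScript:
// update() is a pure function that takes the current state and a
// message, and returns a brand-new state without mutating the old one.
function update(state, msg) {
  switch (msg.type) {
    case "INCREMENT":
      return { ...state, count: state.count + 1 };
    case "RESET":
      return { ...state, count: 0 };
    default:
      return state;
  }
}

const initial = { count: 0 };
const next = update(update(initial, { type: "INCREMENT" }), { type: "INCREMENT" });
// next.count === 2, and initial is untouched: initial.count === 0
```

This is essentially the same discipline that Redux asks of JavaScript programmers, but in Elm the compiler enforces it rather than leaving it to convention.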

All that said, though, writing my first app in Elm wasn’t without its share of difficulties.

Interacting with DOM-based JavaScript

Easily the biggest frustration of using Elm was that, while it does include robust support for interoperating with legacy JavaScript code, its virtual DOM implementation currently has no analogue to React’s component lifecycle. This feature of React has been critical for me when it comes to integrating third-party JS widgets with my code, as it essentially serves as an “escape hatch” from the virtual DOM. The lack of such a feature in Elm made it difficult to integrate jscolor into my app.

A nice standards-driven alternative to component lifecycle methods might have been the use of HTML5 Custom Elements; however, Elm’s virtual DOM has no support for this either–though I suspect it wouldn’t be hard to add, and I’d be interested in contributing support for it, if it’s something the community agrees with.


No runtime exceptions

One of the most promoted advantages of Elm is that it has no runtime exceptions. That is, unless your code explicitly calls out to JavaScript for some of its functionality, it is impossible for a line of code to cause your program to “crash” and become unusable. This is something that even TypeScript has a hard time guaranteeing (though it does still reduce runtime exceptions significantly).

However, this isn’t to say that one’s code will be free of bugs: accidentally using a < instead of a > when comparing two numbers will obviously result in unintentional behavior that no static analyzer could detect. And due to Elm’s nature as a functional programming language, I had some trouble figuring out exactly how to debug my program in the few instances it went awry. I had no access to a conventional debugger, and adding logging statements into the midst of my code sometimes felt like an engineering feat by itself, but this could easily be due to my unfamiliarity with debugging functional code. Not being able to write statements takes some getting used to!

Outside of that, though, not having to worry about–and constantly guard against–every line of my code potentially throwing an exception was still an enormous weight off my back. And Elm’s ridiculously friendly error messages made conversing with the compiler delightful.

Some skepticism

While Elm was enjoyable to learn and use, I have to admit that I’m not fully sold on the functional notion of “immutability at any cost”. While it’s a great idea in theory, in practice I’m not certain I’ve run into a lot of situations where mutability was a significant source of bugs–as long as I was disciplined about keeping things immutable to a reasonable degree.

I’m also not sure if the limitations posed by being forced to be immutable at all times outweigh the advantages of, say, a language that encourages immutability but allows mutation if needed. An example of the latter might be the language Rust, where data is immutable by default but can be made mutable through the use of a mut keyword. Like Elm, that language has a similar focus on rock-solid reliability, but does so without restricting the programmer to a purely functional paradigm.

In any case, I’m looking forward to working more with Elm and seeing the language continue to evolve.

Discovering Accessibility

My final project at the Mozilla Foundation was the Teach site, the first content-based website I’ve helped create in quite some time. During the site’s development, I finally gave myself the time to learn about a practice I’d been putting off for an embarrassingly long time: accessibility.

One of the problems I’ve had with a lot of guides on accessibility is that they focus on standards instead of people. As a design-driven engineer, I find standards necessary but not sufficient to create compelling user experiences. What I really wanted to know about was not the ARIA markup to use for my code, but how to empathize with the way “extreme users”–people with disabilities–use the Web.

I finally found a book with such a holistic approach to accessibility called A Web For Everyone by Sarah Horton and Whitney Quesenbery. I’m still not done reading it, but I highly recommend it.

Stage 1: Accessibility Is Awesome!

The first thing I did in an attempt to empathize with users of screen readers was to actually be proactive and learn to use a screen reader. The first one I learned how to use was the open-source NVDA screen reader for Windows. Learning how to use it actually reminded me a bit of learning vi and emacs for the first time: for example, because I couldn’t visually scan through a page to see its headings, I had to learn special keyboard commands to advance to the next and previous heading.

Obviously, however, I am a very particular kind of user when I use a screen reader: because I don’t rely on auditory information as much as a blind person does, I can’t listen to a screen reader’s narration very fast. And because I’m a highly technical user, I’m good at remembering lots of keyboard shortcuts. So it was useful to compare my own use of screen readers against Ginny Redish’s paper on Observing Users Who Work With Screen Readers (PDF).

After learning the basics of NVDA, I found Terrill Thompson’s blog post on Good Examples of Accessible Web Sites and tried visiting some of them with my shiny new screen reader. Doing this gave me lots of inspiration on how to make my own sites more accessible.

The web service was also quite helpful in educating me on best practices my existing websites lacked, and The Paciello Group’s Web Components Punch List was helpful when I needed to create or evaluate custom UI widgets.

All of this has constituted what I’ve begun to call my “honeymoon” with accessibility. It was quite satisfying to empathize with the needs of extreme users, and I was excited about creating sites that were delightful to use with NVDA.

Stage 2: Accessibility Is Hard!

What ended up being much harder, though, was actually building a delightful experience for users who might be using any screen reader.

The second screen reader I learned how to use was Apple’s excellent VoiceOver, which comes built-in with all OS X and iOS devices. And like the early days of the Web, when a delightful experience on one browser was completely unusable in another, I often found that my hard work to improve my site’s usability on NVDA often made the site less usable on VoiceOver. For example, as Steve Faulkner has documented, the behavior of the ARIA role="alert" varies immensely across different browser and screen reader combinations, which led to some frustrating trade-offs on the Teach site.

One potential short-term solution to this might be for sites to have slightly different code depending on the particular browser/screen-reader combination being used. Aside from being a bad idea for a number of reasons, though, it’s also technically impossible–the current screen reader isn’t reflected in navigator.userAgent or anything else.

So, that’s the current situation I find myself in with respect to accessibility: creating accessible static content is easy and helps extreme users, but creating accessible rich internet applications is quite difficult because screen readers implement the standards so differently. I’m eagerly hoping that this situation improves over the coming years.

Does Privacy Matter?

A few years ago, I made a tool called Collusion in an attempt to better understand how websites I’d never even heard of were tracking my adventures across the Internet.

The results my tool showed me were at best a bit creepy. I didn’t terribly mind that third parties I’d never heard of had been watching me, in collusion with the sites I visited. I just wish they’d asked me first (through something more approachable than an inscrutable privacy policy).

But, as the old adage goes, I had nothing to hide. What do I care if some advertising companies use my data to offer me better services? Or even if the NSA mines it to determine whether I’m a terrorist?

I’m still struggling to answer these questions. I don’t know if I’ll ever be able to answer them coherently, but after reading a few books, I have some ideas.

For one thing, I don’t think it matters whether one has nothing to hide. What matters is whether one looks like they have something to hide.

One of the most invisible things about the Internet is that there are hordes of robots constantly scrutinizing your aggregate online behavior and determining whether you fit a certain profile. If you do, as Daniel Solove argues, your life could become a bit like that of Josef K. from Kafka’s The Trial. Or—to cite a true story—like that of Sarah Abdurrahman of On The Media, whose family was detained and aggressively interrogated for several hours at the US-Canada border for unknown reasons.

What determines whether you look like you have something to hide? The robot builders have it in their best interests to keep that secret: otherwise, the people with something to hide would simply start gaming the system. Yet this can also result in a chilling effect: innocent people self-censoring their online behavior based on what they think the robots might be looking for.

These robots don’t have to be working for the government, either. They could be working for, say, your health insurance company, looking for prior conditions that you might be hiding from them. The robots might even ostensibly work for “the people” in the name of transparency and openness, as Evgeny Morozov argues, distorting the public’s perception of you in ways that you can’t control.

What can one do to protect their privacy? One of the problems with using a tool like PGP or Tor to protect one’s privacy is that it paradoxically makes one look like they’re hiding something. When everyone lives in a glass house, you’ll look suspicious if you don’t.

Privacy problems are systemic, and I think their protections are necessarily systemic too: in order for one not to look like they’re trying to hide something, privacy needs to be a default, not something one opts in to. Not only does this need to be done with technology, but it also needs to be accomplished through legislation and social norms.

Clarifying Coding

With the upcoming Hour of Code, there’s been a lot of confusion as to the definition of what “coding” is and why it’s useful, and I thought I’d contribute my thoughts.

Rather than talking about “coding”, I prefer to think of “communicating with computers”. Coding, depending on its definition, is one of many ways that a human can communicate with a computer; but I feel that the word “communicating” is more powerful than “coding” because it gets to the heart of why we use computers in the first place.

We communicate with computers for many different reasons: to express ourselves, to create solutions to problems, to reuse solutions that others have created. At a minimum, this requires basic explorational literacy: knowing how to use a mouse and keyboard, using them to navigate an operating system and the Web, and so forth. Nouns in this language of interaction include terms like application, browser tab and URL; verbs include click, search, and paste.

These sorts of activities aren’t purely consumptive: we express ourselves every time we write a Facebook post, use a word processor, or take a photo and upload it to Instagram. Just because someone’s literacies are limited to this baseline doesn’t mean they can’t do incredibly creative things with them.

And yet communicating with computers at this level may still prevent us from doing what we want. Many of our nouns, like application, are difficult to create or modify using the baseline literacies alone. Sometimes we need to learn the more advanced skills that were used to create the kinds of things that we want to build or modify.

This is usually how coders learn how to code: they see the digital world around them and ask, “how was that made?” Repeatedly asking this question of everything one sees eventually leads to something one might call “coding”.

This is, however, a situation where the journey may be more important than the destination: taking something you really care about and asking how it’s made–or conversely, taking something imaginary you’d like to build and asking how it might be built–is both more useful and edifying than learning “coding” in the abstract. Indeed, learning “coding” without a context could easily make it the next Algebra II, which is a terrifying prospect.

So, my recommendation: don’t embark on a journey to “learn to code”. Just ask “how was that made?” of things that interest you, and ask “how might one build that?” of things you’d like to create. You may or may not end up learning how to code; you might actually end up learning how to knit. Or cook. Or use Popcorn Maker. Regardless of where your interests lead you, you’ll have a better understanding of the world around you, and you’ll be better able to express yourself in ways that matter.

How Colorblindness Blinds Us

When will we (finally) become a colorblind society? The pursuit of colorblindness makes people impatient. With courage, we should respond: Hopefully never.

— Michelle Alexander

In her excellent book The New Jim Crow, Michelle Alexander argues that the notion of colorblindness is a deeply flawed principle that has proved catastrophic for African Americans in the post-civil rights era.

This is a notion that I find confusing, and I don’t claim to fully understand Alexander’s argument. One aspect I can relate to, however, is the effect that occurs when we set unreasonable expectations for our own inevitable prejudices.

I was particularly struck by two seemingly trivial racially-charged faux pas that occurred earlier this year. The first occurred when Lisa Lampanelli, a celebrity I’d never heard of, posted a Tweet calling a friend “my nigga”.

The other was when actress Julianne Hough decided to wear blackface as part of her Halloween costume.

What surprised me wasn’t the incidents themselves, but the public’s response to them, which was to inundate the celebrities with scorn and derision. No discussions were had, and no rationales were given: the actions of these celebrities were instantaneously judged vile and offensive, and everyone was told never to repeat the same mistakes.

Race isn’t brought up very often in today’s public discourse; when it is, it usually follows this familiar pattern of a faux pas followed by scorn and a hasty apology. This is partly due to the colorblind principle, which polarizes the notion of prejudice: we’re supposed to be colorblind, and if we’re not, we’re a racist bigot. Therefore, it’s safest to never, ever mention race, at the risk of being labeled.

In 2008, a study at Northwestern University’s Department of Social Psychology found that “white subjects [are] so afraid of being branded as racist, they indicated a preference for avoiding all contact with black people.” This is something I’ve felt personally, despite not being white. And it’s no surprise, given the outcome of the aforementioned incidents and countless others like them.

The net effect of our reaction to race in public discourse—that is, the instinct to brand anyone who isn’t colorblind as racist—blinds us to everything important in our culture that is actually race-based, such as the multitude of issues surrounding the school-to-prison pipeline that Alexander addresses in her book.

In today’s world, I’d argue that racism actually has very little to do with calling a friend “my nigga” or wearing blackface. It has everything to do with the sense of fear I feel when I realize a black man is walking behind me on my way home. But until we stop pretending that we’re colorblind and building a culture of fear around conversations about race, we won’t even realize this kind of racism exists, let alone have a truthful dialogue about it.

Audio Things!

I’ve really gotten into podcasts this summer. Normally, I find them difficult to focus my attention on, but some habits I’ve picked up recently have helped with this: I started running regularly, and I started playing Euro Truck Simulator 2. In fact, I liked the latter so much that I started a blog about it.

Just as French Fries are my delivery vehicles for ketchup, these new activities are my delivery vehicles for podcasts.

Well, I haven’t only been listening to podcasts. In particular, while driving my virtual truck around Europe, I’ve been listening to the BBC World Service. This was largely motivated by my desire to feel European, but it’s an excellent station nonetheless.

I’ve also been listening to audiobooks, which has been made particularly enjoyable by Amazon’s Whispersync for Voice technology. This allows me to effortlessly switch between the Kindle and audio versions of a book, depending on the context (both media can be purchased together for a low price). Using this, I alternately read and listened to Michelle Alexander’s The New Jim Crow, which I highly recommend to anyone living in America.

And then there are the podcasts. Some of them are the staples that most people I know have heard of, like Radiolab and This American Life; I listen to them and they’re amazing for pretty obvious reasons. But there are a few potentially lesser-known ones I’d like to highlight:

  • Life of the Law is my latest obsession. I first discovered this through the 99% Invisible episode An Architect’s Code, which both shows collaborated on. It’s a fascinating podcast that contextualizes our legal system in ways that make people like me, who are normally bored to tears by the law, utterly enthralled. This is probably aided by the fact that, like Planet Money—another unexpectedly fascinating show—every episode is relatively short and focused.
  • On The Media is consistently interesting to me because it examines the way the media covers current events. I’m not always interested in current events in and of themselves, but I am fascinated by the way the media covers them, so this podcast is often my gateway to understanding what’s going on in the world.
  • Spark was recommended to me by Mark Surman and I love it because it’s a show about technology for people who aren’t, well, obsessed with it. This means that topics often focus on the impact of technology on society, with a great balance of coverage between its positive and negative effects.

I’ve been using an iPhone app called Downcast to listen to these, and have found it much more convenient and usable than the default Podcasts app.

If there are any podcasts you regularly listen to and think I might enjoy, please feel free to tweet your suggestions @toolness.

A HTML Microformat for Open Badges

Sometimes a person wanders by the #badges IRC channel and asks us how to issue a badge.

The response usually involves asking the user what kind of technical expertise they have: if they’re a programmer, we point them at the specification. If they’re not, well, we usually point them to a third-party service like credly.

One of the problems with pointing people at the specification is that it’s highly technical. JSON, the format the badge takes, is unfamiliar to non-programmers and doesn’t support code comments to make things a bit easier to grasp. Once a badge is hosted as JSON, the URL to the JSON file needs to either be opaquely “baked” into a PNG file, or it needs to be given to the Open Badges Issuer API behind the scenes, which requires additional programming. Furthermore, the JSON file needs to at least specify a criteria URL, which necessitates the creation of a human-readable HTML page.
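To make the pieces concrete, here’s roughly what the hosted JSON looks like. The field names follow the general shape of the Open Badges specification, but exact requirements vary by spec version, so treat this as an illustrative sketch rather than a normative example:

```javascript
// A rough sketch of a hosted badge assertion. Values are made up;
// field names loosely follow the Open Badges spec's general shape.
const assertion = {
  recipient: "somebody@example.org",
  badge: {
    name: "HTML Basics",
    description: "Completed an introduction to HTML",
    image: "https://example.org/badge.png",
    // the criteria field must point at a human-readable HTML page,
    // which the issuer also has to create and host
    criteria: "https://example.org/criteria.html",
    issuer: { name: "Example School", origin: "https://example.org" }
  }
};
```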

That’s a lot of parts.

But what if a badge were just a Web page, formatted in a consistent way that made it easy for machines to read? What if issuing a badge was as easy as filling out a form, copying out a resulting HTML snippet and pasting it into your blog or website?

Microformats can help us do this, because they were designed precisely for this kind of purpose.

Now let’s look at the other solution: hosting one’s badges through a third-party service like credly. While incredibly easy, one of the problems is that the badge metadata is hosted on—and therefore issued by—a domain that the badge’s creator doesn’t actually own. This will be particularly confusing for recipients and verifiers who discover that their badge was issued by a domain they may never have heard of.

When badges can be represented as HTML, however, we make it really easy for people to host badges on domains they already own. If someone’s presence on the internet is already represented by their blog or website, shouldn’t we make it as easy as possible for them to issue badges from there, rather than an unrelated domain?

I made a proof-of-concept Web service that allows you to play around with this idea. Just fill out the form and paste the resulting HTML snippet into your blog or website, and you’re good to go. The snippet even includes a “push to backpack” button that allows the recipient to push the badge to their backpack.

One of the limitations of my service is that it’s really just a “bridge”, or hack, that translates between the Badge microformat I’m proposing and the JSON specification that Open Badge tools currently support. As a result, the issuer of the badge will appear to be my bridge service rather than your actual blog or website. If we add an HTML microformat to the Open Badges specification, however, we won’t need a bridge, and this problem will go away.
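A sketch of the translation such a bridge performs: fields scraped from a microformat-annotated HTML snippet get mapped onto the JSON structure that existing Open Badge tools expect. The microformat class names used here (badge-name, badge-criteria, and so on) are my own illustration, not part of any accepted specification:

```javascript
// Hypothetical bridge step: map fields extracted from microformat
// class names onto the JSON shape badge-consuming tools expect.
// Both the class names and the JSON shape are illustrative.
function microformatToAssertion(fields) {
  return {
    recipient: fields["badge-recipient"],
    badge: {
      name: fields["badge-name"],
      description: fields["badge-description"],
      image: fields["badge-image"],
      criteria: fields["badge-criteria"],
      issuer: { origin: fields["badge-origin"] }
    }
  };
}

const translated = microformatToAssertion({
  "badge-recipient": "somebody@example.org",
  "badge-name": "HTML Basics",
  "badge-description": "Completed an introduction to HTML",
  "badge-image": "https://example.org/badge.png",
  "badge-criteria": "https://example.org/criteria.html",
  "badge-origin": "https://example.org"
});
```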

For more information on the technical details of the microformat, including potential security concerns, see the README for the Github project.

On Enforcing Mandatory Code Review

Many software projects enforce mandatory code reviews, even for their most senior developers. While I’ve mentioned before that code reviews can be very useful, I also think that mandatory code reviews among trusted members of a software team can have a number of downsides.

First and foremost, developers don’t have a common consensus on what code review actually means. How much time should it take? Does it mean acting like a human computer, painstakingly processing every line of code? Does it mean evaluating the high-level architecture of a patch, or finding formatting errors? Does it mean just skimming the code and vaguely understanding it enough to take care of it if the original author gets hit by a bus? Does it mean evaluating the big-O complexity of an algorithm? Does it mean all of these things?

Many people who ask for mandatory code reviews have no idea what they’re asking for—because it’s mandatory, so they just have to—and the people who do the code reviews are in a similar position. As a result, while code reviews often improve software quality, in some environments a mandatory review policy can amount to an ill-defined bureaucratic ritual of unknown value.

Because of this, I’ve seen a lot of reviewers—myself included—offer nothing but so-called “nitpicks” in their code review comments, as a way of appearing to perform a useful act while in fact slowing a project down, destroying morale, and optimizing for their own minimal time investment by engaging in days or weeks of asynchronous pedantry. Other times, because reviewers have been asked to do something extremely vague, they often procrastinate, which causes code to bit-rot and sets a project back even further.

But what happens when we make code reviews voluntary, instead of mandatory? Well, then it’s called asking for advice.

Many of the most useful “code reviews” I’ve experienced came not from asking someone’s permission to land code, but from simply being uncertain of very specific aspects of my own code, and asking my peers for help. Sometimes this has come in the form of a github pull request; other times it’s been in the form of pair programming; other times it’s just involved me dumping some source code into a webpage and asking someone over IRC about it.

There are a number of things I like about this practice. The first is that it’s my choice to ask for advice, which is far more empowering than asking for permission, which is what a mandatory code review policy implies. Even if I had the exact same conversations through mandatory code reviews that I would through voluntary code reviews, I would still enjoy the latter more, because they’re my decision rather than my obligation.

Another advantage of voluntary code reviews is that I know exactly what I’m asking for. If I feel insecure about my own code, I can introspect and understand why I’m feeling that way, which leads me to specific questions. Often different questions are best answered by different people, some of whom may even work on different projects; when I ask those people to review my code, I’m requesting very specific things that are highly relevant to their expertise. I’m also targeting my questions in a way that ensures that I don’t take up too much of their time. And because it’s viewed by them as a well-defined, time-boxed favor rather than a vague obligation, they’re typically much more responsive and excited about helping me than they would be if it were a mandatory code review.

In conclusion, rather than decreasing software quality, I believe that the social incentives inherent in voluntary code review policies encourage developers to take ownership of the code they write by paying close attention to its needs and valuing the time of others who may need to take a look at it.

Building Bridges Between GUIs and Code With Markup APIs

Recently the Twitter Bootstrap documentation gave a name to something that I’ve been excited about for a pretty long time: Markup API.

Markup APIs give superpowers to HTML. Through the use of class attributes, data attributes, X-Tags, or other conventions they effectively extend the behavior of HTML, turning it into a kind of magic ink. Favorite examples of mine include Twitter Bootstrap, Wowhead Tooltips, and my own Instapoppin.

The advantages of a markup API over a JavaScript API are numerous:

  • They mean that an author only needs to know HTML, whose syntax is very easy to learn, rather than JavaScript, whose syntax is comparatively difficult to learn.
  • Because the API is in HTML rather than JavaScript, it’s declarative rather than imperative. This makes it much easier for development tools to intuit what a user is trying to do—by virtue of a user specifying what they want rather than how to do it. And when a development tool has a clearer idea of what the user wants, it can offer more useful context-sensitive help or error messaging.
  • Because of HTML’s simple and declarative structure, it’s easy for tools to modify hand-written HTML, especially with a library like Slowparse, which helps ensure that whitespace and other formatting is preserved. Doing the same with JavaScript, while possible with libraries like esprima, can be difficult because the language is so complex and dynamic.
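As a rough illustration of the declarative side of this, here is how a library might read its configuration out of a markup API. The data-toggle/data-placement convention is borrowed loosely from Bootstrap’s style, and the parsing is deliberately simplified (a real library would walk the DOM rather than scan strings):

```javascript
// Hypothetical sketch of a markup API consumer: declarative data
// attributes on an element are collected into an options object that
// a library would then use to attach behavior.
function parseMarkupAPI(tag) {
  const attrs = {};
  const re = /data-([\w-]+)="([^"]*)"/g; // match each data-* attribute
  let m;
  while ((m = re.exec(tag)) !== null) {
    attrs[m[1]] = m[2];
  }
  return attrs;
}

const button = '<button data-toggle="tooltip" data-placement="top">Help</button>';
const opts = parseMarkupAPI(button);
// opts.toggle === "tooltip", opts.placement === "top";
// a library would now do something like: new Tooltip(element, opts)
```

The author of that button never wrote a line of JavaScript; they declared what they wanted, and the library figured out how.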

These advantages make it possible to create GUI affordances atop hand-coded HTML that make it much easier to write. As an example of this, I hacked up a prototype slideshow demo and a physics demo in July of last year. Dragging an element with the class thimble-movable in the preview pane changes (or adds) CSS absolute positioning properties in the source code pane in real-time, and holding down the shift key modifies width and height. This allows users to size and position elements in a way that even a professional developer would find far more preferable to the usual “guess a number and see how it looks” method. Yet this mechanism still places primacy on the original source code; the GUI is simply a humane interface to change it.

This is the reverse of most authoring tools with an “export to HTML” feature, whereby an opaque internal data model is compiled into a blob of HTML, CSS, and JavaScript that can’t be re-imported into the authoring tool. Pedagogically, this is unfortunate because it means that there’s a high cost to ever leaving the authoring tool—effectively making the authoring tool its own kind of “walled garden”. Such applications could greatly facilitate the learning of HTML and CSS by defining a markup API for their content and allowing end-users to effortlessly switch between hand-coding HTML/CSS and using a graphical user interface that does it for them.

Building Experiences That Work Like The Web

Much has been said about the greatness of the Web, yet most websites don’t actually work like the Web does. And some experiences that aren’t even on the Web can still embody its spirit better than the average site.

Here are three webbish characteristics that I want to see in every site I use, and which I try my best to implement in anything I build.

  • “View Source” for every piece of user-generated content. Many sites that support user comments allow users to use some kind of markup language to format their responses. Flickr allows some HTML with shortcuts for embedding other photos, user avatars, and photo sets; Github permits a delicious smorgasbord of HTML and Markdown.

    The more powerful a site’s language for content creation, the more likely it is that one user will see another’s content and ask, “how did they do that?”. If sites like Flickr and Github added a tiny “view source” button next to every comment, it would become much easier for users to create great things and learn from one another.

    I should note that by “source” I don’t necessarily mean plain-text source code: content created by Popcorn Maker, for instance, supports a non-textual view-source by making it easy for any user to transition from viewing a video to deconstructing and remixing it.

  • Outbound Linkability. Every piece of user-generated content should be capable of “pointing at” other things in the world, preferably in a variety of ways that support multiple modes of expression. For instance, a commenting system should at the very least make it trivially easy to insert a clickable hyperlink into a comment; one step better is to allow a user to link particular words to a URL, as with the <a> tag in HTML. Even better is to allow users to embed the content directly into their own content, as with the <img> and <iframe> tags.

  • Inbound Linkability. Conversely, any piece of user-generated content should be capable of being “pointed at” from anywhere else in the world. At the very least, this means permalinks for every piece of content, such as a user comment. Even better is making every piece of content embeddable, so that other places in the world can frame your content in different contexts.

As far as I know, the primary reason most sites don’t implement some of these features is due to security concerns. For example, a naïve implementation of outbound linkability would leave itself open to link farming, while allowing anyone to embed any page on your site in an <iframe> could make you vulnerable to clickjacking. Most sites “play it safe” by simply disallowing such things; while this is perfectly understandable, it is also unfortunate, as they disinherit much of what makes the Web such a generative medium.

I’ve learned a lot about how to mitigate some of these attacks while working through the security model for Thimble, and I’m beginning to think that it might be useful to document some of this thinking so it’s easier for people to create things that work more like the Web. If you think this is a good (or bad) idea, feel free to tweet @toolness.