Storything Interactive Prototype

Last week I merged the Webmaking Tutorial Prototype with the Webmaking 101 for Journalists prototype we made during our three-day sprint in NYC in February.

The result, which is code-named Storything, is currently hosted online. Give it a try!

The design for this prototype is based on Jess Klein’s instructional overlay mockups. A separate two-pane editor for example snippets is included in the movie frame; my hope here is that by setting the tutorial movies in an actual editing environment, users will obtain a better understanding of how to use our tool. This is further aided by the ability for the tutorial movies to highlight parts of the user interface outside the movie frame.

At present, the prototype awards badges for a few simple things, like creating your first paragraph and header elements. These badges aren’t necessary to advance the tutorial, however; this is partly because we’d like the tutorial to be approached in a non-linear way, and also because the assessment algorithms for the badges likely can’t capture every possible edge case in user input.

The prototype’s source code can be forked at toolness/storything on GitHub.

Webmaker Tutorial Prototyping

Recently I’ve been playing around with creating interactive tutorials that teach people how to create things on the Web.

Check out this prototype. At the end of a movie-like tutorial, you’ll be given a challenge to write your first bit of HTML. At any time, you can use the scrubber at the bottom-right to review any part of the tutorial; anything you’ve typed so far in the challenge is undone while you’re scrubbing, and is automatically re-applied once you’re back at the challenge.
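
That save-and-restore behavior is simple to model. Here’s a minimal sketch of the idea in JavaScript; this is my own illustration of the mechanism, not the prototype’s actual code:

```javascript
// Hypothetical sketch of the challenge editor's scrubbing behavior:
// the user's typed work is set aside while the tutorial is scrubbed,
// and re-applied once playback returns to the challenge.
function ChallengeEditor() {
  this.content = "";   // what the user has typed so far
  this.saved = null;   // stashed copy while scrubbing
}

ChallengeEditor.prototype.type = function(text) {
  this.content += text;
};

ChallengeEditor.prototype.beginScrub = function() {
  if (this.saved === null) {
    this.saved = this.content; // remember the user's work...
    this.content = "";         // ...and undo it during playback
  }
};

ChallengeEditor.prototype.endScrub = function() {
  if (this.saved !== null) {
    this.content = this.saved; // re-apply the user's work
    this.saved = null;
  }
};
```

The real prototype obviously has to coordinate this with the movie timeline, but the core state machine is no more complicated than the above.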

This prototype is an evolution of an idea that Jess Klein wrote about regarding instructional overlays for our webmaking tools.

How It Works

The tutorial uses Popcorn to construct a “movie” that automates the user interface of a two-paned HTML editor with an instructional overlay. I made an ad hoc mini-framework called tutorial.js to make it easy to script these sorts of experiences; for example, here’s what the beginning of the prototype’s tutorial looks like:

  .dialogue("Hi! I am Mr. Love Bomb and will teach you how to be a webmaker now.", 0)
  .typechars("you're really cool.")
  .dialogue("Check it out, this is HTML source code—the language of the Web.", 0)

The dialogue() method just makes the tutorial avatar—in this case, Mr. Lovebomb—say something. typechars() types some characters into the HTML editor, and spotlight() draws the user’s attention to part of the page. Each action is queued to occur serially, in a style inspired by the chaining API of soda.
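
Such a chaining API is easy to build: each method pushes an action onto a queue and returns `this`, and the queue runs the actions one at a time. The sketch below is my own guess at the pattern, not the actual tutorial.js implementation; only the method names mirror the post:

```javascript
// Guessed sketch of a serial-chaining mini-framework like tutorial.js.
function Tutorial() {
  this.queue = [];
}

// Queue an action; returning `this` is what makes chaining work.
Tutorial.prototype.action = function(fn) {
  this.queue.push(fn);
  return this;
};

Tutorial.prototype.dialogue = function(text, pause) {
  // (pause is ignored in this sketch)
  return this.action(function(done) {
    // The real prototype would animate the avatar speaking here.
    console.log("avatar: " + text);
    done();
  });
};

Tutorial.prototype.typechars = function(chars) {
  return this.action(function(done) {
    // The real prototype would type into the HTML editor here.
    console.log("typing: " + chars);
    done();
  });
};

// Run the queued actions serially; each calls done() to advance.
Tutorial.prototype.run = function() {
  var queue = this.queue.slice();
  (function next() {
    if (queue.length)
      queue.shift()(next);
  })();
};
```

With this in place, `new Tutorial().dialogue("Hi!", 0).typechars("you're really cool.").run()` plays both actions in order; a real version would make each action asynchronous, calling `done()` only when its animation finishes.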

The full tutorial script is at tutorial-script.js, and the full tutorial source is available on GitHub.

Hacking The Web With Interactive Stories

I recently made The Parable of The Hackasaurus, which is a game-like attempt to make web-hacking easy to learn through a series of simple puzzles in the context of a story.

The parable is really more of a proof-of-concept that combines a bunch of different ideas than an actual attempt at interactive narrative, though. The puzzles don’t actually have anything to do with the story, for instance. But I wanted an excuse to do something fun with the vibrant art that Jessica Klein has made for the project, while also exploring possibilities for the Hack This Game sprint and giving self-directed learners a path to understanding how the Hackasaurus tools work.

If you know HTML and CSS, you’re welcome to remix this and make your own “web hacking puzzle” where the goggles are the controller. Just fork the repository on GitHub and modify index.html and bugs.js as you see fit. Almost all the puzzles, achievements, and hints are data-driven and explained in documentation, so you don’t actually need to know much JavaScript.

For more information on my motivations in creating the parable, check out the experiment’s README. Jess is also thinking about making an interactive comic that teaches web-hacking, which I’m looking forward to.

The Challenges of Developing Offline Web Apps

As I mentioned at the end of my last post, there are a lot of usability problems that make writing an offline web app difficult.

When writing a “native” client-side app using technologies like Microsoft .NET or Apple’s Cocoa framework, it’s assumed that everything your program is doing, and everything it needs, is already installed on the local device. Anything not on the local device needs to be explicitly fetched over the network. Any app is therefore “offline” by default, and it’s very easy to tell, both when reading and writing code, when it needs to access the network.

Unsurprisingly, this makes it trivially easy to write an offline native app. One’s very first app works offline, unless there’s something related to the app’s purpose for which network access is obviously required.

On the other hand, offline web apps build on the already confusing, frequently opaque world of resource caching. They also break developer expectations in various ways that I’ll explain shortly. Furthermore, no browsers except Chrome provide tools for introspecting and understanding the behavior of an offline application, which adds to developer frustration.

Problem One: Simple Things Aren’t Simple.

Let’s say you’ve made a simple web-based currency converter, which is fully self-contained in a single HTML file and never requests any resources from the network. A browser won’t let it be accessed while the user is offline unless you do a few extra things:

  1. Create a manifest which enumerates every resource that needs to be available to the app.

  2. Make sure the manifest is served with a MIME type of text/cache-manifest, which is done differently depending on the web server software you’re using.

    As I mention in my post On The Webbyness Of An Installable Web App, this single requirement vastly expands the scope of the problem by requiring developers to understand the HTTP protocol layer and their web server software. Furthermore, Chrome’s Web Inspector appears to be the only tool that provides helpful error feedback when the manifest is served with the wrong type—developers using Safari, Opera, or Firefox will likely be confused as their application silently fails to be cached.

  3. Add a manifest attribute to your HTML file’s <html> element that points to the manifest.

Doing all this will allow you to deploy your simple currency converter app for offline use; it’s clearly much harder than writing an offline native app. But what’s worse is that adding offline support complicates further development of your app in a number of ways.
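
For reference, here’s what a minimal cache manifest for such an app might look like. The file names are made up for illustration; the version comment and NETWORK wildcard relate to the problems described below:

```
CACHE MANIFEST
# v1 (bump this comment to push new versions of the files below to users)

CACHE:
index.html
style.css
app.js

NETWORK:
# allow access to everything not listed above while online
*
```

You would then point your page at it with something like <html manifest="offline.appcache"> (the filename is arbitrary) and configure your server to send that file as text/cache-manifest.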

Problem Two: Your Development Workflow Is Now Broken.

Let’s say you make a simple typo fix in your HTML file. The fix won’t actually propagate unless you also alter your app’s manifest file in some trivial way; even then, because the cache manifest is checked after the cached version is loaded, the page needs to be reloaded an additional time to activate the new version.

Needless to say, this complicates the development workflow for the app, as it impacts the edit-test cycle. Workarounds exist, of course; for All My Etherpads, I’ve added an --ignore-cache-manifests option to the bundled Python development server that allows the server to pretend that the manifest doesn’t exist, so that the app’s offline cache is obsoleted and code changes propagate instantly. Even though such workarounds make it possible to develop as one normally does, developers still must remember to make a trivial change to their manifest file when they re-deploy, or else their existing users won’t see their changes.

Problem Three: Your App Can’t Access The Network When Online.

After creating your first completely offline app, you might think that adding a manifest file would just let you specify what files your app needs for offline use, and that accessing other resources would obey the normal rules of the web: they’d be accessible if the user happened to be online, and they’d fail to load if the user was offline.

But this isn’t the case at all. Simply having a manifest file changes the rules by which a page accesses any network resource, whether the user’s browser is online or offline. Unless a manifest explicitly lists a resource in one of its sections or contains an asterisk in its NETWORK section, attempting to access the resource from an XMLHttpRequest, a src attribute in an HTML tag, or anything else will fail, even if the user is online. No browsers currently provide very good feedback about this, either, leaving developers befuddled.

Problem Four: Cross-Browser Support Is Hard.

Not all browsers support the full application cache standard. Some browsers, for instance, don’t support the FALLBACK manifest category, which will cause apps on those browsers to behave differently.

Problem Five: There’s Not Much Tooling For This.

As I mentioned at the beginning of the post, Chrome is the only major browser that provides any information on the state of the application cache. Developers on other browsers need to at least add some code that monitors the state of the application cache and logs the many kinds of events it can emit, but even that information doesn’t tell you everything you need to know.
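
In the meantime, wiring up a logger for every application cache event is straightforward. window.applicationCache and its events are part of the standard; the small helper below is my own sketch:

```javascript
// The full set of events the application cache can emit.
var APPCACHE_EVENTS = ["checking", "noupdate", "downloading", "progress",
                       "cached", "updateready", "obsolete", "error"];

// Attach a logger to every appcache event so you can watch the cache's
// state transitions in browsers that lack appcache tooling.
function attachAppCacheLogger(appCache, log) {
  APPCACHE_EVENTS.forEach(function(name) {
    appCache.addEventListener(name, function() {
      log("appcache event: " + name);
    }, false);
  });
}

// In a page: attachAppCacheLogger(window.applicationCache, console.log);
```

Even this only tells you that, say, an error occurred; it won’t tell you which resource caused it, which is exactly the kind of detail real tooling should provide.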

This is probably the most important problem, since it exacerbates all the others: if an offline app isn’t behaving as expected, a developer without the right tools can’t tell whether it’s due to one of the many problems explained above, or something else entirely.

The Bigger Picture

These issues are symptomatic of a larger problem that most in-browser development tools face, which is that they generally provide little information that tells you exactly why a resource couldn’t be accessed. Aside from the complexities offline apps introduce, cross-origin resource sharing and the single-origin limitations put upon fonts and video make resource access in HTML5 incredibly complex. Yet most of our tools haven’t yet adapted to the development problems that arise when we use the new platform features.

The Good News

One of the best things about offline web apps, and part of the reason they’re more complex than their native counterparts, is that automatic updating is simply part of the way they work; you don’t have to put them on an online marketplace to benefit from the functionality, nor do you have to research and invest in third-party libraries that incessantly nag the user to upgrade. This is awesome.

As for the challenges facing developer ergonomics, help is on the way. Mikeal Rogers and Max Ogden are currently working on a project to make offline web apps easier to build. Folks at Mozilla believe that support for offline apps is an important use case and want to make the user and developer experience better. Every major browser except Internet Explorer already seems to support the feature, which is also great. And I’m already thinking of a few ways to ease these hassles myself.

In the long term, though, as I explained in my previous post, I think that the user perception of offline web apps—and the associated usability problems with even using a browser offline—are at least as important as the developer-facing challenges. Fortunately, the Open Web has some really awesome advocates, and I’m confident we can solve all of these problems once we put our heads together.

Adventures in Code Review and Pair Programming

A key component of Mozilla’s development process is code review, in which a trusted expert reviews the changes made to a piece of software to fix a bug or add a feature. This is a great idea for a number of reasons:

  1. It helps increase the project’s bus factor, or the number of people who understand how the software works and why decisions were made. If any trusted member of the community could simply push changes without requiring another person to be aware of them and understand them, then if that person were hit by a bus or truck, some understanding of the software would be lost, especially rationales that can only be uncovered by the conversation that occurs during code review.
  2. There are many qualitative aspects of code that a computer interpreting it can’t understand. For example, if a developer writes some code that talks to another component, having someone well-versed in that component read the changes can help correct incorrect implicit assumptions about the communication before flaws propagate further downstream in the development process, where they will be more costly to detect and fix.

However, code review can also have its own costs. Because it requires two people instead of one, by necessity any changes one person wants to make are blocked by the other. Most code review at Mozilla is done asynchronously, which means that one developer writes and uploads a patch that a reviewer reads at their convenience. Because reviewing isn’t much fun and reviewers have lots of other things to do, the back-and-forth process of reviewing and correcting a patch can take several days, weeks, or even months—and that’s not even including the implicit task of finding a reviewer in the first place. All this can sometimes make the pace of development slow to a crawl: imagine a complex conversation occurring for one hour every other day over a period of weeks, requiring each participant to remember the state of the conversation every time their focus came back to it.

Another problem with asynchronous code review is that changes which are considered relatively low-priority, such as paper cuts, might be constantly stuck in a state of needing to be reviewed or revised because they’re constantly pushed to the bottom of someone’s queue as more important changes are discovered and prioritized above them. Even worse, developers may be reluctant to voluntarily fix such smaller bugs because they have little confidence they’ll ever get reviewed.

Enter Pair Programming

One potential solution to this is pair programming, whereby the developer writes code while the reviewer looks over their shoulder, effectively serving as a “navigator” who catches potential mistakes before they even occur.

It sounded like something worth trying out, so I ran the idea by Myk Melez, who agreed. A little while later my colleague Brian Warner and I started fixing a Jetpack SDK bug together using this technique.

This proceeded quite smoothly. In particular, I noticed a few unintended consequences of pair programming:

  1. Coding can be a rather solitary activity, and when working on a major change, it can get lonely. Pair programming makes the activity social, which makes it more fun for me personally.
  2. There’s a tremendous amount of knowledge the coder and reviewer can learn from one another in a face-to-face or voice-chat-based interaction that is often too burdensome to communicate via written text. For Brian and me, at least, I definitely felt that both of us learned more from pair programming than we would have if we’d stuck to the standard process of writing to each other.
  3. Both Brian and I had an implicit social contract of staying focused on the task at hand, or something related to it, which allowed me to focus much better than I do on my own. When I’m by myself, it’s surprisingly easy to get sidetracked by new emails or IRC messages, but because I didn’t want to waste Brian’s time, I never digressed while pair programming. Sometimes we’d stray off the main path to discuss the semantic intricacies of JavaScript or Python, but such excursions generally supported our goal rather than distracting us from it.
  4. Because I actually enjoyed the activity, I was much more likely to get more out of it: ask sharper questions, provide better answers, and write code with fewer flaws.
  5. Low-priority bugs were fixed rapidly because of the super-fast synchronous communication. One morning Brian and I took a few minutes to “warm up” and were able to make the Jetpack SDK significantly more usable by getting rid of paper cuts that might have taken months to fix through traditional code review.

Being able to simply push changes to the codebase after pair programming them made development about as much fun as hacking on my personal projects. At a very high level, in fact, asynchronous code review feels to me more like asking for permission, while pair programming feels more like simply having a conversation. Or, in pedagogical terms, the former feels like taking a test, while the latter feels like peer-based learning.

In short, pair programming required no context-switching and felt faster, more effective, and more fun than what I was used to.


Because pair programming introduces a strong social aspect into the development process, a lot of psychological factors can come into play. For instance, if the navigator is overbearing, like a sort of “back-seat driver”, then I suspect pair programming could quickly become less efficient, less effective, and less enjoyable than alternatives.

On the opposite end of the spectrum, I think it’s also important to ensure that the person at the keyboard knows at most as much about the code they’re changing as the navigator does. Otherwise, it’s easy for the driver to simply go off on their own way and leave the navigator in the dust. Keeping to the principle that the navigator was effectively the reviewer made it easy for Brian and me to ensure this: whenever we had to change code that Brian understood better than I did, I’d be the driver, and vice versa. This meant more peer-based learning would occur, improving the bus factor.

Another potential problem with pair programming is that it can become harder to record the review process for posterity. Unless the dialogue occurring between the two programmers is recorded and made available to the public, part of the development process effectively becomes closed. That said, though, Brian and I tried to remain vigilant about other people reading our code in the future: when we spent more than a few minutes discussing how to do something, we’d take a moment to write a comment explaining why we picked the solution we did.

It should also be remembered that alternatives have their own share of risks: for instance, I’ve noticed that when I review code asynchronously, I have a tendency to only skim over it and comment on the surface-level details. In contrast, while pair programming, it’s much easier to ask intricate questions about how or why something works because the cost of doing so is much lower.


Pair programming need not be at odds with code review. In fact, a fair amount of the “pair programming” Brian and I did actually consisted of one of us writing the code alone and later walking the other person through the changes in a kind of synchronous code review, fixing any problems on-the-fly. In other words, simply turning the standard asynchronous code review process into a synchronous one looks a lot like pair programming.

In any case, I’m glad that all of these development processes are available at Mozilla, since different ones seem more appropriate depending on the personalities and technical challenges involved. None of them will be effective, however, unless both participants have a vested interest in making them work.

Prelude To Barcelona

I recently wrote about a talk I gave at the Mozilla Summit on What Mozilla Can Learn From 826 National. Shortly after my presentation, Mark Surman dared me to teach a class on Web hacking for non-techies at the Peer 2 Peer University School of Webcraft, which got me thinking about how I’d teach a class in such a distance-learning environment.

My favorite kind of teaching is face-to-face, one-on-one mentoring. I think it works well because teacher and student have easy access to each other’s “state”: each can see what the other is working on, and infer how they’re feeling from body language and other non-verbal cues. This makes it much easier for a teacher to gain insight into a student’s thought processes.

So at the very least, I needed a tool allowing HTML to be written and rendered collaboratively in real-time. I also wanted to prototype a solution as fast as possible, so I could quickly iterate, learn from my mistakes, and respond to the needs of my students.


The simplest and most effective collaborative environment I knew of at the time was called Etherpad, which allows non-technical users to easily collaborate on simple documents, or pads, in real-time. No “saving” is necessary because every keystroke is remembered and the whole pad is undoable back to the moment of its creation.

Etherpad lets you create a pad by just inventing your own URL. For instance, if you want to create a pad called hai2u on Mozilla’s Etherpad server, you can just visit the corresponding URL and then share the link with friends to work on it together in real-time.

The fundamental fabric of the Web—HTML, JavaScript, and CSS—is all plain text, so Etherpad had enough functionality to serve as a source code editor. The only other thing I needed was a way to get the user’s browser to render and execute that code, instead of just treating it as plain text.

So I bought a domain and set it up with a few lines of Python to essentially serve as an HTML “viewer” for pads on Mozilla’s Etherpad server. The aforementioned hai2u pad, for instance, could now be viewed as a rendered web page.

It was far from ideal; I’d actually tried setting up Etherpad on my own server so I could modify its user interface to make this process much simpler, but ran into technical problems as Etherpad required more memory than my server had. Still, the simple hack would allow students to experiment with HTML in one browser window and view it in another, collaborate or receive assistance in real-time, and instantly share their work with friends, which was good enough.

Now all I needed were some students.

Guinea Pigs

The awesome Mozilla community engagement team has bi-weekly meetings to discuss things like local community events and the latest fundraising, download, and education campaigns. I noticed, though, that they used Keynote to create presentations that were uploaded to SlideShare and only viewable with the Flash plugin. Besides requiring proprietary technologies, it was just a crummy user experience.

Aside from that, our engagement team isn’t made up of techies, which made them excellent subjects for my pedagogical experiment. If the Web is so great because it’s open and hackable and remixable, then surely they wouldn’t mind doing their slides in HTML, right? I had my own doubts about this theory, but I wanted to try it out.

I hastily grabbed screenshots of their Keynote template, combined them with the hacked-up JavaScript from my Summit presentation, and cobbled together a makeshift template. Then I scheduled a meeting with the engagement team.

Teaching them how to write HTML at the meeting was easy—they’d all had at least a little experience with it before. Also interesting were my answers to their questions on how to do specific things: most of the time, I just told them to find another slide that did what they wanted, copy its code, paste it into a new slide, and modify it as necessary. I’ve read about the power of the browser’s View Source feature, but actually seeing it used by non-engineers was inspiring. And perhaps it was just because they were part of Mozilla, but it was great to see them having fun hacking on HTML.

Before leaving the meeting, my coworker Mary Colvig wrote up instructions for the rest of the engagement team to use. That evening, some people who hadn’t been able to make the meeting tried things out for themselves.

When I looked at the first slide of my template the next day, it had changed:

This is why I like making things for people.


The engagement team used my hacked-up template for their July 28 meeting, but after that I worked with Sean Martell to make one that didn’t suck.

Paul Rouget and the European engagement team later used the tool and the slides to do some amazing things. Because we now had an easy way to play with the very fabric of the Web, we could start using all its awesome power in our own meetings. In particular, Paul hooked up WebSockets, a chat widget, and open video to create a fully synchronized experience that allowed people to see the meeting’s video and the current HTML slide, and chat with other participants, all at the same time.


Mark Surman has recently put me in touch with Ari Bader-Natal, the creator of a beautiful Etherpad-powered collaboration and learning environment for processing.js. I’m hoping to work with him to vastly improve my own tools.

The amazing Anant Narayanan also joined us at Mozilla at the beginning of October, and I’m working with him to push forward experimentation with webcam and microphone input, so we can make our meetings even more participatory while also moving the Web platform forward.

I’m also looking forward to working with Ingrid Erickson and Taylor Bayless at the Drumbeat Festival to build something that makes it easy and fun for newcomers to learn and hack together on the Open Web—perhaps it will build upon these tools or the Open Web Challenges that I made for my Design Challenge Tutorials, or maybe it will be something completely different.

Either way, I’m excited.

Reviewer Dashboards

As I mentioned in my post on The Social Constraints of Bettering The Web, finding a code reviewer can be difficult in Mozilla projects. At least, it’s definitely the case with the Jetpack SDK, which I’m actively involved in as both a reviewer and contributor.

Last week, on casual observation, it seemed like Myk Melez had been getting the lion’s share of code review demands placed on him. While I had some theories on why this might be the case, I also realized that I had no idea what the big picture was as far as code reviews were concerned. As a result, I often had no idea whom to assign a review to when I could think of more than one person who’d be qualified to do it.

So I put together this prototype dashboard for the Jetpack SDK project:

It’s really simple, but it’s a useful enough start that Aza took the code and adapted it to serve as a reviewer dashboard for the awesome Firefox Panorama project.

As with my Bugzilla Dashboard, this is just an HTML page that uses Gerv’s glorious Bugzilla REST API. There is no Django or Rails or other server-side code, so you can fork it on GitHub and modify it without setting up a server.

There are some other things that could be useful here:

  • I have a cross-site XMLHttpRequest service up on my server that lets anyone grab the rendered HTML content of Mozilla wiki pages from any web page. This allows reviewer dashboards to use the Mozilla wiki as a backend for some information, e.g. for storing information about who the module owners/peers are instead of hard-coding it into the app.
  • As Patrick Walton mentioned to me, it’d be great to have basic status/availability information about reviewers. In the case of the Jetpack SDK, for example, Myk is currently on vacation, and Dietrich and Drew are both heads-down on Firefox 4 work, so they shouldn’t be assigned any reviews. It’d be nice to have that kind of information up there too, perhaps pulled from Benjamin Smedberg’s weekly updates app.

I think this sort of application could also be used as a “leaderboard” to incentivize code reviews. A few weeks ago, Shawn Wilsher issued the following tweet:

I have this problem where I measure how much work I do based on how much code I write. I haven’t been coding much and feel largely useless.

I assume he wrote this because he’d been reviewing tons of code. If that’s the case, a dashboard like this could also be used to help reviewers feel awesome about their work by being publicly recognized for it, while also providing more transparency and accountability about the review process as a whole.


Over the past few years, I’ve made a number of little Web applications that are actually just HTML pages.

Building things this way is really fun and really simple. It’s easy to understand and remix because there’s no custom server-side infrastructure to complicate matters. In some ways, it’s just like writing my first Web pages in the 1990’s, only now I can use JavaScript for more than just image rollovers.

I’ve always run into a roadblock, though, when I’ve wanted to make one of my apps social. There aren’t any Ajax APIs I know of that can be used to share information in a way that isn’t vulnerable to spammers and other kinds of misuse, so I made one.

Twitblob is a simple cross-origin RESTful API that gives all Twitter users a small amount of public JSON blob storage on a host server. It also handles three-legged Twitter OAuth key exchange, allowing web pages to add social functionality with minimum fuss.

In a lot of ways, Twitter is a spam-resistant identity provider: its staff deals with the incredible hassle of keeping out malicious users, which makes it possible for me to offer 20k of storage to every Twitter user across any page on the Web without having to worry too much about misuse. So far I’ve used my Twitblob endpoint for a simple voting booth for my local Awesome Foundation chapter, as well as a prototype scheduling application for the 2010 Mozilla Summit. It’s been nice for adding social functionality to a page in a pinch.

Participatory, Scalable, Transparent Competitions

I’ve been involved in the judging pipeline for three competitions now. Today, I judged for an inspiring competition called Node Knockout, held by Joyent and Fortnight Labs.

The first two competitions I participated in didn’t scale. I wasn’t even a judge for the first one—we had a tiny handful of celebrity judges who couldn’t possibly review all of the submissions, so some colleagues and I furiously attempted to cull the list down for them. It wasn’t fun, and there wasn’t any way for the public to participate in the judging process. It also wasn’t really transparent—I was part of the process, yet everything I did was hidden from the public, including any valuable feedback I might have been able to give the entrants.

In the second competition, I was a judge for one of the rounds, but there were so many entrants that I simply didn’t have time to carefully examine each one. It was exhausting, and I didn’t even feel like I was able to give each entry the time it deserved.

In stark contrast, judging for Node Knockout was an amazing experience on three levels.

Knockout judging was participatory. Instead of a tiny handful of judges, Joyent and Fortnight Labs enlisted a small army of industry experts, including three of my coworkers. The public could participate, too: ultimately, about half of an entrant’s “rating” was determined by the public and the other half by the judges. In some sense, the judges were just a pool of “trusted voters” whose votes were weighted more heavily than everyone else’s.

Knockout judging scaled. Since there were around a hundred entries total, the contest runners only required each judge to evaluate 6 or 7 assigned entries. This allowed me to spend lots of time carefully examining each one, provide useful feedback, and come out of the evaluation process feeling excited about the competition instead of exhausted.

Knockout judging was transparent. Everyone’s comments and ratings were completely public, effectively constituting a body of valuable feedback the entrants could use if they wanted to continue working on their projects. Every entrant’s team had a page on the Knockout site that listed all the comments and ratings submitted so far; it read a lot like the comments on a blog post.

In short, Knockout wasn’t just a competition about the Web; it was a competition held in the spirit of the Web, too. Thanks to Joyent and Fortnight Labs for holding such a fun event!

The Social Constraints of Bettering The Web, Part I

I’ve recently been proud and inspired to see two new features land in the latest Firefox 4 betas: Web developers can now access the raw audio data in <audio> and <video> elements, and Firefox Panorama helps users manage their tabs.

In his excellent post Experiments with audio, conclusion, Dave Humphrey mentions the following Tweet from Joe Hewitt:

Bottom line: we can currently only move as fast as employees of browser makers can go, and our imagination is limited by theirs. @joehewitt

Dave goes on to say that not one person who did the audio work in Firefox 4 was an employee of the Mozilla corporation. He also mentions the following:

As much as Mozilla made it possible for me to experiment, they also made sure that what got accepted was of the highest quality. I haven’t blogged about audio much over the past three months, mostly because we’ve been too busy getting the patch fixed up based on reviews. Before it could land we had to think about testing, security, JS performance, DOM manipulations, memory allocation, etc. To get this landed we needed lots of advice from various people, who have been generous with their time and knowledge.

As far as I can tell from my discussions with individuals in the trenches at Mountain View and throughout the Mozilla community, this process of getting advice and approval from various people is the biggest bottleneck to innovation in Firefox. Some of my talented coworkers created an excellent feature called account manager that won’t be making its way into Firefox 4, because its patches couldn’t be reviewed in time for Firefox 4’s feature freeze. The core work was done, but the team couldn’t find the human resources to make sure it passed Firefox’s technical standards.

Other colleagues of mine are working on new built-in developer tools for Firefox, but as I understand it, there are only one or two people in the world who can review their patches and approve them for landing in Firefox’s code repository.

Within Mozilla, I see my coworkers vie for the attention of this tiny handful of gatekeepers. People in charge need convincing; the clever social engineer has a lot of power when it comes to navigating this landscape. I don’t know how much of this happens publicly via IRC, newsgroups, demos, blogs, bugs, wikis, or other backwaters of the Mozilla megaverse, as opposed to private conversations. I’m not sure how much the distinction matters, really; it’s just unfortunate that after working at Mozilla for two and a half years, I still have no idea how high-level decisions are made, who the gatekeepers are, and how they are to be convinced.

Yet I’m also sure that convincing the people who hold the keys to change a product used by hundreds of millions of people is non-trivial at any organization, and probably with good reason.

At a broad level, Hewitt’s tweet was about the notion that there exists some kind of bottleneck between people’s imagination and the ability of browser makers to keep up with it. Dave Humphrey pointed out that the bottleneck has nothing to do with paid employment at Mozilla, since anyone can contribute. My observations indicate that the biggest choke-point at Mozilla right now is a social problem that Clay Shirky might call fame: an imbalance between inbound attention and outbound attention. Lots of people want to get great features into Firefox, but only a very limited set of people are able to pay attention to them and get those features out the door—much less determine which features are actually appropriate for the hundreds of millions of people the product is broadcast to.

That’s all I’ve got for now; I’ll write about some possible solutions to this in Part II.