On The Webbyness of an Installable Web App

I’ve heard some talk lately, primarily from Henri Sivonen, regarding whether Google’s notion of an Installable Web App is “webby”.

I am not sure exactly what webby means, but if I had to guess, it would involve the kinds of qualities that Mitchell Baker and Mark Surman believe make the web better: more transparent, participatory, decentralized, and hackable.

Though I’m not fully sold on these newfangled apps, I can think of three ways that they could make the web better.

The Usability of Files

At first glance, one might say that putting some web pages and scripts into a ZIP file marginally reduces transparency, as one now has to unpack it to see the original code.

However, this simple mechanism addresses a significant barrier to web development that isn’t often captured by web standards: casual developers intuitively understand files because—for better or for worse—they’re something that’s an integral part of the way one interacts with their computer from day to day. The notion of creating some files on one’s computer and making them available to the rest of the internet is reasonably simple and profoundly powerful. The operative metaphor of a ZIP file—putting a bunch of files into a little box that can be delivered as a single file and unpacked later—isn’t hard to understand once one knows what a file is. While it’s a bit of a hassle to put things into the box and unpack them, most operating systems already offer tools to make this easier for ordinary users.

Introduce the notion of HTTP Response Headers, however, and barriers start to appear: this is a completely different concept from files, it requires an understanding of the Hypertext Transfer Protocol to not be completely mysterious, and it requires learning the particularities of tools that one may not even be aware they’re using.

Dump some files onto a server and you don’t even need to care whether that server uses Apache, Microsoft Internet Information Services, Lighttpd, or something else. Add a requirement to a Web standard that an arbitrary file be served with a special MIME type via an HTTP header, however, and suddenly a casual Web developer has to become aware of an entirely new layer of technology that they were previously blissfully unaware of.

The reason I mention this is that there are at least two different web standard proposals I know of that involve the use of ZIP files to make it easier for casual developers to participate in the web and make it better. One of them is Alexander Limi’s Resource Packages specification, and the other is Google’s proposal for Installable Web Apps. Both of them take things that are currently arcane—configuring a web server to automatically serve files with compression over a keepalive connection, and serving an offline web application with a cache manifest of type text/cache-manifest, respectively—and make them easy and understandable for non-professionals.

With some browsers already capable of introspecting into ZIP files, the effectiveness of View Source—the enabler of transparency and hackability—need not be reduced. In fact, it could even be increased: if putting JavaScript into a compressed ZIP file reduces its size enough to make minification less of a necessity when it comes to delivering Web content quickly, then more web content will be delivered in a way that others can learn from and remix.

Untethered Applications

Google’s proposal for Web Apps actually makes the internet a more decentralized place, because it contains provisions for creating and sharing entirely serverless, untethered applications. Using the terminology of Jonathan Zittrain, this makes it easier for control to be transferred to the endpoint that users are (hopefully) in control of. That said, there are other proposals that could technically enable similar use cases, such as Brandon Sterne’s excellent Content Security Policy.

Webs of Trust, Not Hierarchies

Another area in which Installable Web Apps could decentralize the internet has to do with the field of trust. It’s currently very difficult to actually prove that a piece of Web content or functionality I created came from me, and wasn’t altered at some point by someone else. The only viable way to do this is via Secure HTTP, which requires asking a certificate authority to issue you a certificate. That this frequently involves paying them money, and that the system is susceptible to corruption, is beside the point. As Mark Surman mentions in a draft of Drumbeat’s mission statement:

Ultimately, our goal is a strong, safe open internet: an internet built and backed by a massive global community committed to the idea that everyone should all be able to freely create, innovate and express ideas online without asking permission from others. (Emphasis mine)

It should be possible to prove to other people that something came from you without having to ask permission from someone else, and in this respect, even though this mechanism is part of the Web, I would argue that it is profoundly un-webby. Google’s proposal for Installable Web Applications associates an application’s identity with a public key that doesn’t require a blessing from any kind of authority; all versions of the application are self-signed by the key, which makes it far easier to establish trust between a user and an application. The trust model is also more granular and secure, because it creates a trust relationship between the user and the particular application they’re using, rather than the server they’re connecting to—which often isn’t even under a web developer’s full control. It’s because of this that we’re using a similar mechanism in Jetpack; extending it to the entire Web would be very webby, not coincidentally because it establishes a foundation for what could eventually become a web of trust.


While I’m still on the fence about whether Google’s Installable Web Apps are the best solution for a better Web, I do think that they’re a step in the right direction. In particular, they address social issues and usability concerns that, if resolved, will make computing life more transparent, participatory, decentralized, and hackable for everyone.

Herdict-Firefox Integration and Better HTML Presentations

I recently wanted to create a short, two-and-a-half-minute “pitch” for the Herdict-Firefox integration prototype I’m working on with Jennifer Boriss, Laura Miyakawa, and Jeffrey Licht.

Here is the result. It turned out that the pitch itself was an experiment for me: after fiddling around with Screenflow and iMovie for a bit, I got frustrated with their limitations and decided to just use HTML to put together the presentation.

After writing out the script for the pitch, and recording my narration with Audacity, I saved the file as both Ogg Vorbis and MP3—different browsers support different formats—and set up a directory structure.

As with Mozilla: The Big Picture, I basically stuffed everything into the structure of an HTML page. The first two slides of the presentation, for instance, look something like this:

  <div id="slides">
    <div data-at="0.0">
      <a href="http://www.mozillalabs.com"><img
         id="logo" src="images/labs-logo.png"/></a>
      <h1>Firefox-Herdict Integration Pitch</h1>
    </div>
    <div data-at="4.0">
      <img src="images/server-not-found.png"/>
    </div>
  </div>

The data-at attribute is an example of an HTML5 custom data-* attribute; it records how many seconds into the audio the slide should be displayed. I marked up subtitles for the presentation in a similar way.

After that, I wrote some JavaScript that just attaches a timeupdate event listener to the presentation’s audio element and synchronizes the current slide and subtitle to its position. The result is something that looks and feels to an end-user like a YouTube video—one can even “scrub” the position slider to quickly rewind and fast-forward. However, I’d argue that this approach is actually superior to standard video in a number of ways:

  1. Slides can have any valid HTML content embedded in them: text can be copied and pasted, look and feel can be altered through CSS, and images can be hyperlinked to their original sources.
  2. It’s easier to eliminate compression artifacts without sacrificing bandwidth and download size. Text, for instance, is always super-crisp.
  3. Since everything uses HTML, CSS, and JavaScript, anyone can view-source or use a web inspector to investigate how things are put together; as I explain in The Open Web is Magic Ink, they can take it apart, see how it works, and put it back together in a different way. Doing such things with a pure bitmapped video representation wouldn’t be possible: you’d need the source “project files” for whatever program was used to compose the video, not to mention access to said program.
  4. Subtitles can be toggled on or off, and adding new languages isn’t hard.
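For what it’s worth, the synchronization logic described a few paragraphs up boils down to a tiny function. The sketch below is a hypothetical reconstruction—the name currentSlideIndex is mine, not the presentation’s actual code:

```javascript
// A hypothetical reconstruction of the slide-sync logic: given the
// audio's current time and the sorted start times taken from each
// slide's data-at attribute, return the index of the slide that
// should currently be visible.
function currentSlideIndex(currentTime, startTimes) {
  var index = 0;
  for (var i = 0; i < startTimes.length; i++) {
    if (startTimes[i] <= currentTime)
      index = i;
  }
  return index;
}

// In the browser, this would be wired up roughly like so:
//
//   var audio = document.querySelector("audio");
//   var slides = document.querySelectorAll("#slides > div");
//   var startTimes = Array.prototype.map.call(slides, function (el) {
//     return parseFloat(el.getAttribute("data-at"));
//   });
//   audio.addEventListener("timeupdate", function () {
//     var i = currentSlideIndex(audio.currentTime, startTimes);
//     // show slides[i], hide the rest...
//   });
```

The same function drives subtitle selection; only the list of start times differs.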

This approach has its downsides, too, of course: there wasn’t a really easy way for me to embed the presentation in this blog post, for instance, and it can’t be viewed at all on Internet Explorer, as far as I know.

Still, it was a fun experiment to try, and for this particular use case I actually found it easier to compose everything using Open Web technologies than with the proprietary tools at my disposal.

Please be sure to check out the actual presentation, too, as the stuff we’re doing with Herdict is way cool.

Kids And The Open Web

Every time I think about why I like the open web, I basically think of how well it fits with the way I learned to use and program computers as a kid: my first computer, an Atari 400, came with everything I needed to do programming, and I (or my parents) didn’t have to spend hundreds of dollars or sign an NDA to get a development tool.

My favorite technical book as a child was Creating Adventure Games On Your Computer, which contained plain BASIC code for games that you could play, augment, and make your own. A column in one of my favorite magazines, 3-2-1 Contact, featured the same kind of content.

All of this was easy enough for a child to grasp—often far easier, as Jef Raskin observed in The Humane Interface, than today’s development tools. But being able to use a tool that provided an incredibly low barrier to generativity is something that I value a lot about my childhood. It’s in part where a lot of the real passion and excitement for open source and the Open Web come from: people like me see in them the qualities that made them truly excited about computers as a kid. Qualities that we’re constantly in danger of losing today as the field becomes more professionalized and controlled.

So that got me thinking about Drumbeat again: what if promotional materials for the Open Web focused on how it makes lives better for children who are budding hackers? Lots of adults aren’t tech savvy, but they know that their kids are, and if we can prove that the Open Web is better for their kids, and that they can make their kids’ lives better by choosing a standards-compliant browser, maybe they will.

After playing around with this idea for a bit, I came up with this:

The photo on the page is taken from Flickr user .sick sad little world.’s The Taste of Ink. Feel free to get the source and remix!

Design Challenge Tutorials

Over the last two weeks, I gave two tutorials to our Design Challenge students.

The first was called Engineering Prototypes, and centered on the most challenging part of working on prototypes for me: the balance between expediency of implementation and robustness. Prototyping involves prioritizing the former over the latter, but it’s unwise to throw engineering principles out the window: for instance, a prototype that constantly crashes or runs slowly may not be usable enough to dogfood, and one whose implementation is poorly designed can be difficult to iterate and evolve. My tutorial attempts to present some of the factors one should take into account to produce prototypes that are both quick to implement and robust enough to dogfood.

The other session was called Prototyping with jQuery but it included a heavy dose of Firebug as well.

For the second session, I created a prototype of something that I’ve wanted to make for a while: Open Web Challenges.

These are essentially a series of interactive web-based problems that require “hacking the page” using real-world tools to solve. They’re inspired by a number of my favorite pedagogical dilemmas, such as the time someone in LambdaMOO made me program my way out of a paper bag; the inventive exercises from Graham Nelson’s Inform Designer’s Manual 4th Edition; the labs from Bryant and O’Hallaron’s Computer Systems: A Programmer’s Perspective; and the mathematical proofs from Carol Schumacher’s Chapter Zero. Obviously, being a one-day hack, these Open Web Challenges pale in comparison, but the prototype was fun to build and I’d like to continue creating more interactive exercises like this.

All in all, I thought the Design Challenge was an awesome opportunity for students around the world to learn more about open-source design and development, as well as a great way for Mozilla folks to get a chance to talk to students, teach, and obtain a better understanding of what we need to do to make the Web as a platform easier to learn. If you’re interested, I recommend checking out the rest of the tutorials at design-challenge.mozilla.com/spring09.

I’d also like to thank Pascal Finette for putting the Design Challenge together—it’s unquestionably a success and I’m looking forward to participating in more Mozilla Education projects in the future.

Automatic Bug Reporting for Firefox Extensions

We want to make Ubiquity awesome at reporting errors. In our original release, a transparent message with JavaScript exception information was displayed, which wasn’t very useful to the average user, and was downright annoying when dozens of exceptions were logged in the same instant.

At present, running a command that raises an error just results in that message being logged to the JS Error console, which very few people know how to access—so most people are left scratching their heads and wondering why their command is taking so long to run.

For the next release of Ubiquity, we’re going to be trying something more user-friendly: if a command encounters an error, a transparent message will be displayed telling the user that it didn’t work. The message will also recommend using the “report-bug” command to send information about the bug to the Ubiquity team. If the user decides to run this command, a page is opened that looks like this:

Aside from inviting the user to describe their problem, a lot of information is included about their system: what OS they’re using, what extensions and plugins they have installed, what recent exceptions were thrown, and so forth. We’re hoping this will lower the barrier to entry both for receiving and providing technical support, since most of the information needed to describe and investigate a problem is contained in a single link.

We don’t yet have an interface for browsing existing bugs, but we do have a display for viewing them. It looks pretty similar to the page for submitting a bug:

One interesting aspect of our bug reporting system is that we’re not using numbers to identify bug reports: they get big fast as many reports are submitted, and big numbers are hard to remember. Instead, we’re mashing two random words from the dictionary together. For instance, the first bug I reported using this system was called anaphorically-spinach.
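A minimal sketch of that naming scheme follows; the word list and the function name are illustrative stand-ins, since the real system presumably draws from a full dictionary:

```javascript
// Hypothetical sketch of the bug-naming scheme: pick two random words
// from a dictionary and join them with a hyphen, yielding memorable
// identifiers like "anaphorically-spinach". An optional random source
// can be injected for testing.
function makeBugId(words, random) {
  random = random || Math.random;
  function pick() {
    return words[Math.floor(random() * words.length)];
  }
  return pick() + "-" + pick();
}
```

With a dictionary of, say, 50,000 words, two-word combinations give 2.5 billion possible names, so collisions stay rare even as reports pile up.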

Under The Hood

The bug reporting system is pretty lightweight: mostly it’s just static HTML/JavaScript code that talks to a web service that’s implemented in under 100 lines of Python. The actual bug report is just a JSON object, and is deposited into a CouchDB server.

The big advantage of using CouchDB here is that we’ll be able to easily create really rich queries using plain old JavaScript. For instance, here’s a query that shows all reported error messages that contain the text “Invalid chrome URI”. It won’t be hard to create complex queries that, for instance, give us all the bug reports in which the user had a certain extension installed and had a command crash at a particular line in a particular file.
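CouchDB views are defined as JavaScript map functions that call emit() once per matching document. A hypothetical sketch of the “Invalid chrome URI” query might look like the following—the field name error_message is my assumption about the report schema, not the actual one:

```javascript
// Hypothetical CouchDB map function: emit each bug report whose
// recorded error message mentions "Invalid chrome URI". The field
// name `error_message` is an assumed part of the report schema;
// `emit` is provided by CouchDB's view server.
var mapInvalidChromeUri = function (doc) {
  if (doc.error_message &&
      doc.error_message.indexOf("Invalid chrome URI") !== -1)
    emit(doc._id, doc.error_message);
};
```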

A Public Asset

Right now all reports are submitted to the public domain, and as such the report database is a public asset; users are informed of this before they submit the bug, and encouraged to look at the additional data that’s being sent with their report to ensure that there’s nothing sensitive in there. In the future, it’d be nice to allow the user to click on any parts of the data that are personally identifying, so that they can submit a version of the report that masks out the sensitive information.


The bug report system has been designed to be decoupled from Ubiquity itself. For instance, the report viewing application is designed as a reusable JavaScript component, so it should ultimately be easy to embed into any web page. In other words, it should be easy to use as a bug reporting mechanism for any Firefox extension—and possibly for any web application in general. If you’re interested in adding the component to your own project, please let us know; the code is still a work-in-progress, and any contributions or comments are appreciated.

Browsing and Searching in China

Mike Beltzner recently wrote an excellent blog post that puts the newly-released Firefox China Edition in a cultural context:

I’m used to a very search-based culture, and was shocked to discover that search – while still important – was a secondary task for all of my Chinese colleagues. Their normal pattern would be to first visit an authoritative source (a portal of some form, either a media hub, a news site, or a topic-oriented site like one for music) and then drill into the information presented. For example, if I’m interested in going to the movies, I would search for “showtimes toronto” and then navigate from there. My colleagues, on the other hand, would more likely navigate to a place where they knew they could find reliable data, follow links to showtimes, and only then perhaps invoke search on the individual movies to find out more about them.

Beltzner goes on to say that “the ways in which people like to interact with that information is likely to be heavily influenced by their cultural contexts”, implying that there’s something about Chinese culture that promotes a browsing-based approach rather than a search-based one. As a result, Firefox China Edition takes on some new features to make it more amenable to browsing.

At the risk of sounding culturally insensitive, I’d like to play the devil’s advocate here. The browsing, drill-down approach that Beltzner describes above actually sounds like the way I used the internet ten years ago. Or, in the context of Silicon Valley, it’s about the Yahoo world-view vs. the Google world-view.

Over the past decade, Google has done a lot to “convert” me to using search rather than browsing and drilling-down; one of the best examples has been Gmail, where they transformed a traditionally hierarchical and sorting-based paradigm into a search-based one, thereby making it much easier for me to find the information I’m looking for. So I guess that a part of me wonders if this isn’t so much “cultural” as it is the case that the “search meme” hasn’t arrived in China yet. If that’s the case, then it’s possible that promoting the use of search could be useful in gaining early adopters.

At the same time, I’m not saying that browsing or drilling-down is useless outside of Chinese culture, either: to that extent, the Chinese edition has some really awesome features that would be useful to me personally, such as the built-in Juice addon (which has some functionality that we’d like to get into Ubiquity).

I could be totally off-base here—if I am, I’m very interested in finding out what it is about Chinese culture that results in different browsing habits. And regardless, the Chinese edition is definitely a very interesting experiment.

November Labs Night, Thunderbird Awesomeness

Last night we held a really fun Labs Night at Mozilla’s Building K in Mountain View, California. The Thunderbird team was here for their work week, some folks from Seedcamp dropped in, and Dion and Ben of the Ajaxian and the new Mozilla Developer Tools Lab were all here, which made for a night of innovative presentations that got lots of interesting conversations started.

The evening started out with Jono presenting a quick overview of all the currently active Labs projects while wearing a large sombrero. This was followed by Ben and Dion presenting an incredibly cool demo of something they worked on before they joined Mozilla, which wowed everyone in the audience. Personally, I was equally impressed by the way that they were able to literally finish each other’s sentences as a buzzer went off at random intervals, signaling them to switch speakers.

After that, Dave Ascher stepped up to present some really terrific new prototypes of Thunderbird user interface experiments. One of them, currently in the form of an extension called the ThunderBar, is essentially a Thunderbird translation of Firefox’s touted AwesomeBar: instead of showing you items from your browsing history and bookmarks, it shows you contacts and mail messages that match your search criteria in real-time, using Thunderbird’s brand-new global database extension dubbed “Gloda”.

Ascher also showed off a very cool prototype of a Gmail-style conversation view, along with a mashup of email data with the MIT SIMILE widget that presented a timeline of the user’s messaging activity.

He then explained that they were doing a lot of this new work using standard HTML rather than XUL, the XML-based UI language that makes up the interface of most Mozilla-powered applications. Among other things, this allowed the Thunderbird team to easily and quickly leverage the work of an incredible number of people working on the open web—an extremely well-documented and flexible platform used by designers and coders alike—rather than using what ultimately amounts to a user interface platform with few consumers and little documentation, tailored specifically for the functionality needed by Firefox and little else.

Coincidentally, this is the exact same reason that Ubiquity features as little XUL as possible; the command prompt is done entirely in HTML, and everything that might normally be a XUL window in an ordinary extension is done as an HTML page loaded in a browser tab. Aside from its many other benefits, using open web technologies in Mozilla client-side code also drastically lowers the barrier to entry for anyone to contribute to such projects, since it allows contributors to reuse skills that they’re likely to already have.

The Thunderbird presentation got me really excited about Thunderbird and its many possibilities; over the past few days that the Thunderbird team has been here, I’ve switched from Mail.app to nightly builds of “Shredder”, the codename for the upcoming Thunderbird 3, and I’m looking forward to seeing this project progress. I’m currently quite addicted to Gmail, but I think that Thunderbird has the potential to far surpass its awesomeness while being extremely respectful of my privacy.

After some lively discussion about all this, we took a quick break and came back to a bevy of 5-minute lightning talks, powered by Myk Melez’s egg timer to ensure that no one went past the time limit.

Jono kicked off the lightning talks by presenting his explorations in the land of pie menus. This was followed by a presentation by Alex Peake on his new world-bettering startup, EmpowerThyself.com. Vladimir Oane then presented his Seedcamp startup uberVU, a cool aggregator for conversations that span websites and web services. I did a quick talk on Ambient News, which was followed by an intriguing static HTML mock-up Bryan Clark made for conversation views in Thunderbird. Last to talk was Christopher Clay, who gave a presentation of soup.io, an interesting new service that lets people express themselves in a lot of different ways through the use of what appears to be an elegant, humane UI with plenty of support for undo.

All in all, I thought this Labs Night went really well, and I was particularly impressed with all the cool ideas that non-Mozilla folks brought to the table. Labs itself is meant to be a community of innovators, and in this respect I thought that last evening’s gathering brought us closer to what we’d ideally like to have: a place where everyone participates and contributes to the ongoing dialogue of figuring out how to make technology less frustrating and more empowering.

We just need to take pictures next time.

Online Business and Reciprocity

Farhad Manjoo recently wrote an article on Slate promoting the notion of online businesses like Facebook charging people for services. It’s an interesting business argument, but I wanted to address this situation from a more social perspective.

There’s some notable differences that emerge when I compare my two favorite web-based businesses, Google and Amazon. I feel very comfortable in my relationship with Amazon, largely because I understand how they help me and how I help them: I give them money, they give me goods or services. I know exactly how I’m helping them out, and I know exactly what I’m getting in return. It’s very easy for me to weigh the costs and benefits and make sound economic decisions based on them.

This kind of relationship is a well-known cultural norm that’s as old as our civilization: it’s called reciprocity. Assuming that all individuals and businesses have self-interest in mind, it’s actually a mechanism that helps build trust, because it makes intentions transparent. Almost everything that Amazon knows about me is based on the reciprocal relationship I’ve had with them, and as a result the information that they extrapolate based on it is not only highly accurate, but also welcome and appreciated. For instance, whenever I get an email from them suggesting a product that I might be interested in, there’s quite a good chance that I’ll actually be interested in it, because their suggestion is based on very good evidence—i.e., previous purchases that I and thousands of others have made with them. Because their business model is based around this reciprocal relationship, it’s in their best interests to offer supporting infrastructure to help their customers make informed decisions about their transactions with them. For them, this means my increased loyalty and purchasing; for me, this means an online resource that I find no less useful than Wikipedia.

Not having reciprocity in a relationship, on the other hand, can lead to suspicion and mistrust. If someone were to continuously give me, say, incredibly useful search results, an email client with an outstanding user interface, and an awesome code-hosting service completely free-of-charge, I’d wonder what their ulterior motive was. I’m referring, of course, to my relationship with Google, from which I’ve received a tremendous boon while Google has asked for nothing in return. Apparently they like to mine the personal data I’m giving them, but I have no idea what they’re doing with it. They throw ads at me, but I don’t visit Google to buy things; I visit them to search, receive and send email, and host my code, so the ads simply aren’t in the best interests of the user experience the way they are on Amazon.

It also makes no sense for Google to predict what I might pay money for, because they don’t know anything about what I’ve paid money for before. For instance, just because I have an email conversation with someone about our World of Warcraft raid last night doesn’t mean that I want to buy gold online. Even if the advertisement is potentially useful, there’s no social information to help me make a decision, such as the user-ranked ratings and reviews present on Amazon, and there also isn’t a trusted intermediary like Amazon to ensure that I’ll receive what I pay for. And I don’t expect Google to ever offer such things because they make money off selling advertisements to other companies, not selling products to me.

So, my relationship with Amazon mirrors my relationship with the store on main street, which itself is part of a functional social dynamic that’s been in place for hundreds of years, if not thousands. I’m not sure what my unbalanced relationship with Google mirrors, because technology has never actually allowed anything like it to exist before.

I do think there’s a word for a business relationship that doesn’t involve reciprocity, though: it’s called creepy. Check out Jenny Boriss’ excellent blog post titled Facebook is acting like your mother, and she’s very disappointed in you. The bottom line is that if I have to pay companies like these a monthly fee so that they can turn a profit and give me great service and not be creepy anymore, I’ll gladly do it.

Because reciprocity is awesome.

Ambient News: The Movie

A few weeks ago, I made my first screencast—a pitch for Ambient News on the Mozilla Labs Concept Series:

The screencast was recorded with Vara Software’s ScreenFlow; the title cards were composed in Adobe Photoshop CS3 and typeset in Helvetica Neue light.

I thought I’d write a few notes about some of the thoughts and experiences that went into the making of this.

I intentionally gave this video a target run-time of 45 seconds. In part, this was influenced by the practice of one of my favorite Philosophy professors at Kenyon, who limited the length of any paper we wrote to no more than two pages, forcing us to make extremely concise arguments. In video and audio, the second is analogous to the word, and limiting the run-time of my screencast to 45 seconds was my way of forcing myself to treat every moment of the viewer’s attention as the precious resource that it is.

It took me about 3 hours to make this screencast, in part because I’d never used ScreenFlow before, and in part because the video had to be ready in time to be included on the New Tab Concepts post on the Labs blog that would go live later the same day. I believe I gave myself 3 takes to get the recording right, and I wish I had been able to give myself more—there’s a number of intonation changes in my voice that I dislike, and a few unplanned things going on in the video that are a bit distracting and confusing. At the same time, though, I like the fact that I didn’t have gobs of time to spend on obsessively perfecting this (something I am wont to do). Being comfortable with making something that has rough edges is probably a healthy thing, and as such I’m reasonably pleased with the final result.

Ambient News

As some people know, it’s possible to get the latest news from our favorite sites on a single page through a fairly ubiquitous technology called web syndication. The advantage of this is that we can look at all the news we want in a single place, instead of having to visit dozens of websites per day.

Unfortunately, actually setting up web syndication can be a chore—and often, a confusing one at that. For instance, the way Firefox lets the user know if syndication is available for a page they’re looking at is by using an icon on the URL bar:

It’s that funky thing to the left of the star that looks like some concentric quarter-circles on a blue background. As Aza has explained in his post The End of an Icon, using a cryptic graphic can make it difficult for an end-user to know what the icon means unless someone tells them. So that’s the first barrier.

There’s more, though. On many pages, clicking on the aforementioned icon gives you a pop-up menu that looks like this:

RSS 2.0, RSS 0.92, and Atom 0.3 are all different formats for conveying essentially the same information. I personally have no idea what the differences between them are, and I imagine that most people don’t either. So presenting end-users with a fairly meaningless and intimidating question is yet another barrier to taking advantage of this technology.

But there’s even more. At this point, the user is presented with a page that requires them to choose a program to actually read their news with. After doing some research and picking a reader and learning how to use it, they need to manually subscribe to all the sites that they visit often.

All in all, this process is such a hassle that most people I know don’t bother using web syndication. I’ve only been an infrequent user of it myself; my newsreader tends to fall into disuse when my subscription list inevitably becomes out-of-sync with the sites that I actually visit.

So, in an attempt to solve this problem and explore the possibility of ambient information in the browser, I’ve started a little experiment. It’s a Firefox Extension called “Ambient News”, and its goal is to provide the user with zero-cost news about the sites that they visit frequently. The extension requires no configuration; you just install it and see if it helps you out.

One of the many great things about Firefox 3 is its Places subsystem—this isn’t so much a user-facing feature as it is an underlying engine that makes it really easy to create functionality that takes the user’s web-browsing history into account. So Ambient News leverages this to automatically figure out what sites you visit most frequently. When you visit them, it sees if they have news associated with them. And whenever you open a new browser tab, the blank page that shows up doesn’t stay blank. News about the sites you visit gently fades in, and you can click on any of it to view the new content.
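The ordering behavior amounts to a simple sort over whatever visit data Places reports. The sketch below is illustrative—the function and field names are mine, not the extension’s actual code, which queries Places directly:

```javascript
// Hypothetical sketch of Ambient News's ordering: given sites with the
// visit counts reported by the Places subsystem, order them so news for
// the most-visited sites fades in first. Field names (url, visitCount)
// are illustrative assumptions.
function orderByVisitCount(sites) {
  return sites.slice().sort(function (a, b) {
    return b.visitCount - a.visitCount;
  });
}
```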

For instance, shortly after installing the extension, I visit Planet Mozilla and Joel on Software. When I create a new tab, first news about Planet fades in, and then news about Joel-on fades in, which results in the following:

The Planet Mozilla news shows up before the Joel-on news because Ambient News has used the Places subsystem to figure out that I visit Planet more often than Joel-on. It can automatically access protected information like LiveJournal friends-only posts and intranet forums as long as I’m logged in to the relevant sites. And it all perfectly preserves my privacy, because the information that Ambient News mines is on my computer and stays there—it never goes to some company’s server for analysis and indexing.

Right now the extension is pretty primitive, and doesn’t do a lot of things that I’d like it to. But it’s good enough to start dogfooding and experimenting with, so if you’re brave and would like to try it out, feel free to install version 0.0.6 alpha. And if you’re a developer, you can check out the HG repository.

EDIT: The original version posted was 0.0.3 alpha, but bugfixes have been made since then.