Does Privacy Matter?

January 27th, 2014

A few years ago, I made a tool called Collusion in an attempt to better understand how websites I’d never even heard of were tracking my adventures across the Internet.

The results my tool showed me were, to put it mildly, a bit creepy. I didn’t terribly mind that third parties I’d never heard of had been watching me, in collusion with the sites I visited; I just wished they’d asked me first (through something more approachable than an inscrutable privacy policy).

But, as the old adage goes, I had nothing to hide. What do I care if some advertising companies use my data to offer me better services? Or even if the NSA mines it to determine whether I’m a terrorist?

I’m still struggling to answer these questions. I don’t know if I’ll ever be able to answer them coherently, but after reading a few books, I have some ideas.

For one thing, I don’t think it matters whether one has nothing to hide. What matters is whether one looks like one has something to hide.

One of the most invisible things about the Internet is that there are hordes of robots constantly scrutinizing your aggregate online behavior and determining whether you fit a certain profile. If you do, as Daniel Solove argues, your life could become a bit like that of Josef K. from Kafka’s The Trial. Or—to cite a true story—like that of Sarah Abdurrahman of On The Media, whose family was detained and aggressively interrogated for several hours at the US-Canada border for unknown reasons.

What determines whether you look like you have something to hide? The robot builders have it in their best interests to keep that secret: otherwise, the people with something to hide would simply start gaming the system. Yet this can also result in a chilling effect: innocent people self-censoring their online behavior based on what they think the robots might be looking for.

These robots don’t have to be working for the government, either. They could be working for, say, your health insurance company, looking for prior conditions that you might be hiding from them. The robots might even ostensibly work for “the people” in the name of transparency and openness, as Evgeny Morozov argues, distorting the public’s perception of you in ways that you can’t control.

What can one do to protect their privacy? One of the problems with using a tool like PGP or Tor to protect one’s privacy is that it paradoxically makes one look like they’re hiding something. When everyone lives in a glass house, you’ll look suspicious if you don’t.

Privacy problems are systemic, and I think the protections against them must be systemic too: in order for one to not look like they’re trying to hide something, privacy needs to be a default, not something one opts in to. This needs to happen not only in technology, but also through legislation and social norms.

Clarifying Coding

December 10th, 2013

With the upcoming Hour of Code, there’s been a lot of confusion about what “coding” actually is and why it’s useful, so I thought I’d contribute my thoughts.

Rather than talking about “coding”, I prefer to think of “communicating with computers”. Coding, depending on its definition, is one of many ways that a human can communicate with a computer; but I feel that the word “communicating” is more powerful than “coding” because it gets to the heart of why we use computers in the first place.

We communicate with computers for many different reasons: to express ourselves, to create solutions to problems, to reuse solutions that others have created. At a minimum, this requires basic explorational literacy: knowing how to use a mouse and keyboard, using them to navigate an operating system and the Web, and so forth. Nouns in this language of interaction include terms like application, browser tab and URL; verbs include click, search, and paste.

These sorts of activities aren’t purely consumptive: we express ourselves every time we write a Facebook post, use a word processor, or take a photo and upload it to Instagram. Just because someone’s literacies are limited to this baseline doesn’t mean they can’t do incredibly creative things with them.

And yet communicating with computers at this level may still prevent us from doing what we want. Many of our nouns, like application, are difficult to create or modify using the baseline literacies alone. Sometimes we need to learn the more advanced skills that were used to create the kinds of things that we want to build or modify.

This is usually how coders learn how to code: they see the digital world around them and ask, “how was that made?” Repeatedly asking this question of everything one sees eventually leads to something one might call “coding”.

This is, however, a situation where the journey may be more important than the destination: taking something you really care about and asking how it’s made–or conversely, taking something imaginary you’d like to build and asking how it might be built–is both more useful and edifying than learning “coding” in the abstract. Indeed, learning “coding” without a context could easily make it the next Algebra II, which is a terrifying prospect.

So, my recommendation: don’t embark on a journey to “learn to code”. Just ask “how was that made?” of things that interest you, and ask “how might one build that?” of things you’d like to create. You may or may not end up learning how to code; you might actually end up learning how to knit. Or cook. Or use Popcorn Maker. Regardless of where your interests lead you, you’ll have a better understanding of the world around you, and you’ll be better able to express yourself in ways that matter.

How Colorblindness Blinds Us

November 4th, 2013

When will we (finally) become a colorblind society? The pursuit of colorblindness makes people impatient. With courage, we should respond: Hopefully never.

— Michelle Alexander

In her excellent book The New Jim Crow, Michelle Alexander makes an argument that the notion of colorblindness is a deeply flawed principle that has proved catastrophic for African Americans in the post-civil rights era.

This is a notion that I find confusing, and I don’t claim to fully understand Alexander’s argument. One aspect I can relate to, however, is what happens when we set unreasonable expectations for our own inevitable prejudices.

I was particularly struck by two seemingly trivial racially charged faux pas that occurred earlier this year. The first occurred when Lisa Lampanelli, a celebrity I’d never heard of, posted a tweet affectionately referring to a friend as “my nigga.”

The other was when actress Julianne Hough decided to wear blackface as part of her Halloween costume.

What surprised me wasn’t the incidents themselves, but the public’s response to them, which was to inundate the celebrities with scorn and derision. No discussions were had, and no rationales were given: the actions of these celebrities were instantaneously judged vile and offensive, and everyone was told never to repeat the same mistakes.

Race isn’t brought up very often in today’s public discourse; when it is, it usually follows this familiar pattern of a faux pas followed by scorn and a hasty apology. This is partly due to the colorblind principle, which polarizes the notion of prejudice: we’re supposed to be colorblind, and if we’re not, we’re racist bigots. Therefore, it’s safest to never, ever mention race, lest one be labeled.

In 2008, a study at Northwestern University’s Department of Social Psychology found that “white subjects [are] so afraid of being branded as racist, they indicated a preference for avoiding all contact with black people.” This is something I’ve felt personally, despite not being white. And it’s no surprise, given the outcome of the aforementioned incidents and countless others like them.

The net effect of our reaction to race in public discourse—that is, the instinct to brand anyone who isn’t colorblind as racist—blinds us to everything important in our culture that is actually race-based, such as the multitude of issues surrounding the school-to-prison pipeline that Alexander addresses in her book.

In today’s world, I’d argue that racism actually has very little to do with calling a friend “my nigga” or wearing blackface. It has everything to do with the sense of fear I feel when I realize a black man is walking behind me on my way home. But until we stop pretending that we’re colorblind and building a culture of fear around conversations about race, we won’t even realize this kind of racism exists, let alone have a truthful dialogue about it.

Audio Things!

September 6th, 2013

I’ve really gotten into podcasts this summer. Normally, I find them difficult to focus my attention on, but some habits I’ve picked up recently have helped with this: I started running regularly, and I started playing Euro Truck Simulator 2. In fact, I liked the latter so much that I started a blog about it at eurotruckin.tumblr.com.

Just as french fries are my delivery vehicles for ketchup, these new activities are my delivery vehicles for podcasts.

Well, I haven’t only been listening to podcasts. In particular, while driving my virtual truck around Europe, I’ve been listening to the BBC World Service. This was largely motivated by my desire to feel European, but it’s an excellent station nonetheless.

I’ve also been listening to audiobooks, which has been made particularly enjoyable by Amazon’s Whispersync for Voice technology. This allows me to effortlessly switch between the Kindle and audio versions of a book, depending on the context (both media can be purchased together for a low price). Using this, I alternately read and listened to Michelle Alexander’s The New Jim Crow, which I highly recommend to anyone living in America.

And then there are the podcasts. Some of them are the staples that most people I know have heard of, like Radiolab and This American Life; I listen to them and they’re amazing for pretty obvious reasons. But there are a few potentially lesser-known ones I’d like to highlight:

  • Life of the Law is my latest obsession. I first discovered this through the 99% Invisible episode An Architect’s Code, which both shows collaborated on. It’s a fascinating podcast that contextualizes our legal system in ways that make people like me, who are normally bored to tears by the law, utterly enthralled. This is probably aided by the fact that, like Planet Money—another unexpectedly fascinating show—every episode is relatively short and focused.
  • On The Media is consistently interesting to me because it examines the way the media covers current events. I’m not always interested in current events in and of themselves, but I am fascinated by the way the media covers them, so this podcast is often my gateway to understanding what’s going on in the world.
  • Spark was recommended to me by Mark Surman and I love it because it’s a show about technology for people who aren’t, well, obsessed with it. This means that topics often focus on the impact of technology on society, with a great balance of coverage between its positive and negative effects.

I’ve been using an iPhone app called Downcast to listen to these, and have found it much more convenient and usable than the default Podcasts app.

If there are any podcasts you regularly listen to and think I might enjoy, please feel free to tweet your suggestions @toolness.

An HTML Microformat for Open Badges

July 31st, 2013

Sometimes a person wanders by the #badges IRC channel and asks us how to issue a badge.

The response usually involves asking what kind of technical expertise they have: if they’re a programmer, we point them at the specification; if they’re not, well, we usually point them to a place like badg.us or credly.

One of the problems with pointing people at the specification is that it’s highly technical. JSON, the format a badge takes, is unfamiliar to non-programmers and doesn’t support code comments that could make things a bit easier to grasp. Once a badge is hosted as JSON, the URL to the JSON file needs either to be opaquely “baked” into a PNG file or to be passed to the Open Badges Issuer API behind the scenes, which requires additional programming. Furthermore, the JSON file needs to specify at least a criteria URL, which necessitates the creation of a human-readable HTML page.
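To give a flavor of what non-programmers are up against, here’s a rough sketch of a hosted badge assertion. The field names and values below are illustrative rather than copied verbatim from the specification:

```json
{
  "recipient": "sha256$c7ef86405ba71b85acd8e2e95166c4b111448089f2e1599f42fe1bba46e865c5",
  "badge": "https://example.org/badges/html-basics.json",
  "issuedOn": "2013-07-31",
  "verify": {
    "type": "hosted",
    "url": "https://example.org/assertions/1234.json"
  }
}
```

Even a snippet this small involves hashed email addresses, cross-referenced URLs, and strict syntax rules; a single misplaced comma makes the whole file invalid, and there’s no way to annotate it with an explanatory comment.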

That’s a lot of parts.

But what if a badge were just a Web page, formatted in a consistent way that made it easy for machines to read? What if issuing a badge was as easy as filling out a form, copying out a resulting HTML snippet and pasting it into your blog or website?

Microformats can help us do this, because they were designed precisely for this kind of purpose.
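To sketch what this might look like, a badge could be an ordinary chunk of HTML. The class names here are invented for illustration; they aren’t part of any ratified microformat:

```html
<!-- Hypothetical badge microformat; class names are invented for illustration. -->
<div class="openbadge">
  <img class="badge-image" src="html-basics.png" alt="HTML Basics badge">
  <h2 class="badge-name">HTML Basics</h2>
  <p class="badge-description">Awarded for demonstrating basic HTML literacy.</p>
  <p class="badge-criteria">
    The recipient built and published a Web page using valid, hand-written HTML.
  </p>
  <p>
    Issued to <span class="badge-recipient">jane@example.org</span> on
    <time class="badge-issued" datetime="2013-07-31">July 31st, 2013</time>.
  </p>
</div>
```

Everything a human needs to read doubles as everything a machine needs to parse, and the human-readable criteria live right on the page instead of at a separate URL.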

Now let’s look at the other solution: hosting one’s badges through third-party services like badg.us or credly. While incredibly easy, this approach has a problem: the badge metadata is hosted on—and therefore issued by—a domain that the badge’s creator doesn’t actually own. This can be particularly confusing for recipients and verifiers who discover that a badge was issued by a domain they may never have heard of.

When badges can be represented as HTML, however, we make it really easy for people to host badges on domains they already own. If someone’s presence on the internet is already represented by their blog or website, shouldn’t we make it as easy as possible for them to issue badges from there, rather than an unrelated domain?

I made a proof-of-concept Web service that allows you to play around with this idea at badge-bridge.herokuapp.com. Just fill out the form and paste the resulting HTML snippet in your blog or website, and you’re good to go. The snippet even includes a “push to backpack” button that allows the recipient to push the badge to their backpack.

One of the limitations of my service is that it’s really just a “bridge”, or hack, that translates between the Badge microformat I’m proposing and the JSON specification that Open Badge tools currently support. As a result, the issuer of the badge will appear to be badge-bridge.herokuapp.com rather than your actual blog or website. If an HTML microformat were added to the Open Badges specification, however, we wouldn’t need a bridge, and this problem would go away.

For more information on the technical details of the microformat, including potential security concerns, see the README for the Github project.

On Enforcing Mandatory Code Review

January 17th, 2013

Many software projects enforce mandatory code reviews, even for their most senior developers. While I’ve mentioned before that code reviews can be very useful, I also think that mandatory code reviews among trusted members of a software team can have a number of downsides.

First and foremost, developers have no common consensus on what code review actually means. How much time should it take? Does it mean painstakingly processing every line of code the way a machine would? Does it mean evaluating the high-level architecture of a patch, or finding formatting errors? Does it mean just skimming the code and vaguely understanding it enough to take care of it if the original author gets hit by a bus? Does it mean evaluating the big-O complexity of an algorithm? Does it mean all of these things?

Many people who ask for mandatory code reviews have no idea what they’re asking for (it’s mandatory, so they have to ask), and the people who perform the reviews are in a similar position. As a result, while code reviews often improve software quality, in some environments a mandatory review policy can amount to an ill-defined bureaucratic ritual of unknown value.

Because of this, I’ve seen a lot of reviewers—myself included—offer nothing but so-called “nitpicks” in their code review comments, as a way of appearing to perform a useful act while in fact slowing a project down, destroying morale, and optimizing for their own minimal time investment by engaging in days or weeks of asynchronous pedantry. Other times, because reviewers have been asked to do something extremely vague, they often procrastinate, which causes code to bit-rot and sets a project back even further.

But what happens when we make code reviews voluntary, instead of mandatory? Well, then it’s called asking for advice.

Many of the most useful “code reviews” I’ve experienced came not from asking someone’s permission to land code, but from simply being uncertain of very specific aspects of my own code, and asking my peers for help. Sometimes this has come in the form of a github pull request; other times it’s been in the form of pair programming; other times it’s just involved me dumping some source code into a webpage and asking someone over IRC about it.

There are a number of things I like about this practice. The first is that it’s my choice to ask for advice, which is far more empowering than asking for permission, which is what a mandatory code review policy implies. Even if I had the exact same conversations through mandatory code reviews that I would through voluntary code reviews, I would still enjoy the latter more, because they’re my decision rather than my obligation.

Another advantage of voluntary code reviews is that I know exactly what I’m asking for. If I feel insecure about my own code, I can introspect and understand why I’m feeling that way, which leads me to specific questions. Often different questions are best answered by different people, some of whom may even work on different projects; when I ask those people to review my code, I’m requesting very specific things that are highly relevant to their expertise. I’m also targeting my questions in a way that ensures that I don’t take up too much of their time. And because it’s viewed by them as a well-defined, time-boxed favor rather than a vague obligation, they’re typically much more responsive and excited about helping me than they would be if it were a mandatory code review.

In conclusion, rather than decreasing software quality, I believe that the social incentives inherent in voluntary code review policies encourage developers to take ownership of the code they write by paying close attention to its needs and valuing the time of others who may need to take a look at it.

Building Bridges Between GUIs and Code With Markup APIs

January 7th, 2013

Recently the Twitter Bootstrap documentation gave a name to something that I’ve been excited about for a pretty long time: Markup API.

Markup APIs give superpowers to HTML. Through the use of class attributes, data attributes, X-Tags, or other conventions, they effectively extend the behavior of HTML, turning it into a kind of magic ink. Favorite examples of mine include Twitter Bootstrap, Wowhead Tooltips, and my own Instapoppin.

The advantages of a markup API over a JavaScript API are numerous:

  • They mean that an author only needs to know HTML, whose syntax is very easy to learn, rather than JavaScript, whose syntax is comparatively difficult to learn.
  • Because the API is in HTML rather than JavaScript, it’s declarative rather than imperative. This makes it much easier for development tools to intuit what a user is trying to do—by virtue of a user specifying what they want rather than how to do it. And when a development tool has a clearer idea of what the user wants, it can offer more useful context-sensitive help or error messaging.
  • Because of HTML’s simple and declarative structure, it’s easy for tools to modify hand-written HTML, especially with a library like Slowparse, which helps ensure that whitespace and other formatting are preserved. Doing the same with JavaScript, while possible with libraries like esprima, can be difficult because the language is so complex and dynamic.
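To make the contrast concrete, here’s the flavor of thing I mean. The data-tooltip attribute and the showTooltip helper below are invented for illustration, though real markup APIs like Bootstrap’s work along broadly similar lines:

```html
<!-- Declarative: the author states *what* they want. A library watching for
     the (hypothetical) data-tooltip attribute supplies the behavior. -->
<button data-tooltip="Saves your work">Save</button>

<!-- Imperative: the JavaScript equivalent forces the author to spell out *how*.
     (showTooltip is a hypothetical helper.) -->
<script>
  var button = document.querySelector("button");
  button.addEventListener("mouseenter", function () {
    showTooltip(button, "Saves your work");
  });
</script>
```

A tool inspecting the first version can tell at a glance that the author wants a tooltip; inferring the same intent from the second requires analyzing arbitrary code.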

These advantages make it possible to create GUI affordances atop hand-coded HTML that make it much easier to write. As an example of this, I hacked up a prototype slideshow demo and a physics demo in July of last year. Dragging an element with the class thimble-movable in the preview pane changes (or adds) CSS absolute positioning properties in the source code pane in real-time, and holding down the shift key modifies width and height. This allows users to size and position elements in a way that even a professional developer would find far preferable to the usual “guess a number and see how it looks” method. Yet this mechanism still places primacy on the original source code; the GUI is simply a humane interface for changing it.

This is the reverse of most authoring tools with an “export to HTML” feature, whereby an opaque internal data model is compiled into a blob of HTML, CSS, and JavaScript that can’t be re-imported into the authoring tool. Pedagogically, this is unfortunate because it means that there’s a high cost to ever leaving the authoring tool—effectively making the authoring tool its own kind of “walled garden”. Such applications could greatly facilitate the learning of HTML and CSS by defining a markup API for their content and allowing end-users to effortlessly switch between hand-coding HTML/CSS and using a graphical user interface that does it for them.

Building Experiences That Work Like The Web

December 5th, 2012

Much has been said about the greatness of the Web, yet most websites don’t actually work like the Web does. And some experiences that aren’t even on the web can still embody its spirit better than the average site.

Here are three webbish characteristics that I want to see in every site I use, and which I try my best to implement in anything I build.

  • “View Source” for every piece of user-generated content. Many sites that support user comments allow users to use some kind of markup language to format their responses. Flickr allows some HTML with shortcuts for embedding other photos, user avatars, and photo sets; Github permits a delicious smorgasbord of HTML and Markdown.

    The more powerful a site’s language for content creation, the more likely it is that one user will see another’s content and ask, “how did they do that?”. If sites like Flickr and Github added a tiny “view source” button next to every comment, it would become much easier for users to create great things and learn from one another.

    I should note that by “source” I don’t necessarily mean plain-text source code: content created by Popcorn Maker, for instance, supports a non-textual view-source by making it easy for any user to transition from viewing a video to deconstructing and remixing it.

  • Outbound Linkability. Every piece of user-generated content should be capable of “pointing at” other things in the world, preferably in a variety of ways that support multiple modes of expression. For instance, a commenting system should at the very least make it trivially easy to insert a clickable hyperlink into a comment; one step better is to allow a user to link particular words to a URL, as with the <a> tag in HTML. Even better is to allow users to embed the content directly into their own content, as with the <img> and <iframe> tags.

  • Inbound Linkability. Conversely, any piece of user-generated content should be capable of being “pointed at” from anywhere else in the world. At the very least, this means permalinks for every piece of content, such as a user comment. Even better is making every piece of content embeddable, so that other places in the world can frame your content in different contexts.

As far as I know, the primary reason most sites don’t implement some of these features is security concerns. For example, a naïve implementation of outbound linkability would leave itself open to link farming, while allowing anyone to embed any page on your site in an <iframe> could make you vulnerable to clickjacking. Most sites “play it safe” by simply disallowing such things; while this is perfectly understandable, it is also unfortunate, as they forfeit much of what makes the Web such a generative medium.
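To take the clickjacking example: the standard defense is an X-Frame-Options response header, and it doesn’t have to be all-or-nothing. A site can choose how strict to be:

```
X-Frame-Options: DENY
X-Frame-Options: SAMEORIGIN
```

DENY forbids framing entirely, while SAMEORIGIN still permits a site to embed its own pages; mechanisms for whitelisting specific third-party origins are also emerging. The trouble is that most sites reach for the strictest option without weighing what they give up.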

I’ve learned a lot about how to mitigate some of these attacks while working through the security model for Thimble, and I’m beginning to think that it might be useful to document some of this thinking so it’s easier for people to create things that work more like the Web. If you think this is a good (or bad) idea, feel free to tweet @toolness.

Questions: Designing for Accessibility on the Web

July 7th, 2012

Marco Zehe recently wrote a good, sobering blog post comparing the accessibility of Web apps to those of native ones.

Much of what I’ve seen on supporting accessibility on the Web has to do with using the right standards: always providing alt attributes for images, for example, or adding semantic ARIA metadata to one’s markup.
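For the curious, that standards-level advice boils down to markup like this (the image, text, and labels are invented for the example):

```html
<!-- A textual alternative that a screen reader can speak in place of the image -->
<img src="chart.png" alt="Bar chart: signups doubled between May and June">

<!-- A styled div acting as a button, annotated with ARIA attributes so that
     assistive technology knows its role and current state -->
<div role="button" tabindex="0" aria-pressed="false">Mute</div>
```

Necessary, certainly, but it says nothing about whether the resulting interface is actually pleasant to navigate by ear.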

As a designer, however, I don’t have much interest in these standards because they don’t seem to address human factors. What’s more interesting to me is understanding how screen readers present the user interface to vision-impaired people and how usable that interface is to them. This would parallel my own experience of designing for non-impaired users, where I use my understanding of human-computer interaction to create interfaces from first principles.

I’ve been meaning to get a screen reader and spend a few days browsing the Web with it to get a better idea of how to build usable interfaces, but I haven’t gotten around to it yet, and I’m also not sure it’s the best way to empathize with vision-impaired users. In any case, my general concern is that there seems to be a distinct lack of material on how to build truly usable web applications for the vision-impaired. Instead, I only see articles on how to be ARIA standards-compliant, which tell me nothing about the actual human factors involved in designing for accessibility.

So, I’ll be spending some time looking for such resources, and trying to get a better idea of what it’s like to use the internet as someone who is vision-impaired. If you know of any good pointers, please feel free to tweet at me. Thanks!

Empathy For Software Conservatives

July 6th, 2012

Recently my friend Jono wrote an excellent blog post entitled Everybody hates Firefox updates. I agree with pretty much everything he says.

What most struck me during my time in Mozilla’s Mountain View office was a complete lack of empathy for people who might want to “stay where they were” and not upgrade to the latest version of our flagship product. Whenever someone asked a question like, “what if the user wants to stay on the old version of Firefox?”, the response was unequivocally that said user must be delusional: no one should ever want to stay on an old version of a product. It simply doesn’t make sense.

I’ve always attributed this bizarre line of thinking to everything I dislike about the Silicon Valley bubble, though I don’t ultimately know whether it’s endemic to the Valley or to tech companies in general. I personally have always despised upgrading anything–not just Firefox–for exactly the reasons that Jono outlines in his post. Most people at Mozilla–and likely at other software companies–seem to think that it’s just the outliers who dislike such disruption. Maybe they are; I have no idea. But I and most people I know outside the software industry view upgrades as potential attacks on our productivity, not as shiny new experiences. If there’s any desire within Mozilla to cater to such “software conservatives”, the first step is having actual empathy for folks who might want to stay right where they are, rather than treating them as an irrelevant minority.