January 27, 2014

Does Privacy Matter?

A few years ago, I made a tool called Collusion in an attempt to better understand how websites I’d never even heard of were tracking my adventures across the Internet.

The results my tool showed me were, at worst, a bit creepy. I didn't mind terribly that third parties I'd never heard of had been watching me, in collusion with the sites I visited. I just wish they'd asked me first (through something more approachable than an inscrutable privacy policy).

But, as the old adage goes, I had nothing to hide. What do I care if some advertising companies use my data to offer me better services? Or even if the NSA mines it to determine whether I’m a terrorist?

I’m still struggling to answer these questions. I don’t know if I’ll ever be able to answer them coherently, but after reading a few books, I have some ideas.

For one thing, I don’t think it matters whether one actually has something to hide. What matters is whether one looks like they do.

One of the most invisible things about the Internet is that there are hordes of robots constantly scrutinizing your aggregate online behavior and determining whether you fit a certain profile. If you do, as Daniel Solove argues, your life could become a bit like that of Josef K. from Kafka’s The Trial. Or—to cite a true story—like that of Sarah Abdurrahman of On The Media, whose family was detained and aggressively interrogated for several hours at the US-Canada border for unknown reasons.

What determines whether you look like you have something to hide? The robot builders have it in their best interests to keep that secret: otherwise, the people with something to hide would simply start gaming the system. Yet this can also result in a chilling effect: innocent people self-censoring their online behavior based on what they think the robots might be looking for.

These robots don’t have to be working for the government, either. They could be working for, say, your health insurance company, looking for prior conditions that you might be hiding from them. The robots might even ostensibly work for “the people” in the name of transparency and openness, as Evgeny Morozov argues, distorting the public’s perception of you in ways that you can’t control.

What can one do to protect their privacy? One of the problems with using a tool like PGP or Tor is that it paradoxically makes one look like they’re hiding something. When everyone lives in a glass house, you’ll look suspicious if you don’t.

Privacy problems are systemic, and I think the protections against them must be systemic too: for one not to look like they’re trying to hide something, privacy needs to be the default, not something one opts in to. That can’t be accomplished through technology alone; it also requires legislation and social norms.
