I recently discovered a quirk in my Google news “alert” system: for some serendipitous reason, the system confuses “online privacy” (one of the key terms I’ve selected) with “online piracy” (a term that I did NOT select). Over the past few weeks, consequently, I’ve received a lot of articles about SOPA, which Google apparently thinks stands for the “Stop Online Privacy Act.” I didn’t pay much attention to what seemed at first to be a trivial — albeit suggestive — glitch in Google’s mass customization algorithm until I encountered info-tech guru Jaron Lanier’s op-ed piece in the New York Times on SOPA (which, of course, actually stands for the now deferred or defunct Stop Online Piracy Act). Lanier’s closing remarks, which claim that a culture of online sharing necessarily contributes to the erosion of online privacy, neatly echo Google’s confusion.
After expressing his understandable distaste for online platforms like Facebook that make money by tracking and targeting users, Lanier scolds what he calls “the ‘open’ internet movement” (complete with scare quotes) for helping usher in the era of hyper-commodified personal data-mining. On Lanier’s account, those who argue that information should be free – as in both free beer and free speech (which certainly doesn’t describe the open source movement, though it’s not clear whom Lanier means by the “open internet movement”) – should expect to have their own data freely collected and traded. He claims that what happens to music and movies online will also necessarily happen to personal information. As Lanier puts it, “We in Silicon Valley undermined copyright to make commerce become more about services instead of content…The inevitable endgame was always that we would lose control of our own personal content, our own files.”
It turns out that, in the alternative universe imagined by Lanier, if we had only respected copyright and paid for online content, corporations wouldn’t be using the interactive capacity of the internet to mine our data, track our behavior, and experiment with new forms of target marketing. Back in our own universe, of course, companies created large markets in transactional data and other types of market research long before the birth of Napster, P2P file sharing, or the internet. When, early in the 20th century, market research pioneer Archibald Crossley (who started the broadcast ratings system) experimented with sorting through people’s trash to see what they were buying, he didn’t have to get them to sign over the rights to the information. In Lanier’s imagined universe, intellectual property protects transactional data, but that certainly isn’t the case in our universe; otherwise we could copyright our credit card transactions and send cease-and-desist letters to the companies that collect, share, and sell data about our activity online and off.
Lanier’s mistake is a telling one, because it highlights the general confusion around discussions of privacy in the digital era and replicates the tendency of corporate America to claim that, despite what people may say, their actions show they don’t care about online privacy. Lanier’s observations are not just historically inaccurate; they are borderline nonsensical: the capture and use of personal data have little to do with a culture of openness, but are all about new forms of privatization that allow companies to make money by creating proprietary databases accessible only to those who are willing to pay. In short, this is not a simple story about the end of privacy and private property, but about the creation of new forms of private property based on the collection and aggregation of personal data. Far from undermining the development of private databases, file sharing contributes to it, as illustrated by companies like Big Champagne that capture and sell data about file-sharing activity.
One of the challenges posed by the digital era is to come up with a language for discussing the control of personal data in ways that avoid reinforcing the regimes of privatization underpinned by the property-rights-inflected language of privacy. Lanier got it wrong – we do not actually have an intellectual property right in our transactional data (and creating one probably wouldn’t help matters much) – but the confusion is understandable: privacy and private property have an apparently natural affinity. We are learning, however, that the connection is more complex than the simple equation proposed by Lanier.
Lanier’s broader point is that our reluctance to pay directly for online content has forced companies to develop a commercial model based on detailed monitoring and tracking. This is a somewhat more coherent argument, but the notion that companies would forswear creating secondary markets in personal information if we would simply agree to pay directly for online content is suspect, both historically and practically. It might make more sense to point out that, to the extent that we rely on a private, for-profit, commercial model for the provision of online content, we are committing ourselves to a world characterized by increasingly comprehensive and sophisticated forms of consumer monitoring, tracking, and manipulation.