Un-Building Continuous Automated Surveillance: A Q&A with Nick Couldry

June 20, 2017 • By Nick Couldry

What is the topic of your Enhancing Life Project research? Has there been anything recently in the news that's struck you as relevant to your topic? 

The general topic is the possible tension between a good life—an enhanced life—and the continuous automated surveillance which is now built into most of the platforms and programs with which we interact online. The whole basis of the internet—all the business models of internet-based businesses—depends on gathering data, using the data, and tracking us.

The very idea of keeping people under continuous surveillance was long regarded as absolutely inconsistent not only with democracy, but also with any possible good life: our vision of hell is being subject to continuous surveillance, because it interferes with the very space in which we exist and into which we grow as beings that reflect on the world and constantly transform ourselves in relation to it. 

This business question about surveillance and data collection—which everyone agrees is at the heart of the way the internet developed—is a key ethical issue for the quality of contemporary life. That's the basic tension we're exploring, particularly focused on the discourses surrounding data collection.

An example might be in relation to WhatsApp. We all know that Facebook observes us online: it generates data from what we do online, and so on, and so forth. WhatsApp has been regarded as in some way an oasis from this surveillance, because all the messages we send on WhatsApp are known to be encrypted, which is regarded as one of its unique selling points. However, WhatsApp was bought for rather a lot of money—tens of billions of dollars—by Facebook, and it was never clear why Facebook would pay that huge sum. For a long time, WhatsApp said "everything's encrypted, and nothing could possibly happen with the data"—but then it started to send signals to its users that it would be sharing their data with Facebook.

At that point, European regulators said that this was an unacceptable breach of the terms, and Facebook has been fined in relation to it. This is an example of legal authorities getting active in dealing with the practical implications of what I think is a much broader ethical problem.

There are obviously a lot of places where your research is relevant to public discourse. One is the way that people who are concerned about big data often end up getting portrayed as paranoid, or as stuck in the past: either you're "stuck in the past" or you're "ready for progress." In what ways does your project give insight into why this framing might not be the most useful or appropriate way to discuss big data becoming ubiquitous?

If you describe what's going on here as it is—in other words, as the building of continuous automated surveillance of potentially everything we do online—then, even bearing in mind the real possibilities of the services that are offered to us, there has to be a potential conflict between that surveillance and autonomy and freedom. We don't spend all our time arguing through that. That's the argument I'm writing out in the book—it's a much more theoretical and philosophical position, but it's a starting point for the empirical work.

What we've been doing in the empirical work over the past year and a half is looking at how the conversation contains very sharp, dramatic contradictions. These get covered over in discourse, and you've just given some good examples of how that works—people say "well, that's the way we used to think, but notions of privacy have changed," and they often refer to young people as having a different notion of privacy. People also say that even if we still feel a bit of nervousness, the convenience of these services is so overwhelming that surely we will be happy to make the trade.

Again, our starting point, philosophically (which we don't argue through in the empirical work, because it's not really an empirical point), is that you can't really trade in your autonomy. The space in which each of us exists as a human being, and develops and grows until we die, is what we are—it is what we have. Of course, we're not isolated from the world. We draw on Hegel's notion of autonomy, which emphasizes that our very existence is grounded in the social world—but at the same time, at the core, through this social grounding, what is preserved is this space of autonomy, this "space close to us," as he describes it, a "home with us," where we are ourselves. It's that very space that is about to have a camera installed in it. If we accept that, we're trading in one of the most basic features of what it is to be a human being. So it's not tradable, as far as we're concerned, but it's presented as if it's an easy trade-off.

Another thing that's often said is that the data somehow already exists: a very common thing we find lots of examples of is talk of data as the "new oil"—a sort of natural resource. That's a very strange metaphor, because it ignores the very thing we think is most important in our project, which is the collection of the data in the first place through surveillance. Using this kind of metaphor starts the conversation at the point when the data already exists, after its collection. Often, data is referred to as "data exhaust," a side effect of just moving along the road. Within that metaphor, you might say, "well, why wouldn't you make something good out of this otherwise toxic stuff?" But, again, that ignores the "engine" that produces the exhaust. In this case, the "engine" is not something that humans have designed to move around the world; it's a decision by corporations to surveil us and to extract the data that is generated as we move around in corporate spaces.

We think it's a big mistake to regard data as "naturally there." It's precisely the mistake that gets covered over—the ethical problem that is so important to, if you like, any possible "enhanced life.” We very much see our project as at the core of the ethical concerns of the whole Enhancing Life Project: we’re trying to pick away at the hidden ethical problems that aren't even seen as ethical problems. 

One thing that I’ve heard brought up in conversations about big data is that there's "no going back." Would there hypothetically be a way to kind of "give the data back to the people," if the problem is the corporate control of this data? Or is it a problem more inherent to the very collection of the data itself? 

I do think there's a problem in the profit-motivated collection of data, for the simple reason that corporate interests and personal interests exist on different scales; they're aimed at totally different things. And, therefore, there has to be a gap between the corporate goal and the goal of the individuals as a collective—they cannot be the same, ever. There can be cases, though (and they're very important), in which we can imagine a community deciding that it would be valuable to find out more about a certain aspect of the behavior of everyone in that community—perhaps to learn what causes it. Within a certain city, people might say, let's observe ourselves for a period, and we can learn some lessons that can improve the nature of the city. These examples would be civic uses of data. But both those cases—and, of course, the promotional version of those cases, like tracking ourselves to get fitter with Fitbit, and various medical tracking devices—all of them are for specific purposes. If the purpose is fully agreed to by people—and, of course, limited to that particular purpose—it's not necessarily a problem. But when it adds up to a default state wherein everything is under surveillance all the time, and the data can be used for whatever purpose, even if we don't know what that purpose is—that's a very different proposition.

This may be a silly question, but imagine some cell phone company, or Google, or something, had to carry a warning label—kind of like how cigarette boxes have big warning labels that say "we are compelled to tell you that there might be some danger to this." If you could design a warning label for this data collection, what would it say?

I don't think that would be enough, because I don’t think our basic autonomy as human subjects is something we can trade: that would be giving up on being human subjects. To take a very extreme example—the examples we're dealing with here are very different—part of what horrifies us to the core about concentration camps is this idea of absolute surveillance: literally watching people all the time, taking away any conceivable little space they have to be themselves.

Now, there's no violent intent in the corporate case at all, but the act of taking away space is the same. It is often said, of course, that we're "getting used to it anyway," but I'm very skeptical as to whether that's true. People actually have a lot of reservations about this issue: for example, the strong public interest generated by Dave Eggers' novel "The Circle," which is coming out as a film any week now, points to the fact that this topic can be very disturbing and can generate a lot of reaction. So it's not clear to me that people are "getting used to it."

But even if that were true, there are increasingly legal scholars who are arguing that we need to think hard about constant surveillance, to denaturalize it, and potentially to find ways to challenge it. This leads us to another very important point about what we're "getting used to": there's a very, very big difference between the American market-based liberal tradition—which broadly acknowledges the primacy of markets and holds that the role of corporations is to do their best to generate profits unless they create actual harm—and the European tradition, which puts much more emphasis on fundamental rights that it is simply not acceptable for corporations to trespass on. Of course, you have the Constitution in the US, but most lawyers at the moment don't believe that freedom of speech and the right to protection of property are sufficient to get at the uses of data that we're talking about.

In American law, there's a bit of a gulf at the moment. In Europe, however, the European General Data Protection Regulation comes into force next year—a massive document that will require corporations to notify individuals whenever they collect any data, to give users a plain-language account of what data they're collecting, why they collected it, and how long they're going to keep it, and to give them the option at any point to say, "no, I don't want you to keep that" (to summarize a hundred pages of law in a sentence or two). That law starts out from the principle that there are certain fundamental human rights that are potentially damaged, and that are certainly affected, by the gathering of data.

There’s also a principle in German constitutional law, the "right to free development of the personality.” So, when we say we're "getting used to this," parts of the world with very different normative traditions from the United States are not getting used to it—in fact, they're developing legal codes to challenge it.

That way of looking at this could be particularly helpful in an American context. The first thing that came to mind for me was the public discourse of "get big government's hands off of our corporations" alongside "...but get corporations' hands off of our data." Might your framework provide a way into resolving that tension?

Yes. One paradox of the American situation is that recent Supreme Court cases have extended the fundamental right of freedom of speech, under the First Amendment, to corporations themselves. This is extremely controversial, of course. If corporations write code to gather data, under American law they are entitled to argue that they're exercising their right to freedom of speech in gathering data on human individuals—which is, from most points of view, quite paradoxical.

But at the same time, until very recently, the kind of surveillance that we've been much more focused on has been state surveillance. There's a very good reason: until the end of the ‘80s, only states had this power to track people continuously; corporations didn't have it, and they certainly didn't want it. Now, corporations want it, and they have it, and they have it much more effectively than states do. 

That was one of the paradoxes of the Snowden revelations. The scandal was about the data that the NSA took from all the corporate platforms—Google, Facebook, and others—which have, of course, been collecting it continuously. But the scandal was about the state getting its hands on that data, not about the fact that it's being collected continuously by the corporations—which is partly missing the point of what's scandalous about this.

Will you be teaching an Enhancing Life Studies course on this topic? 

Yes, a course on this topic just got approved. This is a big and difficult topic that students find very exciting. But it'll take time to fully develop that course—it's not the sort of thing you can just take off the shelf. These are big, new issues that students have to be introduced to and allowed to approach in an effective way.

What are some ways that Enhancing Life Studies as a discipline has provided a framework for designing the course? 

It's not normal in sociology, or at least it's not very common, to introduce a normative perspective. In fact, some sociologists think that it's actually wrong to do that. Here, given the nature of the topic, it's essential. Unless you start to think normatively, you just accept that it's convenient the way things are, and so on, and so forth—you need ethics to interrupt what's going on. The Enhancing Life Project encourages and authorizes me, as a sociologist, to add in that normative perspective, which many sociologists would say isn't appropriate.

What are some of the main insights you hope your students will glean from the course? And given the addition of a normative ethical standard, how do you envision your students coming to see such a framework as the appropriate way into this particular question?

The first thing they'll learn is that it's possible to look at what seems inevitable, what seems to just be going on anyway, what you "can't do anything about," and to take an academic, intellectual perspective on it by studying the history of how these practices evolved—the long-term development of media institutions, and power, and so on.

The second thing they'll learn is that, as these systems evolve, they are used not just to gather data, but to try to act on the law, to change the law, and to categorize people. Forms of gathering data are part of the social order: surely there's an implied power question there. If students are thinking about power in any way, they should be thinking about the long-term consequences of these practices, even if on the face of it they feel like they've "got nothing to hide." They'll gain a power perspective so they can take a broader view of the long-term consequences of this.

The third thing is attaching a long-term historical approach to a contemporary perspective. This goes beyond a purely descriptive approach—“this is what's going on in this or that society”—and takes the condition of life for human beings and compares it with a much broader, less contextual account of how human beings need to live to be fully human. That's hopefully where students will see that the ethical approach gives them a way to strengthen their other forms of doubt about these practices. That's where the link-up with legal theory comes in—legal theory combines an underlying ethical approach with, of course, a very practical approach of how the world is being organized now, and how it might be organized and regulated differently. I think it's a very good applied form of ethics. 

I can imagine the students will bring a range of ethical perspectives to the course. There are some people who celebrate all of this, and say that we're developing into a sort of collective mind, that we don't care about ourselves as individual entities—that what we care about is the collective intelligence.

I think this is a pure myth that ignores all the things that are fundamental to Western civilization, and to most civilizations. It's an important myth, though, and I think it'd be very good to have a direct argument with students, who have been told about how we're apparently "developing." Kevin Kelly, one of the founding editors of Wired magazine, argues that this data collection is literally "enhancing life." I fundamentally disagree with that—and disagreeing with it is the ethical starting point of this whole project.

An Enhancing Life approach is directly relevant to understanding and hearing what business leaders are saying about the changes that they obviously benefit from. I think that students will find at the end of the term that the ethical approach, the Enhancing Life approach, is directly useful in thinking about practical changes going on in the world.