
Book extract: Surveillance in the post-Snowden era – Part I

Victoria University media studies lecturer Kathleen M Kuehn explains why concerns about privacy are distorting the debate on the role of surveillance.

Kathleen Kuehn
Tue, 03 Jan 2017


In this chapter, "Surveillance Post-Snowden," Victoria University media studies lecturer Kathleen M Kuehn explains why concerns about privacy are distorting the debate on the role of surveillance.

What does it mean to speak of surveillance in a post-Snowden world? How can we take what we now know to better understand where we might be heading? It has been three years since the files were made public, and many of the programs revealed in them date from the previous decade. There have been no major surveillance leaks since then, and little sign of more transparency on the part of intelligence agencies, so we can only continue to speculate how mass surveillance is evolving.

But if the files have confirmed anything, it is the value of our smartphones, iPads, laptops, watches, body trackers and smart home devices for the monitoring and control of populations. And the industries behind these devices are growing. The ‘big data’ movement is only beginning to take root in New Zealand, but there is little evidence that surveillance-based technologies, societies or cultures are slowing down.

Much like surveillance itself, big data is not fundamentally good or bad. Consider its productive uses in disaster management, early intervention programs, stock market valuation or environmental risk assessment. Data-driven decisions are not in themselves sinister, but their capacity for misuse is vast. Some worry big data extends the scope of surveillance ‘by co-opting individuals into participating in the surveillance of their own private lives’.

As privacy scholar Helen Nissenbaum says, ‘We are complicit in an invasion of our own privacy that ultimately we find objectionable.’ Yet the more useful big data is perceived to be for governments and corporations, producers and consumers, the harder it is to resist.

The growth of public–private partnerships in New Zealand and elsewhere reflects the ‘insatiable appetite for our data’, as security expert Bruce Schneier puts it. Some of these alliances have deep histories. But regional developments between local authorities and commercial firms looking to harness big datasets are relatively new. And many of them are aimed at expanding surveillance in unprecedented ways.

One such example is Auckland Transport’s 2014 contract with US computer firm Hewlett-Packard Development Company to upgrade the city’s high-definition CCTV with facial recognition software, for the purpose of monitoring ‘traffic flows, vandalism and safety’. It involves an assemblage of flows from 800 security and traffic cameras, road and environmental sensors and ‘real-time social media and news feeds’, with all content processed by HP cloud servers in California. This visionary project blurs public and private across international borders in significant ways.

Wellington has already tested sensor-equipped cameras on Cuba St in order to ‘detect screaming, smell paint fumes from graffiti and sense people in groups who may end up in fights’. The project’s second phase extended data collection to foot traffic around the city by using wi-fi trackers to anonymously follow people’s movements. The public–private partnership is a collaboration between the city council and the Japanese company NEC, a global tech firm that offers ‘solutions for society’. 
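
It is worth pausing on what ‘anonymously’ means here. NEC has not published the Wellington system’s design, but a common approach in commercial wi-fi analytics is to replace each device’s hardware (MAC) address with a salted hash, so repeat visits can be counted without storing the raw identifier. The following is a minimal sketch only, in Python, using invented data and an invented salt scheme rather than anything NEC has disclosed:

```python
import hashlib

# Invented salt; rotating it (e.g. daily) would limit how long a
# pseudonym stays linkable across time.
DAILY_SALT = b"2016-09-14-citywide"

def pseudonymise(mac_address: str) -> str:
    """Replace a raw MAC address with a truncated, salted SHA-256 digest."""
    return hashlib.sha256(DAILY_SALT + mac_address.encode()).hexdigest()[:16]

# Probe requests a street sensor might log over one day (invented data).
observations = [
    ("a4:5e:60:d1:22:01", "Cuba St", "08:14"),
    ("a4:5e:60:d1:22:01", "Courtenay Pl", "12:40"),
    ("f0:99:b6:03:8c:7e", "Cuba St", "08:15"),
]

# Only the pseudonym is stored, yet movement between sensors remains
# reconstructable by anyone holding the same salt.
for mac, place, time in observations:
    print(pseudonymise(mac), place, time)
```

The catch is that ‘anonymised’ in practice usually means pseudonymised: the same device yields the same token at every sensor, which is exactly what makes the foot-traffic analysis possible.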

The data collected not only help the council to make decisions about improving city resources, infrastructure and security, but can also be sold to third parties. Potential clients – for instance, in tourism, retail and civil defence – can then use the data in their planning. These kinds of contracts allow private firms to identify ‘problems’ for which they also provide the solutions.

Sensor technologies
Sensor-based networks are a key part of the post-Snowden world. Anonymised data are indiscriminately collected from seemingly banal activities, such as walking down the street or queuing for something. Just as the GCSB and NSA do, private companies and public agencies then use them to manage, influence and control our behaviour. More and more sensors are being installed in homes, businesses and public spaces in order to monitor such things as water use, traffic, air quality and the movement of buildings during earthquakes, or to prevent crime. These initiatives all depend largely on the ‘dataveillance’ of personal and non-personal information.

The ‘Sensing City’ initiative in Christchurch is another good example. By one description, ‘the premise is to use sensors to measure as many variables as possible such as traffic flow, footfall, cellphone traffic, noise, luminosity, temperature, energy use, and water consumption. It will also bring together datasets about the city from a range of sources that are currently held separately’.

As data producers, we actively contribute to these initiatives. In some cases, we also do the watching. Wellington City Council depends on community volunteers to monitor the CCTV cameras it operates in partnership with the New Zealand Police. ‘Crowdsourcing’ surveillance may seem strange to some, but it’s increasingly the norm.

Surveillance thus becomes ubiquitous, so that to live is to be surveilled. It is no longer just the business of governments; such a view ‘now appears too restricted in a society where both state and non-state institutions are involved in massive efforts to monitor different populations’. For many ordinary people, consumer surveillance arguably has far more impact on their daily lives than its state counterpart. It not only draws on many of the same datasets as the state; it also applies, and arguably hones more effectively, the analytical techniques that turn discrete data into information and intelligence. And for both governments and private entities, the final ‘product’ depends on predictive models that, for better or worse, influence an individual’s range of options.

As Google CEO Eric Schmidt said in 2010, ‘We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.’ If Google can do that through web browsing behaviour, then so can agents of the state. The turn towards digitally enabled sensor networks expands this potential exponentially. That such initiatives are often publicly funded raises a whole range of questions about privacy, governance and social control.

Data doubles
The process of categorising and sorting individuals in order to make decisions about them is the primary governing logic of both state and consumer surveillance. The Canadian scholars Kevin Haggerty and Richard Ericson describe this process as the reassembly of individuals into ‘data doubles’. Human lives are broken down into discrete data flows, taken from their original context and then put back together.

Evaluations are then made on the basis, not of the corporeal human being, but of the decontextualised data double. Privacy debates around mass surveillance often seem to begin and end with the claim that people have nothing to fear if they have nothing to hide. But who knows how data doubles are assembled? Such disembodied creations can only ever be a partial, distorted view of a person, because details are inevitably (or purposely) omitted, neutralised, outdated or wrong.
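
A toy illustration makes the abstraction concrete. If the data double is a record stitched together from unrelated flows, the distortion is easy to see; the sketch below uses entirely invented datasets and field names:

```python
# Three decontextualised data flows about the same person (all invented).
transport = {"id": "NZ-4471", "late_night_trips": 9}
purchases = {"id": "NZ-4471", "recent_buys": ["prepaid SIM", "one-way ticket"]}
health = {"id": "NZ-4471", "address": "14 Aro St"}  # stale: never updated

# The 'data double' is simply whatever the join produces.
data_double = {**transport, **purchases, **health}
print(data_double)

# Evaluations run against this record, not the person. The stale address
# survives the merge; the context (shift work, a family funeral) was
# never collected in the first place.
```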

Social sorting as social control
Although mass surveillance aims to capture everyone, not all people are always equally assessed. Sorting and classification are a central part of the analytical process; but to classify is also to exclude. Surveillance is always disproportionately applied to some individuals and groups over others.

Surveillance scholar Oscar Gandy has done extensive work on the ‘social sorting’ practices based on predictive models. He defines social sorting as the identification, classification and evaluation of persons to determine whether and how we relate to them. It is based on probabilities – the assumption that past behaviour is a guide to the likelihood of the same thing happening again. The aim, in other words, is to identify behaviour before it happens; it is not about who people are or what they do, but what their data doubles suggest they will be and do at some time in the immediate or long-term future.

One of Gandy’s main concerns is the way in which social sorting can limit life chances, particularly when it involves racial or ethnic categories, religion, class, gender or other social identifiers that may create or reinforce stigma by defining people as different or ‘other’ (e.g., dangerous, irresponsible). These problems are magnified when aspects of one’s identity are correlated with datasets on violence, crime, health or addiction, for example. The result here is the reproduction of existing stigmas and stereotypes, coupled with disproportionate policing and surveillance.

On a basic level, social sorting manifests as racial profiling. It explains the seven-fold increase in the number of Asian people stopped and searched by the British transport police after the July 2005 London bombings (none of these searches led to terrorism charges). To cite another example, the Five Eyes agencies may ‘indiscriminately’ hoover up everyone’s personal data equally, but at the analytical phase their practices are quite discriminating. The Snowden archives show that a good deal of spying is done with an ethnic or religious bias. The ‘FISA recap’ file, for instance, identifies systemic prejudice in the surveillance of Muslim-Americans. Between 2002 and 2008 US intelligence agencies monitored nearly 7500 email addresses of individuals with suspected links to terrorist organisations; many on the list were Muslim-American citizens. They included two well-respected university professors, a long-time Republican Party official, a lawyer (whose clients include defendants involved in terrorism-related cases) and a civil rights activist.

Discrimination against Muslims
In New Zealand, there’s similar discrimination in the monitoring of Muslim communities, which are disproportionately targeted compared to other social and religious groups. The deportation of Rayed Mohammed Abdullah Ali to Saudi Arabia in 2006 is a specific example of how the correlation of certain data flows informs the state’s actuarial approach to threat assessment. The government designated Abdullah a national security threat because of his personal associations with Hani Hanjour, one of the presumed 9/11 hijackers. Abdullah had lived and trained with Hanjour in the US before living in New Zealand, had appeared in the 9/11 Commission report and had been training as a pilot himself in New Zealand. Reports also referenced his leadership at a US Islamic cultural centre as evidence of risk. In this case, the assessment of threat was based on personal associations (social networks), cultural affiliations (Islamic centre), and hobbies and interests (flying). Decisions based on probabilities are not always about being right or wrong, but about the odds of avoiding risk.

There is certainly a case to be made that the problem of social sorting lies not with the techniques themselves, but with the ways in which they are used or misused. But even beneficial uses can have unintended consequences. The New Zealand Data Futures Forum, which typically celebrates big data’s problem-solving potential, has itself noted the scope for misuse. Its case study refers to an unnamed New Zealand firm that constructs ‘individual household profiles’ from big datasets:

This information is sold to insurance and other companies who then use it to sell life insurance to people. Or, in case you have been identified as a high-risk person, to try to avoid selling life insurance to you or (for example) people from your ethnicity. This is a case in which your personal data is potentially being used against you.

Unintended consequences
Another example of unintended consequences came in 2015, when the Ministry of Social Development sought to quantify ‘the risk of a newborn baby experiencing substantiated child abuse by the time the child turns five’. The trial model combined 132 variables from different government agencies including demographics, socio-economic status and parental history of child abuse. Children with a high-risk score would ‘trigger a voluntary, targeted response with the aim of preventing child maltreatment’. The ministry did not, however, adopt the program, partly because of concerns about specific ethnic groups and people receiving benefits being stigmatised.
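
The ministry’s model itself is not reproduced here, but the mechanics of such actuarial tools are standard: weighted variables are combined into a probability, and a threshold triggers intervention. A minimal sketch, with invented weights, variables and threshold rather than the ministry’s own:

```python
import math

# Invented weights for three of the 132 variables; the real model's
# coefficients and variable list are not shown here.
WEIGHTS = {"parent_abuse_history": 1.9, "benefit_receipt": 0.6, "young_parent": 0.4}
INTERCEPT = -3.0
TRIGGER = 0.30  # hypothetical threshold for the 'voluntary, targeted response'

def risk_score(case: dict) -> float:
    """Combine binary indicators into a probability via a logistic model."""
    z = INTERCEPT + sum(w * case.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

case = {"parent_abuse_history": 1, "benefit_receipt": 1, "young_parent": 0}
p = risk_score(case)
print(f"predicted risk {p:.2f}:", "flagged" if p >= TRIGGER else "not flagged")
```

Note that in a model like this, benefit receipt shifts the score upward for every beneficiary household regardless of conduct, which is precisely the stigmatisation concern the ministry cited.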

To be fair, many of the programs that depend on the mass collection of personal data are designed to rationalise governance, increase efficiencies and/or boost the economy, not to intentionally discriminate or harm. But as Haggerty and Ericson note, this is the point of data doubles. ‘Rather than being accurate or inaccurate portrayals of real individuals, they are a form of pragmatics: differentiated according to how useful they are in allowing institutions to make discriminations among populations.’

In short, social sorting practices are embedded in existing ideological and pragmatic approaches to modern governance, not anomalous to them. And the cost-cutting measures that legitimise linking sensitive data coordinates like race, ethnicity or gender in predictive models can result in unanticipated forms of bias, which are fundamentally anti-democratic.

Certainly data protection laws and due process are designed to regulate the power to scrutinise or make discriminatory assumptions about people. Particularly in New Zealand, such laws are intended to limit what governments, private organisations or even other people are permitted to know and ‘to prevent certain forms of unwarranted social exclusion’. But discrimination on the basis of who we are – or rather, who our data doubles say we will be – happens nonetheless. And because the categories of suspicion change over time, we have no way of knowing how what we do now will be reflected in our data double later. Individual privacy protections may no longer be the most effective means of ensuring democratic freedom in the post-Snowden era.

Privacy post-Snowden
Social sorting is one of the many aspects of mass surveillance that raise questions about democratic rights and liberties. It tends to be subordinated to generalised discourses of privacy, however, rather than being treated as discriminatory in its own right. While the Snowden revelations reinvigorated global debates about the privacy/security tradeoff, I want to suggest that the era of ubiquitous surveillance demands a rethink of the way those debates are configured. If any meaningful consensus on the function of surveillance in ‘free’ societies is to be achieved, new conversations are desperately needed.

But first it is worth briefly reviewing how privacy is usually discussed. Western liberal societies have historically regarded it as a basic human right – a presumed requisite to democracy, and fundamental to other liberties such as free speech and property rights. Defined in Warren and Brandeis’s landmark 1890 law review article as ‘the right to be let alone’, it has long been used to protect personal, sensitive or intimate information.

Normative conceptualisations of privacy, such as those put forth by governments, policymakers and private businesses, typically define it as an individual rather than social or collective value. In this view, as Swiss cultural scholar Felix Stalder explains, ‘Privacy is a personal space; space under the exclusive control of the individual.’ It is that which protects the individual’s intimate realm (be it the home, personal property, family, sexuality, etc.) from interference by the government or other people.

Other theories of privacy emphasise its importance to one’s identity and security, be it emotional, mental, psychological or physical. Being able to protect the self from the scrutiny of others helps individuals to live freely and on their own terms. This is especially true in liberal democracies, where they must be able to enact their beliefs and values without fear of judgment, in order to participate fully as citizens in society.

Schneier, Glenn Greenwald, Nicky Hager and even Snowden himself have all pointed to the different ways in which state surveillance can have a chilling effect on the right to live freely and openly. As Snowden says, people should be able to call or send text messages to their loved ones, travel by train, buy a book online, and purchase an airline ticket ‘without wondering about how these events are going to look to an agent of the government … how they’re going to be misinterpreted and what they’re going to think your intentions were’.

A 2014 study by PEN American Center found that journalists and writers from more than 50 countries reported an increase in self-censorship for fear of government surveillance since the Snowden revelations. Participants said they now avoided writing, speaking or emailing about particular topics, and were curtailing certain social media activities or web searches that might raise suspicion. The same chilling effect can deter potential whistle-blowers or hobble a free press. A fearful Fourth Estate is a particularly troubling prospect in a social and political climate that arguably demands more transparency than ever.

Chilling effect
When privacy diminishes, individuals often respond by conforming to social norms or expectations. Speaking to this point, Judith Wagner DeCew takes a broader view of privacy, arguing that it is what ensures a space for social life to unfold. Acting as a ‘shield’, the value of privacy ‘lies in the freedom and independence it provides for us’. This might refer to free speech but also the freedom to associate with certain people or organisations, mobility, assembly and the actualisation of other fundamental democratic rights and behaviours.

Examples of the way that surveillance erodes the ‘shield’ are pervasive. ‘Signal’ – a tool that monitors social media activity for the purposes of ‘crime prevention and public safety’ – is used by the New Zealand Police to thwart mass gatherings before they happen, including legitimate public protests. Police in the US have also reportedly used intercept technologies to monitor pro-Palestine and anti-war activists, NATO demonstrators and the Black Lives Matter and Occupy movements. (Notably, these examples all illustrate the way military-grade mass surveillance technologies ‘creep’ into the local level for policing purposes, and the way the surveillant assemblage merges policing and intelligence work).

Surveillance, then, is a social process – so an individualised approach to privacy is inadequate in the era of mass surveillance. As Stalder says, ‘it applies a 19th-century conceptual framework to a 21st-century problem.’

Turning towards a social definition of privacy, we see that it is fundamentally about relationships. Charles Fried and Robert Gerstein theorise it as providing the necessary context for love, friendship and trust. Human relationships are defined by levels of disclosure; what we share with our doctor or work colleagues differs from what we share with romantic partners or family members. The right to maintain some agency over what, when and to whom we as individuals choose to disclose information is as central to social relations as it is to personal autonomy.

Framing privacy as an individual issue ignores many of the social and structural aspects of contemporary surveillance. It is not only a matter of individual responsibility when you are taken aside by airport security for special screening but cannot know why. Maybe a random-number generator picked you out of a database. Maybe the employees were simply bored. Maybe you are on a security watchlist. The problem is, as Stalder says, ‘you don’t know whether this kind of discrimination is taking place, and have no way of fighting against it’.

'Informed consent'
To take a social approach to privacy is to challenge many of the ways that the agents of surveillance ‘do business’. Consider the notion of ‘informed consent’. Data holders often say they fulfil their privacy obligations by means of the lengthy ‘terms of service’ agreements that individuals have to tick. Most of these policies involve fairly standardised explanations about data collection making services more ‘efficient’ and ‘relevant’ (usually for targeted ads or other content). In order to gain access, users must give their binding, contractual consent, regardless of whether they like, agree with or even understand such policies.

But as two US researchers show, most people aren’t even reading these policies. An experiment on privacy policy and terms of service reading behaviour found that 74 per cent of users skipped these policies when signing up for a fictitious social media site. Others spent around 1 minute reading what should take an average reader 30 minutes. What’s more, 98 per cent of the participants missed fake ‘gotcha’ clauses embedded in the policies they consented to. These included the right to share their data with the NSA and employers, and agreeing to give up their first-born child for access to the social network.

Yet data privacy laws are based on the assumption that every user not only reads but understands and makes a rational, informed choice about the types of collection to which they submit. To expect such ‘informational self-determination’ is unreasonable in a surveillance society. None of us can anticipate that filling out a census form, obtaining a RealMe ID or filing for divorce might someday interfere with a parent’s right to raise her child because she is deemed at risk of maltreating it. That’s too high a cognitive load for anyone to reasonably manage.

While New Zealand’s data privacy laws offer far more legal and social protection than US or UK laws, they do little to hold data-based control systems to account. The Privacy Act 1993, for instance, limits the power to collect, store and use personal information but it cannot completely control how datasets go on to function. The kinds of public–private partnerships and big-data programs discussed in this book typically rely upon anonymised data and automated gathering techniques – and, as noted elsewhere, data need not be personally identifiable for a person to be re-identified. Personal data privacy laws also cannot address the structural underpinnings of an increasingly privatised public sphere that has more control over our movements than ever before.
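
The re-identification point bears a concrete illustration. Classic demonstrations, such as Latanya Sweeney’s linkage of ‘anonymised’ US hospital records to voter rolls, show that a few quasi-identifiers are often enough. A minimal sketch, using entirely hypothetical records:

```python
# An 'anonymised' dataset: names stripped, quasi-identifiers kept (invented).
health_records = [
    {"postcode": "6021", "birth_year": 1972, "sex": "F", "diagnosis": "depression"},
    {"postcode": "6011", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# A public dataset, such as an electoral roll (equally invented).
public_roll = [
    {"name": "J. Example", "postcode": "6021", "birth_year": 1972, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

# Linkage attack: join the two datasets on the quasi-identifiers. If the
# combination is unique, the 'anonymous' row is re-identified.
for record in health_records:
    for person in public_roll:
        if all(record[k] == person[k] for k in QUASI_IDENTIFIERS):
            print(person["name"], "->", record["diagnosis"])
```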

Rather than seeing contemporary surveillance as a series of individual privacy invasions, then, ‘we have to see them as part of a new landscape of social power’. As the surveillant assemblage transcends institutional boundaries, tracing privacy breaches back to a single bureaucracy, institution or even dataset is a near impossible task. The Snowden documents make this exceptionally clear. Restricting the surveillance capacity of one actor, agency or even technology does little to stop the functioning of others. The situation is also complicated by cases where ‘systems intended to serve one purpose find other uses’, as they often do. Again, these are all areas where reliance on data privacy laws alone is not enough.

The nature of the assemblage makes it far too difficult to know which data streams are brought together, when, how, in what formation and for what purposes. Instead of fighting these assemblages, which can have equally useful, ambivalent or harmful outcomes, Stalder argues that we should ‘reconceptualise what these connections do’. This begins by focusing on the structure of social power:

Rather than continuing on the defensive, by trying to maintain an ever-weakening illusion of privacy, we have to shift to the offensive and start demanding accountability of those whose power is enhanced by the new connections. In a democracy, political power is, at least ideally, tamed by making the government accountable to those who are governed, not by carving out areas in which the law doesn’t apply.

Ideally, this means holding not only state power to account, but corporate power as well. Another suggestion Stalder makes is moving beyond ‘privacy’ altogether.

Read Part II.

© Extracted with permission from Chapter 4 of The Post-Snowden Era: Mass Surveillance and Privacy in New Zealand, by Kathleen Kuehn, published by Bridget Williams Books, Wellington, 2016.
