The Privacy Shield – dead on arrival

Five months after the ECJ ruled that the Safe Harbor agreement was invalid, the EU Commission has presented a new “Privacy Shield” that will replace the old agreement.

The Privacy Shield does contain some improvements regarding the rights of European citizens. It does not, however, fundamentally change the national security exception which brought down the agreement in the first place.

Recital 55 of the draft agreement reads as follows:

The Commission’s analysis shows that U.S. law contains clear limitations on the access and use of personal data transferred under the EU-U.S. Privacy Shield for national security purposes as well as oversight and redress mechanisms that provide sufficient safeguards for those data to be effectively protected against unlawful interference and the risk of abuse.

The Commission refers to Presidential Policy Directive 28 (“PPD-28”) regarding limitations on signals intelligence, issued by President Obama on January 17, 2014. PPD-28 extends to non-US citizens the same level of protection that US citizens enjoy:

Sec. 4 Safeguarding Personal Information Collected Through Signals Intelligence

All persons should be treated with dignity and respect, regardless of their nationality or wherever they might reside, and all persons have legitimate privacy interests in the handling of their personal information.

That being said, a presidential policy directive may be overturned by any future president, as it is a policy document, not an amendment to existing law (which permits the surveillance of non-US nationals; see FISAAA 2008, Section 702). If you have the time, see the late Caspar Bowden’s excellent presentation on why agreements such as the Privacy Shield are doomed to fail.

Even if PPD-28 were allowed to stay in force AND were actually respected, it still endorses the bulk collection of data:

Sec. 2 Limitations on the Use of Signals Intelligence Collected in Bulk
Locating new or emerging threats and other vital national security information is difficult, as such information is often hidden within the large and complex system of modern global communications. The United States must consequently collect signals intelligence in bulk.

Granted, the PPD-28 states that bulk collection may be used only for detecting and countering (1) espionage, (2) terrorism, (3) weapons of mass destruction, (4) cybersecurity threats, (5) threats to US armed forces and their allies, and (6) transnational criminal threats. However, by its very definition, bulk collection means that all data is retained and accessible to the intelligence community, and there is no effective oversight of how that data is used. Let me cite Edward Snowden on what this boils down to in practice:

“In the course of their daily work they stumble across something that is completely unrelated to their work, for example an intimate nude photo of someone in a sexually compromising situation but they’re extremely attractive,” he said. “So what do they do? They turn around in their chair and they show a co-worker. And their co-worker says: ‘Oh, hey, that’s great. Send that to Bill down the way.’” (New York Times, July 20, 2014)


Data Power conference (June 22-23), part 1: Disconnect

I recently attended and presented at “Data Power”, which turned out to be an excellent conference organized by the University of Sheffield. The conference had called upon academics to submit papers approaching the question of big data from a societal (& critical) perspective. That being said, the papers were more often than not empirically grounded, and the presenters refrained from adopting a conspiratorial mindset, as can sometimes be the case when discussing big data.

Here are some of the key points that I picked up from attending the different panels:

Disconnect & Resignation / tradeoff fallacy

Stefan Larsson (Lund University) and Mark Andrejevic (Pomona College) both stressed that there is a disconnect between commercial claims that people happily trade their privacy for discounts and services and how people actually feel. In reality, people feel that they are “forced or bribed” into giving up their data in order to access a service. Joseph Turow, Michael Hennessy and Nora Draper have recently published a survey on what they call the “tradeoff fallacy”, which supports the disconnect and resignation hypothesis put forth by Larsson and Andrejevic.

Access rights are rarely respected

Clive Norris (University of Sheffield) and Xavier L’Hoiry (University of Leeds) had investigated whether companies and the public sector (data controllers) actually respect people’s right to access their own data under current data protection legislation. Turns out, they don’t:

• “20 % of data controllers cannot be identified before submitting an access request;
• 43 % of requests did not obtain access to personal data;
• 56 % of requests could not get adequate information regarding third party data sharing;
• 71 % of requests did not get adequate information regarding automated decision making processes.”

Instead, the controllers consulted applied what Norris & L’Hoiry call “discourses of denial”: questioning the rights themselves (we do not recognize them), falsely claiming that only law enforcement would have access to this data, or even suggesting that the researchers were insane to make such a request (why would you possibly want this information?). The most common response was, however, none at all. Deafening silence is an effective way to tackle unpopular requests.

Self-management of data is not a workable solution

Jonathan Obar (University of Ontario Institute of Technology & Michigan State University) argued that data privacy cannot plausibly be protected through individuals auditing how companies and officials use their personal data, calling this approach a “romantic fallacy”.

Even if data controllers respected the so-called ARCO rights (access to data, rectification of data, cancellation of data & objection to data processing), it would be far too difficult and time-consuming for regular citizens to manage their own data. Rather, Obar suggests that either data protection authorities (DPAs) or private companies could oversee how our data is used, a form of representative data management. The problem with this solution is, of course, the significant resources it would require.
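
To get a feel for the scale of the problem, here is a rough back-of-envelope sketch. Every number in it is my own illustrative assumption, not a figure from Obar’s work:

```python
# Back-of-envelope: the time cost of one person auditing their own data.
# All numbers are illustrative assumptions, not figures from Obar's paper.
controllers = 50          # assumed: data controllers holding data on one person
minutes_per_request = 30  # assumed: drafting and sending one access request
minutes_per_review = 45   # assumed: reading and verifying each response

total_hours = controllers * (minutes_per_request + minutes_per_review) / 60
print(f"Auditing {controllers} controllers once takes ~{total_hours:.0f} hours")
# Roughly 62 hours for a single pass, before any rectification,
# cancellation or objection follow-ups.
```

Even under these modest assumptions, a single audit pass is more than a week of full-time work, which is why Obar leans towards representative data management instead.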

There is no such thing as informed consent in a big data environment

Mark Andrejevic emphasized that data protection regulation and big data practice are based on opposing principles: big data on data maximization and data protection on data minimization. The notion of relevance does not work as a limiting factor for collecting data, since the relevance of data is only determined afterwards, by aggregating data and looking for correlations. This makes informed consent increasingly difficult: what are we consenting to if we do not know the applications of the collection?
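
To make this concrete, here is a minimal sketch of the “collect first, decide relevance later” pattern. All field names and values are invented for illustration:

```python
# Sketch: relevance is determined only after collection, by scanning
# every pair of retained attributes for whatever correlations emerge.
import itertools
import statistics

# Data collected without a predefined purpose: every attribute is kept.
records = [
    {"age": 34, "night_logins": 12, "purchases": 5, "support_tickets": 1},
    {"age": 51, "night_logins": 2,  "purchases": 9, "support_tickets": 0},
    {"age": 23, "night_logins": 20, "purchases": 2, "support_tickets": 4},
    {"age": 45, "night_logins": 4,  "purchases": 7, "support_tickets": 1},
]

def correlation(xs, ys):
    """Pearson correlation of two equally long numeric series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Only now, after the fact, does any field become "relevant".
for a, b in itertools.combinations(records[0].keys(), 2):
    r = correlation([rec[a] for rec in records], [rec[b] for rec in records])
    print(f"{a} vs {b}: r = {r:+.2f}")
```

No field was collected because it answered a stated question; any field may turn out to “matter” later, which is exactly what makes consent at collection time so hollow.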

Is sensitive personal information becoming desensitized?

With all the hype surrounding big data, it should come as no surprise that people are worried about how their data is used.

[Figure: “How concerned are you about the unnecessary disclosure of personal information?” Source: Eurobarometer 359, p. 59.]

Some data points have always been part of the modern bureaucratic state (see Giddens, 1985, for example). These are usually referred to as objective data points, which indicate facts such as births, deaths, age and income. Many of these data points are simply necessary to run a well-functioning state apparatus.

The reason why smartphones and social media are changing our relation to personal data is that subjective data points, such as what people read, what they search for, and who they secretly stalk on Facebook, are much easier to come by than before. What’s more, people readily submit information about themselves on social networks that used to be hidden deep beneath the surface. There is a huge gap between what data protection officials think is sensitive information and what’s actually happening.

For example, the UK’s Information Commissioner’s Office defines sensitive personal data as information on a) the racial or ethnic origin of a person, b) his/her political opinions or religious beliefs, and c) his/her sexual life, among other things.

Facebook sees your sexual preferences, religious beliefs and political views as “basic info”. Not even “details about you”, but “basic”; information which is, undeniably, potentially very sensitive in many parts of the world.

The gist is that while marketers and companies are hoping to gather more and more sensitive information on potential customers, they really, REALLY don’t want their customer databases defined as collections of “sensitive data”. Because when that happens, you are suddenly forced to follow strict rules regarding what you can do with the information. Funnily enough, the best way to avoid that is not to refrain from collecting sensitive information, but to claim that the information you have gathered is not about “real, identifiable people” but just “profiles”.
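
A minimal sketch shows why the “just profiles” defence is thin: a profile keyed to a salted hash of an e-mail address is pseudonymous, not anonymous, since anyone holding the salt and a list of candidate addresses can recompute the key. The scheme and all names below are hypothetical, not any vendor’s actual practice:

```python
# Sketch: a "profile" keyed by a salted hash is pseudonymous, not anonymous.
# Hypothetical scheme for illustration only.
import hashlib

SALT = b"internal-salt"  # a constant salt means the same person always
                         # maps to the same profile ID

def profile_id(email: str) -> str:
    """Derive a stable pseudonymous ID from a direct identifier."""
    return hashlib.sha256(SALT + email.lower().encode()).hexdigest()[:16]

# The marketer's database stores "profiles", not names...
profiles = {
    profile_id("alice@example.com"): {"politics": "left", "religion": "none"},
}

# ...but anyone with the salt and a candidate address re-identifies it.
candidate = "alice@example.com"
if profile_id(candidate) in profiles:
    print("Re-identified:", candidate, "->", profiles[profile_id(candidate)])
```

The “profile” stores exactly the categories the ICO calls sensitive, while the hashing step provides a plausible claim that no identifiable person is involved.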

Giddens, Anthony (1985): The Nation-State and Violence: Volume Two of a Contemporary Critique of Historical Materialism. Cambridge: Polity Press.

Eurobarometer (2011): Special Eurobarometer 359: Attitudes on Data Protection and Electronic Identity in the European Union. Brussels: European Commission.