Big data and the promise of bureaucratic efficiency

One of the fundamental questions of my PhD thesis has been how to conceptualize privacy and surveillance in a way that not only describes the society we live in, but also explains why the current information society, with its fetishization of data, looks the way it does. I have looked to various theories of surveillance and to socio-legal conceptualizations of information privacy to address this question, but I was never really satisfied with the answers.

Michel Foucault’s panopticon deals with the psychological effects of being under visible surveillance, yet it does not adequately explain life in the era of databases and electronic surveillance. Philosopher Manuel DeLanda’s excellent War in the Age of Intelligent Machines (1991) addresses the intelligence community’s perverse data collection logic, but does not really expand on the political economy of surveillance. Oscar Gandy does a better job at that, but descriptions and theories based on the US context are not directly applicable in Europe.

Socio-legal theories and some communication research address how people perceive privacy, but it is increasingly difficult to connect ideal notions of privacy to what is actually happening in the world, and the gap between norms of privacy, data practices, and laws of privacy is growing ever wider.

During the past two years I’ve delved into the legislative process of the new data protection law in the EU, the General Data Protection Regulation, which will apply from May 2018. One of my earliest observations was the inaccessibility of the language and the complexity of a document that addresses a very basic human need: being able to choose when one is out of sight. Instead, the end result is an intricate web of rules and exceptions to the collection of personal information, with only vague references to actual perceptions of privacy.

After reading David Graeber’s Utopia of Rules I came to an insight that had previously existed only as a side note in my conceptualization of surveillance societies: the role of bureaucracies. Rather than thinking of data collection as an element of discipline in the Foucauldian sense, I started to think of data collection as part of the bureaucratic system’s inherent logic that is independent from the actual power of surveillance.

The utopian promise of big data is not that of control but of efficiency. The present logic of data maximization defies traditional ideals of data minimization according to which data can only be processed for a specific purpose. The collection of data points is such an essential part of modern bureaucracies, private and public alike, that its role in society is treated as a given. This is why attitudes to data collection and privacy are not divided along the public/private or even the left/right spectra but rather along the lines of strange bedfellows such as anarchism and libertarianism versus socialism and fascism. The goals are of course very different, but the means are similar.

By seeing questions of privacy and surveillance through this lens, the GDPR’s legislative process started to make more sense to me. The discourses employed by corporate and public lobbyists were not really about control over information flows, nor were they about disciplinary power. They were about the promise of bureaucratic efficiency.


ECJ invalidates the Safe Harbour agreement: will all data transfers to the US stop?

Map from http://www.submarinecablemap.com/

Following the recommendation of Advocate General Yves Bot, the ECJ ruled today that the Safe Harbour agreement is invalid:

the Court declares the Safe Harbour Decision invalid. This judgment has the consequence that the Irish supervisory authority is required to examine Mr Schrems’ complaint with all due diligence and, at the conclusion of its investigation, is to decide whether, pursuant to the directive, transfer of the data of Facebook’s European subscribers to the United States should be suspended on the ground that that country does not afford an adequate level of protection of personal data.

The full judgment is available here.

This means, first of all, that national Data Protection Authorities (DPAs) now have the power to decide whether data transfers are legitimate. The court’s decision will thus not stop all transfers to the US; it simply means that national DPAs may block transfers if they see fit, as they are no longer bound by the Safe Harbour agreement.

The Safe Harbour agreement did not fall because it was a self-regulatory instrument with a long history of compliance issues. It fell because US public authorities were not required to follow the agreement, and because US law would always override it.

There was even a “national security exception” in the agreement, which made the mass surveillance of Facebook data possible:

Adherence to these Principles may be limited: (a) to the extent necessary to meet national security, public interest, or law enforcement requirements; (b) by statute, government regulation, or case law that create conflicting obligations or explicit authorizations, provided that, in exercising any such authorization, an organization can demonstrate that its non-compliance with the Principles is limited to the extent necessary to meet the overriding legitimate interests furthered by such authorization;

(EC: Commission Decision 2000/520 Annex I)

What now?

Although data transfers between the EU and the US will not stop immediately, DPAs now have the power to block them. IT companies will probably start applying for Binding Corporate Rules and using model contract clauses, but the weakness of the Safe Harbour agreement, the national security exception, is present in those instruments as well. If DPAs decide to crack down on IT companies, more and more data centres may have to be established on European soil. For the IT giants this will just be a huge headache, but for SMEs it might mean that EU customers are off limits unless the data is stored in Europe, a cost which smaller startups might not be able to cover.

It is unlikely, however, that things will go that far: data protection rules will probably not be enforced that strictly, and trade relations are at stake if the decision is read to the letter. The Safe Harbour agreement was always a political solution. The Commission knew that the US would never have information privacy laws adequate by European standards, so a self-regulatory initiative was concocted. Now a new agreement is needed, but it will be much harder to come up with one that is seen as legitimate in light of the NSA leaks. It will be interesting to see them try.

Data power conference (June, 22-23), part 1: Disconnect

I recently attended and presented at “Data Power”, what turned out to be an excellent conference organized by the University of Sheffield. The conference had called upon academics to submit papers approaching the question of big data from a societal (and critical) perspective. Even so, the papers were more often than not empirically grounded, and the presenters refrained from adopting the conspiratorial mindset that sometimes surfaces in discussions of big data.

Here are some of the key points that I picked up from attending the different panels:

Disconnect & Resignation / tradeoff fallacy

Stefan Larsson (Lund University) and Mark Andrejevic (Pomona College) both stressed that there is a disconnect between commercial claims that people happily trade their privacy for discounts and services and how people actually feel. In reality, people feel that they are “forced or bribed” to give up their data in order to access a service. Joseph Turow, Michael Hennessy and Nora Draper have recently published a survey on what they call the “tradeoff fallacy” which supports the disconnect and resignation hypothesis put forth by Larsson and Andrejevic.

Access rights are rarely respected

Clive Norris (University of Sheffield) and Xavier L’Hoiry (University of Leeds) investigated whether companies and public-sector bodies (data controllers) actually respect people’s right to access their own data under current data protection legislation. Turns out, they don’t:

• “20 % of data controllers cannot be identified before submitting an access request;
• 43 % of requests did not obtain access to personal data;
• 56 % of requests could not get adequate information regarding third party data sharing;
• 71 % of requests did not get adequate information regarding automated decision making processes.”

Instead, the controllers consulted applied what Norris & L’Hoiry call “discourses of denial”: questioning the rights themselves (we do not recognize them), falsely claiming that only law enforcement would have access to the data, or even suggesting that the researchers were insane to make such a request (why would you possibly want this information?). The most common response was, however, none at all. Deafening silence is an effective way to deal with unpopular requests.

Self-management of data is not a workable solution

Jonathan Obar (University of Ontario Institute of Technology & Michigan State University) argued that data privacy cannot feasibly be protected through individuals auditing how companies and officials use their personal data, calling this approach a “romantic fallacy”.

Even if data controllers respected the so-called ARCO rights (access to data, rectification of data, cancellation of data and objection to data processing), it is far too difficult and time-consuming for ordinary citizens to manage their own data. Instead, Obar suggests that either data protection authorities (DPAs) or private companies could oversee how our data is used, a form of representative data management. The problem with this solution is, of course, the significant resources it would require.

There is no such thing as informed consent in a big data environment

Mark Andrejevic emphasized that data protection regulation and big data practice are based on opposing principles: big data on data maximization and data protection on data minimization. The notion of relevance does not work as a limiting factor for collecting data, since the relevance of data is only determined afterwards, by aggregating data and looking for correlations. This makes informed consent increasingly difficult: what are we consenting to if we do not know the applications of the collection?

Perverse repercussions of the Charlie Hebdo attack

Many media outlets have seen the attack on Charlie Hebdo as a grave threat to the freedom of expression. While the attack itself may cause news outlets to self-censor out of fear, it is not very fruitful to evaluate the state of civil liberties in the wake of terrorist attacks. Perversely, rather than strengthening the fundamental principles on which the freedom of expression is based, terror attacks have been used to limit communication rights: the Patriot Act was enacted just weeks after 9/11, while the Data Retention Directive was a direct consequence of the London and Madrid attacks.

Usually these laws have been enacted in the countries that have suffered from the attacks, but now David Cameron has proposed that Britain’s intelligence agencies should be allowed to break into the encrypted communications of instant messaging apps such as iMessage:

“In extremis, it has been possible to read someone’s letter, to listen to someone’s call, to mobile communications … The question remains: are we going to allow a means of communications where it simply is not possible to do that? My answer to that question is: no, we must not. The first duty of any government is to keep our country and our people safe.”

The proposed measure is not only a textbook example of treating the symptoms rather than the disease, but essentially a threat to the very freedom which several political leaders swore to protect after the Charlie Hebdo attack. Freedom of expression is inherently connected to freedom from surveillance; censorship cannot exist without the surveillance of communications. A ban on encrypted communications would also greatly impede media outlets’ capacity to protect their sources, because, as Cory Doctorow points out, weakening the security of communications means that foreign spies, criminal hackers and other wrongdoers would be able to access British communications, not only MI5.

It’s like the NSA/GCHQ leaks never happened.

Not so neutral net neutrality?

Finnish MEP candidate Otso Kivekäs from the Greens recently compared the internet to road infrastructure. In his analogy, he said that removing net neutrality would be like allowing “the company that builds the highways to put different speed limits on different car brands: Audis could drive 100 [km/h] and all others 60 [km/h]. And everybody would pay the toll.”

If ISPs could discriminate as they wished and simply adjust the speed of different services, either according to the highest bidder or simply because of their own preferences (goodbye, Skype), this would surely be a great injustice to all internet users. A removal of net neutrality left completely unregulated is not ideal, to say the least. But let’s look at it from another perspective.

Wonder why Netflix, Google and Apple are such great supporters of this “consumer right”? Because, according to a study by Sandvine, 30 % of US internet traffic is used by Netflix, 15 % by YouTube and 2 % by iTunes. Still, these companies pay nothing for the infrastructure yet benefit from it immensely. It is not difficult to see why ISPs would seek compensation from the two companies that use nearly half of all bandwidth.
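The “half of all bandwidth” claim is easy to sanity-check against the quoted figures. A back-of-the-envelope sketch in Python, where the percentages are simply the Sandvine shares cited above:

```python
# Shares of US internet traffic (percent) as quoted in the text,
# taken from the Sandvine study mentioned above.
shares = {"Netflix": 30.0, "YouTube": 15.0, "iTunes": 2.0}

# The two dominant video services alone.
top_two = shares["Netflix"] + shares["YouTube"]
print(f"Netflix + YouTube: {top_two:.0f}% of traffic")

# Adding iTunes for good measure.
print(f"All three named services: {sum(shares.values()):.0f}%")
```

Strictly speaking, the two companies account for 45 % (47 % with iTunes included), so “nearly half” is the more accurate phrasing.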

Sandvine: internet traffic statistics

Netflix and YouTube obviously top the charts because they are immensely popular services and because video uses a lot of data, especially full HD.

Now, using the road analogy once more, this is essentially a case where two companies fill up all the roads with their trucks and continue to be treated like regular commuters on their way to work.

An article in the Financial Times also addressed this issue; according to its editors, “net neutrality no longer works.” The FT applauded the FCC’s decision, stating that “if customers are willing to pay more for a premium service, as they do with mobile phone contracts or business class travel, then they ought to have the right to do so.”

Business class travel is hardly a good analogy, as people already pay for faster internet subscriptions. Rather, one could take the road analogy one step further. There are a lot of rules which already dictate traffic in order to make it more effective: bike lanes, bus lanes, toll discounts for car pools, the obligation to give way to trams and so on.

What if internet traffic that serves the public interest could be given right of way? Setting aside the difficulty of defining the public interest for a moment, one could start with access to public documents and services. Services that eat up a disproportionate share of all bandwidth (read: Netflix and YouTube), on the other hand, could be “taxed” for their excessive use, with the fee earmarked for investment in new broadband infrastructure. This might not be what the FCC (or the FT) had in mind, but it is a conceivable alternative to net neutrality as we think of it today. Doesn’t positive net discrimination have a nice ring to it?

This would also prevent ISPs from using broadband infrastructure investment as their hobbyhorse excuse. Whenever proposals might make business more difficult (or less profitable) for ISPs, they state that new regulation will slow down investment in broadband infrastructure (see here, here and here). The argument is, of course, just corporate BS, since European broadband infrastructure has been highly dependent on tax money (roads, anyone?). By earmarking so-called “premium access fees” for broadband investment, the ICT giants could help maintain the infrastructure they make their billions from.


Fair use and the removal of data roaming charges

The Connected Continent Regulation was widely celebrated as a victory for net neutrality and for roaming Europeans. Although credited with removing data roaming charges across Europe, the actual proposal for a regulation did no such thing. For most people (and MEPs especially), the removal of roaming charges is a very welcome initiative, but if it sounds too good to be true, it probably is:

Article 6a
Abolition of retail roaming charges
With effect from 15 December 2015, roaming providers shall not levy any surcharge in comparison to the charges for mobile communications services at domestic level on roaming customers in any Member States for any regulated roaming call made or received, for any regulated roaming SMS/MMS message sent and for any regulated data roaming services used, nor any general charge to enable the terminal equipment or service to be used abroad.

Yes, roaming charges will be removed, but the concept of “fair usage” is added instead. Article 6b of the new proposal states the following:

1.  By way of derogation from article 6a, and to prevent anomalous or abusive usage of retail roaming services, roaming providers may apply a ‘fair use clause’ to the consumption of regulated retail roaming services provided at the applicable domestic price level, by reference to fair use criteria. 

What are these fair use criteria, then? No one knows. The Body of European Regulators for Electronic Communications (BEREC) will lay down “guiding principles” for how they should be applied by the end of this year, but this will probably not affect pricing significantly.

Why is removing data roaming charges, a development that would benefit consumers all over Europe, so hard?

The short version: because the telecommunications industry doesn’t like it.

The long version:

1. Mobility is uneven.

Certain telecom networks would be put under much more stress than others. France, for example, is one of the most visited countries in the world in terms of tourists, which means French telcos would face far more roaming traffic than telcos in less-visited countries. If no restrictions applied, people would probably use their phones abroad as much as they do at home.

2. Pricing is uneven.

In Finland, TeliaSonera offers 10 GB of data at 21 Mbps, with free roaming in the Nordic and Baltic countries, for 20 €/month. In Sweden, TeliaSonera offers 10 GB at full 4G speed for 599 SEK (about 70 €), but only inside Sweden’s borders. Same company, completely different price level. If the telecommunications market were truly single, telcos would have to compete on a European level, with the same prices everywhere. According to market logic, this would drive prices down.
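The gap is even starker per gigabyte. A quick sketch, using only the prices and quotas quoted above (the ~70 € figure is this article’s own SEK conversion):

```python
# Per-gigabyte price of the two TeliaSonera plans mentioned above.
plans = {
    "TeliaSonera Finland": {"price_eur": 20.0, "data_gb": 10},
    "TeliaSonera Sweden": {"price_eur": 70.0, "data_gb": 10},
}

for name, plan in plans.items():
    per_gb = plan["price_eur"] / plan["data_gb"]
    print(f"{name}: {per_gb:.2f} EUR/GB")
```

Same quota from the same company, yet Swedish subscribers pay three and a half times as much per gigabyte, and get no roaming for it.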

Critics of the proposal to remove all charges say that such a development would likely drive prices up nationally, especially in smaller countries, since the telcos would have to find new sources of income. But if the market were truly single, there would be no such thing as “national markets”, and a telco from France could offer its services to Swedish customers.

3. Roaming is profitable.

According to a study by Informa Telecoms & Media, European companies made $19.7 billion in roaming fees in 2013. Few telcos would like to see that cash cow go.