Tag Archives: right to be forgotten

Communicating Responsibilities: The Spanish DPA targets Google’s Notification Practices when Delisting Personal Information

In this guest post, David Erdos, University Lecturer in Law and the Open Society, University of Cambridge, considers the 2016 Resolution made by the Spanish Data Protection Authority in relation to Google’s approach to de-listing personal information. 

The Court of Justice’s seminal decision in Google Spain (2014) represented the beginning rather than the endpoint of specifying the European data protection obligations of search engines when indexing material from the web and, as importantly, of ensuring adherence to those obligations.

In light of Google’s over 90% market share in search, this issue largely concerns Google (Bing and Yahoo come in a very distant second and third place).  To its credit, Google signalled an early willingness to comply with Google Spain.  At the same time, however, it construed the judgment narrowly.  Google argued that it only had to remove specified URL links following ex post demands from individual European citizens and/or residents (exercising the right to erasure (A. 12(b)) and/or objection (A. 14)), only as regards searches made under their name, only on European-badged search services (e.g. .uk, .es) and, even where the processing violated European data protection standards, not where the processing was judged to be in the ʻpublic interestʼ.

It also indicated that it would inform the Webmasters of the ʻoriginalʼ content when de-listing took place (although it signalled that it would stop short of its usual practice of providing a similar notification to individual users of its services, opting instead for a generic notice only).

In the subsequent two and a half years, Google’s approach has remained broadly stable (although from early 2015 it stopped notifying Webmasters when de-listing material from malicious porn sites (p. 29), and from early 2016 it has deployed (albeit imperfect) geolocation technology to block the return of de-listed results on any version of the Google search engine (e.g. .com) when accessed from the European country from which the demand was lodged).

Many (although not all) of these limitations are potentially suspect under European data protection law, and private litigants have already brought a number of challenges (some successful, some not).  No doubt partly reflecting their very limited resources, European Data Protection Authorities (DPAs) have adopted a selective approach, targeting only those issues which they see as the most critical.  Indeed, the Article 29 Working Party’s November 2014 Guidelines focussed principally on two concerns:

  • Firstly, that the geographical scope of de-listing was too narrow. To ensure “effective and complete protection” of individual data subjects, it was necessary that de-listing be “effective on all relevant domains, including .com”.
  • Secondly, that communication to third parties of data concerning de-listing identifiable to particular data subjects should be both very limited and subject to strong discipline. Routine communication “to original webmasters that results relating to their content had been delisted” was simply unlawful and, whilst in “particularly difficult cases” it might in principle be legitimate to contact such publishers prior to making a de-listing decision, even here search engines must then “take all necessary measures to properly safeguard the rights of the affected data subject”.

Since the release of the Guidelines, the French DPA has famously (or infamously, depending on your perspective!) adopted a strict interpretation of the first concern, requiring de-listing on a completely global scale and fining Google €100K for failing to do this.  This action has now been appealed before the French Conseil d’État and has attracted much attention, including from Google itself.  In contrast, the issue of third party communication has received much less publicity.

Nevertheless, in September 2016 the Spanish DPA issued a Resolution fining Google €150K for disclosing information identifiable to three data subjects to Webmasters, and ordering it to adopt measures to prevent such practices from recurring.  An internal administrative appeal lodged by Google against this has now been rejected and a challenge in court seems inevitable.  This piece explores the background to, nature of, and justification for this important regulatory development.

The Determinations Made in the Spanish Resolution

Apart from the fact that they had formally complained, there was nothing unusual in the three individual cases analysed in the Spanish Resolution.  Google had simply followed its usual practice of informing Webmasters that under data protection law specified URLs had been deindexed against a particular (albeit not directly specified) individual name.  Google sought to defend this practice on four separate grounds:

  • Firstly, it argued that the information provided to Webmasters did not constitute personal data at all. In contrast, the Spanish regulator argued that in those cases where the URL led to a webpage in which only one natural person was mentioned, directly identifiable data had been reported, whilst even in those cases where several people were mentioned the information was still indirectly identifiable, since a simple procedure (e.g. conducting a search on names linked to the webpage in question) would render the information fully identified.  (Google’s argument here in any case seemed to be in tension with its practice since September 2015 of inviting contacted Webmasters to notify Google of any reason why the de-listing decision should be reconsidered – this would only really make sense if the Webmaster could deduce what specific de-listing had in fact taken place).
  • Secondly, it argued that, since its de-listing form stated that it “may provide details to webmaster(s) of the URLs that have been removed from our search results”, any dissemination had taken place with the individual’s consent. Drawing especially on European data protection’s requirement that consent be “freely given” (A. 2 (h)), this too was roundly rejected.  In using the form to exercise their legal rights, individuals were simply made to accept as a fait accompli that such dissemination might take place.
  • Thirdly, it argued that dissemination was nevertheless a “compatible” (A. 6 (1) (b)) processing of the data given the initial purpose of its collection, finding a legal basis as “necessary” for the legitimate interests (A. 7 (f)) of Webmasters regarding this processing (e.g. to contact Google for a reconsideration). The Spanish DPA doubted that Webmasters could have any legitimate interest here since “search engines do not recognize a legal right of publishers to have their contents indexed and displayed, or displayed in a particular order”; the Court of Justice had referenced only the interests of the search engine itself and of Internet users who might receive the information as being engaged and, furthermore, had been explicit that de-listing rights applied irrespective of whether the information was erased at source or even if publication there remained lawful.  In any case, it emphasized that any such interest had (as article 7 (f) explicitly states) to be balanced against the rights and freedoms of data subjects, which the Court had emphasized must be “effective and complete” in this context.  In contrast, Google’s practice of essentially unsafeguarded disclosure of the data to Webmasters could result in the effective extinguishment of the data subject’s rights, since Webmasters had variously republished the deindexed page under another URL, published lists of all URLs deindexed, or even published a specific news story on the de-listing decision.
  • Fourthly, Google argued that its practice was an instantiation of the data subject’s right to obtain from a controller “notification to third parties to whom the data have been disclosed of any rectification, erasure or blocking” carried out in compliance with the right to erasure “unless this proves impossible or involves a disproportionate effort” (A. 12 (c)). The Spanish regulator pointed out that since the data in question had originally been received from, rather than disclosed to, Webmasters, this provision was not even materially engaged.  In any case, Google’s interpretation of it was in conflict with its purpose, which was to ensure the full effectiveness of the data subject’s right to erasure.

Having established an infringement of the law, the Spanish regulator had to consider whether to pursue this as an illegal communication of data (judged ʻvery seriousʼ under Spanish data law) or only as a breach of secrecy (judged merely ʻseriousʼ).  In the event, it plumped for the latter and issued a fine of €150K, in the mid-range of that set out for ʻseriousʼ infringements.  As previously noted, it also injuncted Google to adopt measures to prevent re-occurrence of these legal failings and required that these be communicated to the Spanish DPA.

Analysis

The Spanish DPA’s action tackles a systematic practice which has every potential to fundamentally undermine practical enjoyment of rights to de-listing, and is therefore at least as significant as the ongoing regulatory developments in France relating to the geographical scope of these rights.  The DPA was entirely right to find that personal data had been disseminated, that this had been done without consent, that the processing had nothing to do with the right (which, in any case, is not an obligation) of data subjects to have third parties notified in certain circumstances, and that this processing was “incompatible” with the initial purpose of data collection, which was to give effect to data subjects’ legal rights to de-listing.

It is true that the Resolution was too quick to dismiss the idea that original Webmasters do have “legitimate interests” in guarding against unfair de-listings of content.  Even in the absence of a de jure right to such listings, these interests are grounded in their fundamental right to “impart” information (and ideas), an aspect of freedom of expression (ECHR, art. 10; EU Charter, art. 11).   In principle, these rights and interests justify search engines making contact with original Webmasters, at least in particularly difficult de-listing cases, as the Working Party itself indicated.

However, even here dissemination must (as the Working Party also emphasized) properly safeguard the rights and interests of data subjects.  At the least this should mean that, prior to any dissemination, a search engine should conclude a binding and effectively policeable legal contract prohibiting Webmasters from disseminating the data in identifiable form.  (In the absence of this, those Webmasters outside European jurisdiction or engaged in special/journalistic expression cannot necessarily themselves be criticized for making use of the information received in other ways).

In stark contrast to this, Google currently engages in blanket and essentially unsafeguarded reporting to Webmasters, a practice which has resulted in a breakdown of effective protection for data subjects not just in Spain but also in other European jurisdictions such as the UK – see here and here.  Having been put on such clear notice by this Spanish action, it is to be hoped that Google will seriously modify its practices.  If not, then regulators would have every right to deal with this in the future as a (yet more serious) illegal and intentional communication of personal data.

Future Spanish Regulatory Vistas

The cases investigated by the Spanish DPA in this Resolution also involved the potential dissemination of data to the Lumen transparency database (formerly Chilling Effects), which is hosted in the United States, the potential for subsequent publication of identifiable data on its publicly accessible database, and even the potential for a specific notification to be provided to Google users conducting relevant name searches stating that “[i]n response to a legal requirement sent to Google, we have removed [X] result(s) from this page.  If you wish, you can get more information about this requirement on LumenDatabase.org.”

This particular investigation, however, failed to uncover enough information on these important matters.  Google was adamant that it had not yet begun providing information to Lumen in relation to data protection claims post-Google Spain, but stated that it was likely to do so in the future in some form.  Meanwhile, it indicated that the specific Lumen notifications found on name searches regarding two of the claimants concerned pre-Google Spain claims variously made under defamation, civil privacy law and data protection.  (Even putting the data protection claim to one side, such practices would still amount to a processing of personal data, and they highlight the often marginal and sometimes arbitrary distinctions between these closely related legal causes of action).

Given these complications, the Spanish regulator decided not to proceed directly on these matters but rather to open more wide-ranging investigatory proceedings concerning both Google’s practices in relation to disclosure to Lumen and the notification provided to search users.  Both sets of investigatory proceedings are ongoing.  Such continuing work highlights the vital need for active regulatory engagement to ensure that the individual rights of data subjects are effectively secured.  Only in this way will basic European data protection norms continue to ʻcatch upʼ not just with Google but with developments online generally.

David Erdos, University Lecturer in Law and the Open Society, Faculty of Law & WYNG Fellow in Law, Trinity Hall, University of Cambridge.

(I am grateful to Cristina Pauner Chulvi and Jef Ausloos for their thoughts on a draft of this piece.)

This post first appeared on the Inforrm blog. 

Your next social network could pay you for posting

In this guest post, Jelena Dzakula from the London School of Economics and Political Science considers what blockchain technology might mean for the future of social networking. 

You may well have found this article through Facebook. An algorithm programmed by one of the world’s biggest companies now partially controls what news reaches 1.8 billion people. And this algorithm has come under attack for censorship, political bias and for creating bubbles that prevent people from encountering ideas they don’t already agree with.

Now a new kind of social network is emerging that has no centralised control like Facebook does. It’s based on blockchain, the technology behind Bitcoin and other cryptocurrencies, and promises a more democratic and secure way to share content. But a closer look at how these networks operate suggests they could be far less empowering than they first appear.

Blockchain has received an enormous amount of hype thanks to its use in online-only cryptocurrencies. It is essentially a ledger or a database where information is stored in “blocks” that are linked historically to form a chain, saved on every computer that uses it. What is revolutionary about it is that this ledger is built using cryptography by a network of users rather than a central authority such as a bank or government.

Every computer in the network has access to all the blocks and the information they contain, making the blockchain system more transparent, accurate and also robust since it does not have a single point of failure. The absence of a central authority controlling blockchain means it can be used to create more democratic organisations owned and controlled by their users. Very importantly, it also enables the use of smart contracts for payments. These are codes that automatically implement and execute the terms of a legal contract.
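To make the hash-linking idea concrete, here is a minimal sketch in Python. It is an illustration only – the function names and block fields are invented for this example, and real networks such as Bitcoin add consensus rules, proof-of-work and much more. Each block records the hash of its predecessor, so tampering with any historical block breaks every later link in the chain.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    # Hash a canonical JSON encoding of the block (sort_keys makes it deterministic).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, previous_hash: str) -> dict:
    # A block bundles some content with a cryptographic link to the prior block.
    return {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}

# Build a tiny chain: each new block embeds the hash of the block before it.
chain = [make_block("genesis", previous_hash="0" * 64)]
chain.append(make_block("Alice posts an article", hash_block(chain[-1])))
chain.append(make_block("Bob comments on it", hash_block(chain[-1])))

def chain_is_valid(chain: list) -> bool:
    # Recompute every link; editing an earlier block invalidates all later ones.
    return all(chain[i]["previous_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))   # True
chain[1]["data"] = "tampered"  # rewrite history...
print(chain_is_valid(chain))   # False – the tampering is immediately detectable
```

Because every participant can run this kind of verification over their own copy of the ledger, no central authority is needed to vouch for its integrity – which is the property the social networks discussed in this post hope to build on.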

Industry and governments are developing other uses for blockchain aside from digital currencies, from streamlining back office functions to managing health data. One of the most recent ideas is to use blockchain to create alternative social networks that avoid many of the problems the likes of Facebook are sometimes criticised for, such as censorship, privacy violations, manipulation of what content users see, and exploitation of those users.


The Bubble Reputation: Protecting, Inflating, Deflating and Preserving It

Venue:  Institute of Advanced Legal Studies
Charles Clore House
17 Russell Square
London, WC1B 5DR
6pm – 8pm, 8 March 2017

Booking: This event is free but advance registration is required using the IALS Events Calendar.

Speaker: James Michael, Senior Associate Research Fellow, IALS; Chair, IALS Information Law and Policy Centre

The Bubble Reputation: Protecting, Inflating, Deflating and Preserving It (or a Right to be Known, Unknown and Remembered?)

Does, or should, everyone have a right to a reputation, and if so, should that be the reputation that is desired, deserved, or created? If there is a right to a reputation, should it be malleable to the point of infinity, to be extended, amended, or deleted? And is a posthumous reputation the property of the dead, the next of kin, or a larger community? Cases and statutes from various jurisdictions give varying answers, sometimes reflecting national and regional cultural and historical differences, but the contrasts may point the way for international standards.

“Right to be forgotten” requires anonymisation of online newspaper archive

In this post, Hugh Tomlinson QC discusses the implications of a ruling in the Belgian justice system for the application of the “right to be forgotten” for news organisations. Tomlinson is a member of Matrix Chambers and an editor of the Inforrm blog. The post was first published on the Inforrm blog and is cross-posted here with permission. 

In the case of Olivier G v Le Soir (29 April 2016, n° C.15.0052.F [pdf]) the Belgian Court of Cassation decided that, as the result of the “right to be forgotten”, a newspaper had been properly ordered to anonymise the online version of a 1994 article concerning a fatal road traffic accident.

The applicant had been convicted of a drink driving offence as a result of the accident but his conviction was spent and the continued online publication of his name was a violation of his Article 8 rights which outweighed the Article 10 rights of the newspaper and the public.


Eerke Boiten: Privacy watchdog takes first step against those undermining right to be forgotten

This guest post by Eerke Boiten, University of Kent, considers the implications of granting an individual the right to be de-listed from online search results: should new articles about de-listed content be removed too? 

The UK’s data privacy watchdog has waded into the debate over the enforcement of the right to be forgotten in Europe.

The Information Commissioner’s Office issued a notice to Google to remove from its search results newspaper articles that discussed details from older articles that had themselves been subject to a successful right to be forgotten request.

The new reports included, wholly unnecessarily, the name of the person who had requested that Google remove reports of a ten-year-old shoplifting conviction from search results. Google agreed with this right to be forgotten request and de-linked the contemporary reports of the conviction, but then refused to do the same to new articles that carried the same details. Essentially, Google had granted the subject’s request for privacy, and then allowed it to be reversed via the back door.

The ICO’s action highlights the attitude of the press, which tries to draw as much attention to stories related to the right to be forgotten and their subjects as possible, generating new coverage that throws up details of the very events those making right to be forgotten requests are seeking to have buried.

There is no expectation of anonymity for people convicted of even minor crimes in the UK, and the press takes advantage of this – as when a regional newspaper tweeted a picture of the woman convicted of shoplifting a sex toy. However, after a criminal conviction is spent, the facts of the crime are deemed “irrelevant information” in the technical sense of the UK Data Protection Act.

The arrival of the right to be forgotten, or more accurately the right to have online search results de-linked, as made explicit by the EU Court of Justice in 2014, does not entail retroactive censorship of newspaper reports from the time of the original event. But the limited cases published by Google so far suggest that such requests have normally been granted, except where there was a strong public interest.

Stirring up a censorship storm

It’s clear Google does not like the right to be forgotten, and it has from early on sent notifications to publishers of de-listed links in the hope they will cry “censorship”. Certainly BBC journalist Robert Peston felt “cast into oblivion” because his blog no longer appeared in search results for one particular commenter’s name.

It’s not clear that such notifications are required at all: the European Court of Justice judgment didn’t call for them, and the publishers are neither subject (as they’re not the person involved) nor controller (Google in this case) of the de-listed link. Experts and even the ICO have hinted that Google’s efforts to publicise the very details it is supposed to be minimising might be viewed as a privacy breach or unfair processing with regard to those making right to be forgotten requests.

The Barry Gibb effect

De-listing notifications achieve something similar to the Streisand effect, where publicity around a request for privacy leads to exactly the opposite result. I’ve previously called the attempt to stir up publisher unrest the Barry Gibb effect, because it goes so well with Streisand. So well, maybe it oughta be illegal.

[Embedded video: https://www.youtube.com/watch?v=nVyeNZCENZA]

Some publishers are happy to dance to Google’s tune, accumulating and publishing these notifications in their own lists of de-listed links. Presumably this is intended to be seen as a bold move against censorship – the more accurate “List of things we once published that are now considered to contain irrelevant information about somebody” doesn’t sound as appealing.

In June 2015, even the BBC joined in, and comments still show that readers find salacious value in such a list.

Upholding the spirit and letter of the law

While some reporters laugh at the idea of deleting links to articles about links, this misses the point. The ICO has not previously challenged the reporting of stories relating to the right to be forgotten, or lists of delisted links – even when these appear to subvert the spirit of data protection. But by naming the individual involved in these new reports, the de-listed story is brought straight back to the top of search results for the person in question. This is a much more direct subversion of the spirit of the law.

Google refused the subject’s request that it de-list nine search results repeating the old story, name and all, claiming they were relevant to journalistic reporting of the right to be forgotten. The ICO judgment weighed the arguments carefully over ten pages before finding for the complainant in its resulting enforcement notice.

The ICO dealt with 120 such complaints in the past year, but this appears to be the only one where a Google refusal led to an enforcement notice.

The decision against Google is a significant step. However, its scope is narrow as it concerns stories that unwisely repeat personally identifying information, and again it only leads to de-listing results from searches of a particular name. It remains to be seen whether other more subtle forms of subversion aimed at the right to be forgotten will continue to be tolerated.

Eerke Boiten is Senior Lecturer in the School of Computing and Director of the Academic Centre of Excellence in Cyber Security Research at the University of Kent.

This article was originally published on The Conversation. Read the original article.


Open Letter to Google From 80 Internet Scholars: Release RTBF Compliance Data

I am among the signatories of a letter from 80 academics to Google, asking for more data and transparency on ‘right to be forgotten’ or de-listing decisions and policy, following the ECJ’s judgment in Google Spain v AEPD and Mario Costeja González in May last year. Importantly, this letter unites scholars with a range of views about the merits of the ruling: some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts. But we all believe that implementation of the ruling should be much more transparent. The letter was published in full on the Guardian site and reported (with a response from Google) here. Professor Ellen Goodman has published it on Medium here. Hats off to Julia Powles, University of Cambridge, Faculty of Law (@juliapowles) and Ellen P. Goodman, Rutgers University School of Law (@ellgood) for pulling it together in time for the anniversary of the decision’s publication. More academic commentary can be found here.

The letter in full

What We Seek

Aggregate data about how Google is responding to the >250,000 requests to delist links thought to contravene data protection from name search results. We should know if the anecdotal evidence of Google’s process is representative: What sort of information typically gets delisted (e.g., personal health) and what sort typically does not (e.g., about a public figure), in what proportions and in what countries?

Why It’s Important

Google and other search engines have been enlisted to make decisions about the proper balance between personal privacy and access to information. The vast majority of these decisions face no public scrutiny, though they shape public discourse. What’s more, the values at work in this process will/should inform information policy around the world. A fact-free debate about the RTBF is in no one’s interest.

Why Google

Google is not the only search engine, but no other private entity or Data Protection Authority has processed anywhere near the same number of requests (most have dealt with several hundred at most). Google has by far the best data on the kinds of requests being made, the most developed guidelines for handling them, and the most say in balancing informational privacy with access in search. We address this letter to Google, but the request goes out to all search engines subject to the ruling.


One year ago, the European Court of Justice, in Google Spain v AEPD and Mario Costeja González, determined that Google and other search engines must respond to users’ requests under EU data protection law concerning search results on queries of their names. This has become known as the Right to Be Forgotten (RTBF) ruling. The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the RTBF in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

Google reports that it has received over 250,000 individual requests concerning one million URLs in the past year. It also reports that it has delisted from name search results just over 40% of the URLs that it has reviewed. In various venues, Google has shared some 40 examples of delisting requests granted and denied (including 22 examples on its website), and it has revealed the top sources of material requested to be delisted (amounting to less than 8% of total candidate URLs). Most of the examples surfaced more than six months ago, with minimal transparency since then. While Google’s decisions will seem reasonable enough to most, in the absence of real information about how representative these are, the arguments about the validity and application of the RTBF are impossible to evaluate with rigour.

Beyond anecdote, we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

The RTBF ruling addresses the delisting of links to personal information that is “inaccurate, inadequate, irrelevant, or excessive for the purposes of data processing,” and which holds no public interest. Both opponents and supporters of the RTBF are concerned about overreach. Because there is no formal involvement of original sources or public representatives in the decision-making process, there can be only incidental challenges to information that is delisted, and few safeguards for the public interest in information access. Data protection authorities seem content to rely on search engines’ application of the ruling’s balancing test, citing low appeal rates as evidence that the balance is being appropriately struck. Of course, this statistic reveals no such thing. So the sides do battle in a data vacuum, with little understanding of the facts — facts that could assist in developing reasonable solutions.

Peter Fleischer, Google Global Privacy Counsel, reportedly told the 5th European Data Protection Days on May 4 that, “Over time, we are building a rich program of jurisprudence on the [RTBF] decision.” (Bhatti, Bloomberg, May 6). It is a jurisprudence built in the dark. For example, Mr. Fleischer is quoted as saying that the RTBF is “about true and legal content online, not defamation.” This is an interpretation of the scope and meaning of the ruling that deserves much greater elaboration, substantiation, and discussion.

We are not the only ones who want more transparency. Google’s own Advisory Council on the RTBF in February 2015 recommended more transparency, as did the Article 29 Working Party in November 2014. Both recommended that data controllers should be as transparent as possible by providing anonymised and aggregated statistics as well as the process and criteria used in delisting decisions. The benefits of such transparency extend to those who request that links be delisted, those who might make such requests, those who produce content that is or might be delisted, and the wider public who might or do access such material. Beyond this, transparency eases the burden on search engines by helping to shape implementation guidelines and revealing aspects of the governing legal framework that require clarification.

Naturally, there is some tension between transparency and the very privacy protection that the RTBF is meant to advance. The revelations that Google has made so far show that there is a way to steer clear of disclosure dangers. Indeed, the aggregate information that we seek threatens privacy far less than the scrubbed anecdotes that Google has already released, or the notifications that it is giving to webmasters registered with Google webmaster tools. The requested data is divorced from individual circumstances and requests. Here is what we think, at a minimum, should be disclosed:

  1. Categories of RTBF requests/requesters that are excluded or presumptively excluded (e.g., alleged defamation, public figures) and how those categories are defined and assessed.
  2. Categories of RTBF requests/requesters that are accepted or presumptively accepted (e.g., health information, address or telephone number, intimate information, information older than a certain time) and how those categories are defined and assessed.
  3. Proportion of requests and successful delistings (in each case by % of requests and URLs) that concern categories including (taken from Google anecdotes): (a) victims of crime or tragedy; (b) health information; (c) address or telephone number; (d) intimate information or photos; (e) people incidentally mentioned in a news story; (f) information about subjects who are minors; (g) accusations for which the claimant was subsequently exonerated, acquitted, or not charged; and (h) political opinions no longer held.
  4. Breakdown of overall requests (by % of requests and URLs, each according to nation of origin) according to the WP29 Guidelines categories. To the extent that Google uses different categories, such as past crimes or sex life, a breakdown by those categories. Where requests fall into multiple categories, that complexity too can be reflected in the data.
  5. Reasons for denial of delisting (by % of requests and URLs, each according to nation of origin). Where a decision rests on multiple grounds, that complexity too can be reflected in the data.
  6. Reasons for grant of delisting (by % of requests and URLs, each according to nation of origin). As above, multi-factored decisions can be reflected in the data.
  7. Categories of public figures denied delisting (e.g., public official, entertainer), including whether a Wikipedia presence is being used as a general proxy for status as a public figure.
  8. Source (e.g., professional media, social media, official public records) of material for delisted URLs by % and nation of origin (with top 5–10 sources of URLs in each category).
  9. Proportion of overall requests and successful delistings (each by % of requests and URLs, and with respect to both, according to nation of origin) concerning information first made available by the requestor (and, if so, (a) whether the information was posted directly by the requestor or by a third party, and (b) whether it is still within the requestor’s control, such as on his/her own Facebook page).
  10. Proportion of requests (by % of requests and URLs) where the information is targeted to the requester’s own geographic location (e.g., a Spanish newspaper reporting on a Spanish person about a Spanish auction).
  11. Proportion of searches for delisted pages that actually involve the requester’s name (perhaps in the form of % of delisted URLs that garnered certain threshold percentages of traffic from name searches).
  12. Proportion of delistings (by % of requests and URLs, each according to nation of origin) for which the original publisher or the relevant data protection authority participated in the decision.
  13. Specification of (a) types of webmasters that are not notified by default (e.g., malicious porn sites); (b) proportion of delistings (by % of requests and URLs) where the webmaster additionally removes information or applies robots.txt at source; and (c) proportion of delistings (by % of requests and URLs) where the webmaster lodges an objection.

As of now, only about 1% of requesters denied delisting are appealing those decisions to national Data Protection Authorities. Webmasters are notified in more than a quarter of delisting cases (Bloomberg, May 6). They can appeal the decision to Google, and there is evidence that Google may revise its decision. In the remainder of cases, the entire process is silent and opaque, with very little public process or understanding of delisting.

The ruling effectively enlisted Google into partnership with European states in striking a balance between individual privacy and public discourse interests. The public deserves to know how the governing jurisprudence is developing. We hope that Google, and all search engines subject to the ruling, will open up.

Jef Ausloos
Researcher
KU Leuven, ICRI/CIR — iMinds

Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA School of Law

Eduardo Bertoni
Global Clinical Professor. New York University School of Law
Director of the Center for Studies on Freedom of Expression and Access to Information -CELE-
Palermo University School of Law

Reuben Binns
Researcher
University of Southampton

Michael D. Birnhack
Professor of Law
Tel-Aviv University, Faculty of Law

Eerke Boiten
Director of Cyber Security Centre
University of Kent

Oren Bracha
Howrey LLP and Arnold, White & Durkee Centennial Professor
University of Texas School of Law

George Brock
Professor of Journalism
City University London

Sally Broughton Micova
LSE Fellow & Acting Director, LSE Media Policy Project
London School of Economics and Political Science

Ian Brown
Professor of Information Security and Privacy
University of Oxford, Oxford Internet Institute

Robin Callender Smith
Professorial Fellow in Media Law, Centre for Commercial Law Studies
Queen Mary University of London

Caroline Calomme
MJur candidate
University of Oxford

Ignacio Cofone
Researcher
Erasmus University Rotterdam

Julie E. Cohen
Mark Claster Mamolen Professor of Law & Technology
Georgetown Law

Ray Corrigan
Senior Lecturer in Maths, Computing and Technology
Open University

Jon Crowcroft
Marconi Professor of Communications Systems
University of Cambridge, Computer Laboratory

Angela Daly
Postdoctoral Research Fellow, Swinburne University of Technology
Research Associate, Tilburg University — TILT

Richard Danbury
Postdoctoral Research Fellow
University of Cambridge, Faculty of Law

Leonhard Dobusch
Assistant Professor on Organization Theory
Freie Universitaet Berlin

Lilian Edwards
Professor of Internet Law
University of Strathclyde

Niva Elkin-Koren
Professor of Law
University of Haifa

David Erdos
University Lecturer in Law and the Open Society
University of Cambridge, Faculty of Law

Gordon Fletcher
Senior Lecturer in Information Systems
University of Salford

Michelle Frasher
Non-resident Visiting Scholar, Fulbright-Schuman Scholar
University of Illinois, European Union Center

Brett M. Frischmann
Professor of Law
Benjamin N. Cardozo School of Law

Martha Garcia-Murillo
Professor of Information Studies
Syracuse University

David Glance
Director, UWA Centre for Software Practice
University of Western Australia

Ellen P. Goodman
Professor of Law
Rutgers University

Andres Guadamuz
Senior Lecturer in IP Law
University of Sussex

Edina Harbinja
Law Lecturer
University of Hertfordshire

Woodrow Hartzog
Associate Professor, Samford University, Cumberland School of Law
Affiliate Scholar, Stanford Law School, Center for Internet & Society

Andrew Hoskins
Professor
University of Glasgow

Martin Husovec
Legal Advisor, European Information Society Institute
Affiliate Scholar, Stanford Law School, Center for Internet & Society

Agnieszka Janczuk-Gorywoda
Assistant Professor
Tilburg University — TILEC

Lorena Jaume-Palasí
PhD candidate and Lecturer
Ludwig Maximilians University

Bert-Jaap Koops
Professor of Regulation and Technology
Tilburg University — TILT

Paulan Korenhof
Researcher
Tilburg University — TILT

Aleksandra Kuczerawy
Researcher
KU Leuven, ICRI/CIR — iMinds

Stefan Kulk
Researcher
Utrecht University

Rebekah Larsen
MPhil candidate
University of Cambridge, Judge Business School

David S. Levine
Associate Professor, Elon University School of Law
Visiting Research Collaborator, Princeton Center for Information Technology Policy
Affiliate Scholar, Stanford Law School, Center for Internet & Society

Michael P. Lynch
Professor of Philosophy and Director, Humanities Institute
University of Connecticut

Orla Lynskey
Assistant Professor of Law and Warden, Sidney Webb House
London School of Economics and Political Science

Daniel Lyons
Associate Professor of Law
Boston College Law School

Ian MacInnes
Associate Professor, School of Information Studies
Syracuse University

Robin Mansell
Professor, Department of Media and Communications
London School of Economics and Political Science

Alan McKenna
Lecturer
University of Kent Law School

Shane McNamee
Research Assistant, Research Centre for Consumer Law
University of Bayreuth

Maura Migliore
LL.M. candidate, Centre for Commercial Law Studies
Queen Mary University of London

Christian Moeller
Internet Policy Observatory, Center for Global Communication Studies, Annenberg School for Communication, University of Pennsylvania
University of Applied Sciences Kiel

Maria Helen Murphy
Lecturer in Law
Maynooth University

Andrew Murray
Professor of Law
London School of Economics and Political Science

John Naughton
Professor, Wolfson College
University of Cambridge

Abraham Newman
Associate Professor, School of Foreign Service
Georgetown University

Kieron O’Hara
Senior Research Fellow, Electronics and Computer Science
University of Southampton

Marion Oswald
Senior Fellow, Head of the Centre for Information Rights
University of Winchester

Pablo A. Palazzi
Professor of Law
San Andres University

Frank Pasquale
Professor of Law
University of Maryland Carey School of Law

Richard J. Peltz-Steele
Professor
University of Massachusetts Law School

Julia Powles
Researcher
University of Cambridge — Faculty of Law

Artemi Rallo
Constitutional Law Professor and Former Director, Spanish Data Protection Agency
Jaume I University

Giovanni Sartor
Professor of Legal Informatics and Legal Theory
European University Institute

Evan Selinger
Associate Professor of Philosophy
Rochester Institute of Technology

Sophie Stalla-Bourdillon
Associate Professor in IT law
University of Southampton

Konstantinos Stylianou
Fellow, Centre for Technology and Society
FGV Direito Rio

Dan Jerker B. Svantesson
Professor
Bond University Faculty of Law

Damian Tambini
Research Director and Director of the Media Policy Project
London School of Economics and Political Science

Judith Townend
Director, Centre for Law and Information Policy
Institute of Advanced Legal Studies

Alexander Tsesis
Professor of Law
Loyola University School of Law

Siva Vaidhyanathan
Robertson Professor, Department of Media Studies
University of Virginia

Peggy Valcke
Professor of Law, Head of Research
KU Leuven — iMinds

Alfonso Valero
Principal Lecturer, College of Business Law & Social Sciences
Nottingham Law School

Brendan Van Alsenoy
Researcher
KU Leuven, ICRI/CIR — iMinds

Joris van Hoboken
Research Fellow
New York University School of Law

Asma Vranaki
Postdoctoral Researcher, Centre for Commercial Law Studies
Queen Mary University of London

Kevin Werbach
Associate Professor of Legal Studies & Business Ethics
University of Pennsylvania, The Wharton School

Abby Whitmarsh
Web Science Researcher
University of Southampton

Tijmen Wisman
PhD candidate and Lecturer
VU University Amsterdam

Lorna Woods
Professor of Internet Law
University of Essex

Nicolo Zingales
Assistant Professor
Tilburg University — TILEC