This guest post by Eerke Boiten, University of Kent, considers the implications of granting an individual the right to be de-listed from online search results: should new articles about de-listed content be removed too?
The UK’s data privacy watchdog has waded into the debate over the enforcement of the right to be forgotten in Europe.
The Information Commissioner’s Office issued a notice to Google to remove from its search results newspaper articles that discussed details from older articles that had themselves been subject to a successful right to be forgotten request.
The new reports included, wholly unnecessarily, the name of the person who had requested that Google remove reports of a ten-year-old shoplifting conviction from search results. Google agreed with this right to be forgotten request and de-linked the contemporary reports of the conviction, but then refused to do the same to new articles that carried the same details. Essentially, Google had granted the subject’s request for privacy, and then allowed it to be reversed via the back door.
The ICO’s action highlights the attitude of the press, which tries to draw as much attention to stories related to the right to be forgotten and their subjects as possible, generating new coverage that throws up details of the very events those making right to be forgotten requests are seeking to have buried.
There is no expectation of anonymity for people convicted of even minor crimes in the UK, something the press takes advantage of, as with the regional newspaper that tweeted a picture of the woman convicted of shoplifting a sex toy. However, once a criminal conviction is spent, the facts of the crime are deemed “irrelevant information” in the technical sense of the UK Data Protection Act.
The arrival of the right to be forgotten, or more accurately the right to have online search results de-linked, as made explicit by the EU Court of Justice in 2014, does not entail retroactive censorship of newspaper reports from the time of the original event. But the limited cases published by Google so far suggest that such requests have normally been granted, except where there was a strong public interest.
Stirring up a censorship storm
It’s clear Google does not like the right to be forgotten, and it has from early on sent notifications to publishers of de-listed links in the hope they will cry “censorship”. Certainly BBC journalist Robert Peston felt “cast into oblivion” because his blog no longer appeared in search results for one particular commenter’s name.
It’s not clear that such notifications are required at all: the European Court of Justice judgment didn’t call for them, and the publishers are neither subject (as they’re not the person involved) nor controller (Google in this case) of the de-listed link. Experts and even the ICO have hinted that Google’s efforts to publicise the very details it is supposed to be minimising might be viewed as a privacy breach or unfair processing with regard to those making right to be forgotten requests.
The Barry Gibb effect
De-listing notifications achieve something similar to the Streisand effect, where publicity around a request for privacy leads to exactly the opposite result. I’ve previously called the attempt to stir up publisher unrest the Barry Gibb effect, because it goes so well with Streisand. So well, maybe it oughta be illegal.
Some publishers are happy to dance to Google’s tune, accumulating and publishing these notifications in their own lists of de-listed links. Presumably this is intended to be seen as a bold move against censorship – the more accurate “List of things we once published that are now considered to contain irrelevant information about somebody” doesn’t sound as appealing.
In June 2015, even the BBC joined in, and comments still show that readers find salacious value in such a list.
Upholding the spirit and letter of the law
While some reporters laugh at the idea of deleting links to articles about links, this misses the point. The ICO has not previously challenged the reporting of stories relating to the right to be forgotten, or lists of delisted links – even when these appear to subvert the spirit of data protection. But by naming the individual involved in these new reports, the de-listed story is brought straight back to the top of search results for the person in question. This is a much more direct subversion of the spirit of the law.
Google refused the subject’s request that it de-list nine search results repeating the old story, name and all, claiming they were relevant to journalistic reporting of the right to be forgotten. The ICO judgment weighed the arguments carefully over ten pages before finding for the complainant in its resulting enforcement notice.
The ICO dealt with 120 such complaints in the past year, but this appears to be the only one where a Google refusal led to an enforcement notice.
The decision against Google is a significant step. However, its scope is narrow as it concerns stories that unwisely repeat personally identifying information, and again it only leads to de-listing results from searches of a particular name. It remains to be seen whether other more subtle forms of subversion aimed at the right to be forgotten will continue to be tolerated.
Eerke Boiten is Senior Lecturer, School of Computing and Director of Academic Centre of Excellence in Cyber Security Research at University of Kent.
This article was originally published on The Conversation. Read the original article.
Interesting post – thanks. I hope you don’t mind if I take issue with a couple of aspects of the argument. First, with the notion that there is precious little sensible rationale for Google and others notifying webmasters or the public when they de-list. There is. Whether this rationale is the reason Google seeks to notify is, of course, a distinct question. As is the question of whether notification is appropriate, given other tensions with (for example) privacy. As is the more general question of the relative importance of notification to the interests involved in individuals’ privacy. But notification to the audience of de-listing can be appropriate, as it puts the audience on notice that the material they are viewing has been filtered. Notification to the poster of the information can be defended on the grounds that they may have a speech interest which may not be properly considered by the decision to de-list.
My second point concerns the idea that the press ‘take advantage of’ the lack of an expectation of anonymity for convictions for minor offences. That’s a rather prejudicial way of expressing what could also be called the freedom of the press to report judicial proceedings.
Richard, I think you’ve identified two core tensions here, which haven’t been given enough consideration in public debates.
[1] Speech rights (beyond Google’s) – and how they should be represented in the formal de-listing process. Which originators of content (publishers, writers, commenters) should be involved, and how?
[2] The handling of judicial proceedings: Eerke’s comment on the regional paper reporting implies he doesn’t see the need/value of reporting an interesting detail (interesting to the public, though not necessarily in the public interest, and so on). And Richard points out that the press has the freedom to make such choices. One of the frustrating things about this RTBF debate is that there has been limited discussion about the content itself. We haven’t actually stepped back and had a proper debate about courts-generated information and how long it should be in the public domain – and how accessibly. Open justice and a right to report courts is entrenched in English common law and supported by ECHR Articles 6 and 10; clearly there’s a tension with privacy rights/rehabilitation efforts etc., but there hasn’t been an informed or thorough consideration of the processing of courts data, and how that should be managed in digital environments.
Thanks for your comments, good to have some discussion.
I think assuming speech rights of others besides Google in this scenario opens a very interesting can of worms. If the appearance of something you wrote elsewhere on Google search results is indeed a “speech right” issue, then Google’s role in ordering and presenting search results is itself a speech right issue of a much larger magnitude than “RtbF”. The challenge to Google’s power over this “right” is only just beginning, with the EU claims that Google advantages its own services over competitors’ in search results, and there is certainly no transparency or accountability in this area yet.
It does seem like a main area for “justifiable” RtbF requests is in the publicity around old crimes – perpetrators as well as victims and bystanders. [If only Google wasn’t so mean with info on past Rtbf claims.] Indeed the discussion on such info being in public domain, how, and for how long, would be meaningful at this point.
Much, I think, depends on the idea of misleading the audience. If those who read Google search results think that they are a dispassionate display of information evaluated according to a best estimate of relevance, then it is manipulative if the information has been weighted according to commercial advantage. So I agree that freedom of speech law ought to have things to say about Google search, just as it has about misleading advertising.
One way of resolving this is for Google (or whoever) to label clearly any information that it is not portraying in a neutral manner (or as neutral a manner as it normally uses). So it’s fine to label clearly material it gives preference to for commercial reasons. The audience should not be misled – their autonomy should not be manipulated.
But here’s an interesting consequence for the RtbF argument: if the audience should not be manipulated, and if the ranking of information – when not neutral – ought to be labelled as not neutral, then surely this is a reason to notify users that material has been removed for DPA purposes? Sauce for the goose, sauce for the gander. If you tinker with the ranking, you need to tell the audience you’ve tinkered with the ranking.
On the other point, I’m sceptical that the main focus of the courts will be on old criminal cases. Ashley Hurst has recently published an article explaining why, in his view, DP claims have become the weapon of choice for reputation management lawyers, seeking to protect their clients. As with most legal developments, changes in the law are a tool that can be used for both good and ill.
https://inforrm.wordpress.com/2015/05/14/data-privacy-and-intermediary-liability-striking-a-balance-between-privacy-reputation-innovation-and-freedom-of-expression-part-1-ashley-hurst/
Google admitting to their own explicit bias, if there is one, is just one thing. The bigger issue is behind words like “dispassionate” and “best estimate” – the most generous assumption we can currently make is that it’s an imperfect measurement of a subjective notion. Making search outcomes “free speech” means these need to be open enough to be challenged in court.
Transparency does mean having to say what makes for “relevance”, but that doesn’t automatically imply having to state which particular criteria impacted on any individual ranking choice. Put differently, I don’t think you can talk about “tinkering with the ranking”: the ranking is itself tinkering.
I wasn’t saying that the majority of RtbF claims would be around spent crime – rather that many of the most defensible accepted claims as reported in public have been in that area.
And thanks for the Ashley Hurst link – very useful and instructive material. I almost stopped reading when he cited the very weak Independent “ambulance chasing” story on RtbF; glad I didn’t.