Securing workable, balanced and effective individual rights regarding personal data disseminated online is vital to the future of data protection and should be a significant focus of attention for the European Data Protection Board going forward.

Consequent to the Court of Justice’s C-131/12 Google Spain (2014) judgment, the right to delisting and related ex post action by search engines has assumed particular practical importance. The European Data Protection Board (EDPB)’s draft guidelines on this topic – which recently closed for consultation – are, therefore, very welcome.

Nevertheless, it is also vital that in due course the Board produce more comprehensive guidance. Indeed, under the General Data Protection Regulation (GDPR), it has a specific legal duty to “issue guidelines, recommendations, and best practices on procedures for erasing links, copies or replications of personal data from publicly available communication services” (GDPR, art. 70(1)(d)). This guidance will clearly need to encompass a much wider range of online actors than just search engines, including individual websites, social networking sites such as Facebook and other online platforms such as Twitter. Nevertheless, the current Guidelines will be an important starting point and so, based on my submission to the consultation, I set out below some thoughts on the detail of the draft and how it might be improved. This is divided into the three main topics addressed: (1) the scope of the guidance and of ex post rights vis-à-vis search engines, (2) the substantive grounds for exercising these ex post rights, and (3) the substantive exemptions from these ex post rights.

  1. The Scope of the Guidance and of Ex Post Data Protection Rights vis-à-vis Search Engines:

Once completed by the detailed substantive criteria which will be set out in a second part of the document (not yet published even in draft), these Guidelines seek to provide a general overview of the applicability of ex post rights vis-à-vis search engines as regards their indexing of personal data published online. This is put under the heading of the ‘right to be forgotten’ (RtbF), which is often used in public debate as a catch-all for rights to restrict the dissemination of personal data, especially online. Nevertheless, the Guidelines quite rightly also distinguish this use from the specific and new addition of a formal RtbF in Article 17 of the GDPR. Article 17 details the RtbF alongside the right to erasure. However, the addition it makes compared with the Data Protection Directive – which only used the language of the right to erasure – is to require those who have been subject to a bona fide erasure request and who have made the relevant personal data public to help the data subject limit its further spread by others (GDPR, art. 17(2)).

As the guidance points out, this specific provision has no applicability in relation to search engines’ indexing, as they are merely processing personal data which has already been made public by others. It is also important to note that the right to erasure (which is sometimes itself called the RtbF) is not the only ex post right which may be invoked against search engines. Indeed, the Google Spain judgment itself emphasised that the right to object (now found in GDPR, art. 21) is equally applicable. Also of potential relevance are the possibility of direct enforcement of the basic duties of data protection, especially as set out in chapter II of the GDPR (through, for example, the data subject lodging a complaint with a Data Protection Authority under GDPR, art. 77), invocation of the right to restrict processing (GDPR, art. 18) and/or invocation of the right to rectification of inaccurate or incomplete personal data (GDPR, art. 16). None of these provisions has been subject to detailed Court of Justice exploration and all must be interpreted in line with freedom of expression and, in particular, the permissible exemptions applicable to the special data category rules (GDPR, art. 9(2)(g)), the criminal-related data rules (GDPR, art. 10) and the more general restrictions permissible where necessary to protect the “rights and freedoms of others” (GDPR, art. 23(1)(i)). Nevertheless, none can be entirely ignored and so they should at least be mentioned in any final Guidelines. Indeed, the potential relevance of the right to rectification has been highlighted by C-136/17 GC et al. v CNIL, which held that, in cases where a delisting of data is not applicable, a search engine is

in any event required, at the latest on the occasion of the request for de-referencing, to adjust the list of results in such a way that the overall picture it gives the internet user reflects the [individual’s] current legal position, which means in particular that links to web pages containing information on that point must appear in first place on the list. (at [78])

Incidentally, the Court’s reference to “at the latest on the occasion of the request for de-referencing” suggests that in some circumstances a search engine may even have certain ex ante ‘duty of care’ responsibilities. For example, if a search engine’s attention is repeatedly drawn to its indexing of a website dedicated to the dissemination of manifestly illegal revenge pornography, does it acquire a ‘duty of care’ responsibility under the GDPR to deindex this illegal material even before specific contact by each and every affected individual? These sorts of questions are important to pose but, given the focus of the Guidelines on ex post duties, this issue will only be noted here.

Turning back to consider these ex post duties further, the Google Spain judgment indicated that search engines would only acquire direct duties under data protection where their activities are “liable to affect significantly and additionally” the fundamental rights to privacy and the protection of personal data, and would then only need to act within the framework of their “responsibilities, powers and capabilities” (at [38]). It is not entirely clear where the Court saw the grounding of these restrictions: they do not obviously follow the logic of the statutory European data protection scheme, and yet the need for a proportionate balance with freedom of expression was not explicitly argued for here either. Nevertheless, this language was repeated in the much more recent case of GC et al. v CNIL (at [37]) – a judgment which did give some emphasis to freedom of expression – and so (subject to the continuing and potentially broader role of indirect or secondary liability where publication at source is illegal) these restrictions should be taken to be good law.

Nevertheless, notwithstanding the approach adopted by large search engines to date, name-based searches are only a “particular” (GC et al. v CNIL at [46]) rather than the only example of processing which clearly satisfies the first threshold. Processing by reference to other widely used identifiers such as an image or a job position may also be significantly and additionally impactful on rights. Indeed, as regards the first example, the Article 29 Working Party suggested as far back as 2008 (p. 14) that image processing may be particularly intrusive and require specific attention. Meanwhile, the latter type of identifier has even been subject to recent concrete enforcement action by the Italian Data Protection Authority (DPA). Albeit only in certain restricted circumstances, it may even be possible for search engine activity to be significantly and additionally impactful where its processing is not by reference to a clear identifier at all. An example might be the enabling of a search which combined reference to a small hamlet and a highly stigmatic allegation – e.g. being a ‘convicted child sex offender’ – and which brought up totally false details alleging that an identified person within that neighbourhood was responsible for such offences. The ability of anyone interested in the hamlet to readily find and potentially believe this false allegation would clearly have the potential to be highly damaging to such an innocent individual, and additionally so compared with the initial publication. This example goes to show the importance of ensuring that individuals are able to make a case to a search engine that its processing is significantly and additionally impactful on their rights, and of ensuring that both the search engine and, on appeal, the DPA consider that case fairly according to the standard laid down in CJEU case law. Obviously, and as with the analysis of name-based searches, any such claim must be interpreted consistently with freedom of expression, including a search engine’s right to facilitate communication by original publishers and the right of internet users to receive information. This will be dealt with further below. Nevertheless, those parts of the Board’s draft Guidelines which seem to assume that only name-based searches are within scope (e.g. at p. 14) should be deleted, and the initial part of the Guidelines (p. 2), which does to some extent recognise the need for a broader analysis, should be made more complete and systematic.

It has been universally accepted that the second threshold – acting within the context of “responsibilities, powers and capabilities” – requires search engines to deindex specific URLs which are brought to their attention by data subjects and whose continuing processing is shown to violate core data protection standards. This is certainly an important minimum requirement. However, it has proved insufficient for individuals faced with regular and repeated uploads of the same problematic personal data, who are thereby continuously subject to impactful automatic indexing which is not in conformity with core data protection standards. The criteria set down in Google Spain suggest that, once put on notice that such impactful and problematic indexing is taking place, search engines should have a responsibility to bring their processing into compliance with data protection standards as far as their “powers” and “capabilities” allow. Notwithstanding the resistance of major search engines on this topic, this may well extend beyond the bare deindexing of specifically notified URLs. For example, where the listing of certain images (e.g. manifest revenge pornography) is ipso facto violative of data protection standards, the major search engines possess PhotoDNA technology enabling them to ensure that indexing of these images ceases. Following appropriate notice, this should therefore be deployed. In this regard, it should be noted that courts and regulators have recognised this as an issue to confront, but the relevant cases have been settled (see especially Hegglin v Google (2014) and Mosley v Google (2015)) and so the required obligations remain opaque. This makes it all the more important that clear understandings are laid down in these Guidelines.
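For readers unfamiliar with how such hash-based blocking operates in practice, the following minimal sketch illustrates the general mechanism. It is purely illustrative: PhotoDNA itself is a proprietary Microsoft technology whose interface is not public, so a simple exact-match hash stands in here for a true perceptual hash, and all function and variable names are hypothetical.

```python
# Illustrative sketch of hash-based blocking of notified images.
# NOTE: PhotoDNA is proprietary; this placeholder matches byte-identical
# copies only, whereas a real perceptual hash would also match resized or
# re-encoded versions of the same image. All names below are hypothetical.

import hashlib

# Hashes of images already notified as ipso facto unlawful
# (e.g. manifest revenge pornography).
NOTIFIED_HASHES: set[str] = set()


def perceptual_hash(image_bytes: bytes) -> str:
    """Stand-in for a robust image hash of the kind PhotoDNA produces."""
    return hashlib.sha256(image_bytes).hexdigest()


def register_notified_image(image_bytes: bytes) -> None:
    """Record a notified image so that future copies are caught automatically."""
    NOTIFIED_HASHES.add(perceptual_hash(image_bytes))


def should_index(image_bytes: bytes) -> bool:
    """Check a newly crawled image against the blocklist before indexing it."""
    return perceptual_hash(image_bytes) not in NOTIFIED_HASHES
```

The design point is simply that, once an image has been notified and registered, the blocking check can run automatically at crawl time, rather than requiring the data subject to notify the search engine of each new URL at which the image resurfaces.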

Finally, one responsibility which search engines clearly do have is to adopt all reasonable safeguards to ensure that their own processing of deindexing claims does not, either directly or indirectly, lead systematically to processing which violates the purpose compatibility and/or associated legality standards to which they are bound. In reality, at least Google’s practice of unsafeguarded disclosure of deindexing claims to original webmasters has led to data subject claims being made public, sometimes at great scale, with the link to the underlying data at issue also being made (or, in particular cases, even further combined with private data such as claims an individual has made to self-regulatory bodies). As I have argued at length in a recent working paper, the necessity and other data protection standards which bind Google and other search engines should require that the identifiable disclosure of deindexing claims to original websites take place only where this is at least reasonably necessary for the purpose of determining or checking the legal need for any deindexing. Even more importantly, any such disclosure must be subject to robust and effective safeguards to ensure that the disclosed data is not used for incompatible purposes, which would clearly include making private data public in identifiable form. The Board’s current draft Guidelines do mention this issue (at p. 5) but the treatment is all too brief and, unlike the Article 29 Working Party’s 2014 Guidelines, they fail to mention the requirement in any case to “take all necessary measures to properly safeguard the rights of the affected data subject” (p. 10). In contrast to this earlier document, the draft also fails to highlight that it would be incompatible with data protection for a search engine itself to arrange for deindexing claims to be made public in this way. Given the relationship between Google and the US-based Lumen database over recent years, the suggestion that Google might start disclosing deindexing data to Lumen, and Lumen’s usual practice of publishing ‘removal’ claims to the world at large, this is an unfortunate gap.

David Erdos, Faculty of Law and Trinity Hall, University of Cambridge

This article was originally published on Inforrm, and is reposted here with permission and thanks.