Tag Archives: data protection

Communicating Responsibilities: The Spanish DPA targets Google’s Notification Practices when Delisting Personal Information

In this guest post, David Erdos, University Lecturer in Law and the Open Society, University of Cambridge, considers the 2016 Resolution made by the Spanish Data Protection Authority in relation to Google’s approach to de-listing personal information. 

The Court of Justice’s seminal decision in Google Spain (2014) represented the beginning rather than the endpoint of specifying the European data protection obligations of search engines when indexing material from the web and, as importantly, of ensuring adherence to them.

In light of its over 90% market share of search, this issue largely concerns Google (Bing and Yahoo come in a very distant second and third place).  To its credit, Google signalled an early willingness to comply with Google Spain.  At the same time, however, it construed this narrowly.  Google argued that it only had to remove specified URL links following ex post demands from individual European citizens and/or residents (exercising the right to erasure (A. 12 (b)) and/or objection (A. 14)), only as regards searches made under their name, only on European-badged search domains (e.g. .uk, .es) and only where the processing violated European data protection standards and was not judged to be in the ʻpublic interestʼ.

It also indicated that it would inform the Webmasters of the ʻoriginalʼ content when de-listing took place (although it signalled that it would stop short of its usual practice of providing a similar notification to individual users of its services, opting instead for a generic notice only).

In the subsequent two and a half years, Google’s approach has remained in broad terms relatively stable (although from early 2015 it did stop notifying Webmasters when de-listing material from malicious porn sites (p. 29) and from early 2016 it has deployed (albeit imperfect) geolocation technology to block the return of de-listed results when any version of the Google search engine (e.g. .com) is used from the European country from which the demand was lodged).
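
By way of illustration only, the toy sketch below models the kind of geolocation-based filtering just described: de-listed URLs are suppressed for name searches made from the country where the demand was lodged, whichever Google domain is used. Google’s actual implementation is not public, so every name, data structure and rule here is an assumption.

```python
# Purely illustrative sketch of geolocation-based filtering of de-listed
# search results. All names, fields and rules are assumptions; the real
# system is not public.

# Hypothetical register of accepted de-listing demands:
# normalised name -> de-listed URLs and the country where the demand was lodged.
DELISTINGS = {
    "jane doe": {
        "urls": {"http://example.org/old-story"},
        "origin_country": "ES",
    },
}


def filter_results(query_name, results, searcher_country):
    """Suppress de-listed URLs for a name search made from the country in
    which the de-listing demand was lodged, regardless of which domain
    (.com, .es, .uk, ...) the searcher uses."""
    entry = DELISTINGS.get(query_name.lower())
    if entry is None or searcher_country != entry["origin_country"]:
        return results
    return [url for url in results if url not in entry["urls"]]


results = ["http://example.org/old-story", "http://example.org/recent"]
print(filter_results("Jane Doe", results, "ES"))  # de-listed URL suppressed
print(filter_results("Jane Doe", results, "US"))  # unchanged outside the origin country
```

Even in this toy form the limitation is visible: the filter depends entirely on the accuracy of IP geolocation, which can be evaded – hence the ʻalbeit imperfectʼ caveat above.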

Many (although not all) of these limitations are potentially suspect under European data protection law, and indeed private litigants have already brought a number of challenges (some successful, some not).  No doubt partly reflecting their very limited resources, European Data Protection Authorities (DPAs) have adopted a selective approach, targeting only those issues which they see as the most critical.  Indeed, the Article 29 Working Party’s November 2014 Guidelines focussed principally on two concerns:

  • Firstly, that the geographical scope of de-listing was too narrow. To ensure “effective and complete protection” of individual data subjects, it was necessary that de-listing be “effective on all relevant domains, including .com”.
  • Secondly, that communication to third parties of data concerning de-listing identifiable to particular data subjects should be both very limited and subject to strong discipline. Routine communication “to original webmasters that results relating to their content had been delisted” was simply unlawful and, whilst in “particularly difficult cases” it might in principle be legitimate to contact such publishers prior to making a de-listing decision, even here search engines must then “take all necessary measures to properly safeguard the rights of the affected data subject”.

Since the release of the Guidelines, the French DPA has famously (or infamously depending on your perspective!) adopted a strict interpretation of the first concern requiring de-listing on a completely global scale and fining Google €100K for failing to do this.  This action has now been appealed before the French Conseil d’État and much attention has been given to this including by Google itself.  In contrast, much less publicity has been given to the issue of third party communication.

Nevertheless, in September 2016 the Spanish DPA issued a Resolution fining Google €150K for disclosing information identifiable to three data subjects to Webmasters and ordering it to adopt measures to prevent such practices from recurring.  An internal administrative appeal lodged by Google against this has been rejected and a challenge in court now seems inevitable.  This piece explores the background to, nature of and justification for this important regulatory development.

The Determinations Made in the Spanish Resolution

Apart from the fact that they had formally complained, there was nothing unusual in the three individual cases analysed in the Spanish Resolution.  Google had simply followed its usual practice of informing Webmasters that under data protection law specified URLs had been deindexed against a particular (albeit not directly specified) individual name.  Google sought to defend this practice on four separate grounds:

  • Firstly, it argued that the information provided to Webmasters did not constitute personal data at all. In contrast, the Spanish regulator argued that in those cases where the URL led to a webpage in which only one natural person was mentioned then directly identifiable data had been reported, whilst even in those cases where several people were mentioned the information was still indirectly identifiable since a simple procedure (e.g. conducting a search on names linked to the webpage in question) would render the information fully identified.  (Google’s argument here in any case seemed to be in tension with its practice, since September 2015, of inviting contacted Webmasters to notify Google of any reason why the de-listing decision should be reconsidered – this would only really make sense if the Webmaster could in fact deduce what specific de-listing had taken place).
  • Second, it argued that, since its de-listing form stated that it “may provide details to webmaster(s) of the URLs that have been removed from our search results”, any dissemination had taken place with the individual’s consent. Drawing especially on European data protection’s requirement that consent be “freely given” (A. 2 (h)), this too was roundly rejected.  In using the form to exercise their legal rights, individuals were simply made to accept as a fait accompli that such dissemination might take place.
  • Third, it argued that dissemination was nevertheless a “compatible” (A. 6 (1) (b)) processing of the data given the initial purpose of its collection, finding a legal basis as “necessary” for the legitimate interests (A. 7 (f)) of Webmasters regarding this processing (e.g. to contact Google for a reconsideration). The Spanish DPA doubted that Webmasters could have any legitimate interest here since “search engines do not recognize a legal right of publishers to have their contents indexed and displayed, or displayed in a particular order”, the Court of Justice had only referenced that the interests of the search engine itself and of Internet users who might receive the information were engaged and, furthermore, had been explicit that de-listing rights applied irrespective of whether the information was erased at source or even if publication there remained lawful.  In any case, it emphasized that any such interest had (as article 7 (f) explicitly states) to be balanced with the rights and freedoms of data subjects, which the Court had stressed must be “effective and complete” in this context.  In contrast, Google’s practice of essentially unsafeguarded disclosure of the data to Webmasters could result in the effective extinguishment of the data subject’s rights since Webmasters had variously republished the deindexed page against another URL, published lists of all URLs deindexed or even published a specific news story on the de-listing decision.
  • Fourth, Google argued that its practice was an instantiation of the data subject’s right to obtain from a controller “notification to third parties to whom the data have been disclosed of any rectification, erasure or blocking” carried out in compliance with the right to erasure “unless this proves impossible or involves a disproportionate effort” (A. 12 (c)). The Spanish regulator pointed out that, since the data in question had originally been received from rather than disclosed to Webmasters, this provision was not even materially engaged.  In any case, Google’s interpretation of it was in conflict with its purpose, which was to ensure the full effectiveness of the data subject’s right to erasure.

Having established an infringement of the law, the Spanish regulator had to consider whether to pursue this as an illegal communication of data (judged ʻvery seriousʼ under Spanish data law) or only as a breach of secrecy (which is judged merely ʻseriousʼ).  In the event, it plumped for the latter and issued a fine of €150K, which was in the mid-range of that set out for ʻseriousʼ infringements.  As previously noted, it also enjoined Google to adopt measures to prevent recurrence of these legal failings and required that these be communicated to the Spanish DPA.

Analysis

The Spanish DPA’s action tackles a systematic practice which has every potential to fundamentally undermine the practical enjoyment of de-listing rights and is therefore at least as significant as the ongoing regulatory developments in France which relate to the geographical scope of those rights.  It was entirely right to find that personal data had been disseminated, that this had been done without consent, that the processing had nothing to do with the right (which, in any case, is not an obligation) of data subjects to have third parties notified in certain circumstances and that this processing was “incompatible” with the initial purpose of data collection, which was to ensure data subjects’ legal rights to de-listing.

It is true that the Resolution was too quick to dismiss the idea that original Webmasters do have “legitimate interests” in guarding against unfair de-listings of content.  Even in the absence of a de jure right to such listings, these interests are grounded in their fundamental right to “impart” information (and ideas), an aspect of freedom of expression (ECHR, art. 10; EU Charter, art. 11).   In principle, these rights and interests justify search engines making contact with original Webmasters, at least, as the Working Party itself indicated, in particularly difficult de-listing cases.

However, even here dissemination must (as the Working Party also emphasized) properly safeguard the rights and interests of data subjects.  At the least this should mean that, prior to any dissemination, a search engine should conclude a binding and effectively policeable legal contract prohibiting Webmasters from disseminating the data in identifiable form.  (In the absence of this, those Webmasters outside European jurisdiction or engaged in special/journalistic expression cannot necessarily themselves be criticized for making use of the information received in other ways).

In stark contrast to this, Google currently engages in blanket and essentially unsafeguarded reporting to Webmasters, a practice which has resulted in a breakdown of effective protection for data subjects not just in Spain but also in other European jurisdictions such as the UK – see here and here.  Having been put on such clear notice by this Spanish action, it is to be hoped that Google will seriously modify its practices.  If not, then regulators would have every right to deal with this in the future as a (yet more serious) illegal and intentional communication of personal data.

Future Spanish Regulatory Vistas

The cases investigated by the Spanish DPA noted in this Resolution also involved the potential dissemination of data to the Lumen transparency database (formerly Chilling Effects) which is hosted in the United States, the potential for subsequent publication of identifiable data on its publicly accessible database and even the potential for a specific notification to be provided to Google users conducting relevant name searches detailing that “[i]n response to a legal requirement sent to Google, we have removed [X] result(s) from this page.  If you wish, you can get more information about this requirement on LumenDatabase.org.”

This particular investigation, however, failed to uncover enough information on these important matters.  Google was adamant that it had not yet begun providing information to Lumen in relation to data protection claims post-Google Spain, but stated that it was likely to do so in the future in some form.  Meanwhile, it indicated that the specific Lumen notifications which were found on name searches regarding two of the claimants concerned pre-Google Spain claims variously made under defamation, civil privacy law and also data protection.  (Even putting to one side the data protection claim, such practices would still amount to a processing of personal data and also highlight the often marginal and sometimes arbitrary distinctions between these closely related legal causes of action).

Given these complications, the Spanish regulator decided not to proceed directly regarding these matters but rather to open more wide-ranging investigatory proceedings concerning both Google’s practices in relation to disclosure to Lumen and also the notification provided to search users.  Both sets of investigatory proceedings are ongoing.  Such continuing work highlights the vital need for active regulatory engagement to ensure that the individual rights of data subjects are effectively secured.  Only in this way will basic European data protection norms continue to ʻcatch upʼ not just with Google but with developments online generally.

David Erdos, University Lecturer in Law and the Open Society, Faculty of Law & WYNG Fellow in Law, Trinity Hall, University of Cambridge.

(I am grateful to Cristina Pauner Chulvi and Jef Ausloos for their thoughts on a draft of this piece.)

This post first appeared on the Inforrm blog. 

Call for papers: Critical Research in Information Law

Deadline 15 March 2017

The Information Law Group at the University of Sussex is pleased to announce its annual PhD and Work in Progress Workshop on 3 May 2017. The workshop, chaired by Professor Chris Marsden, will provide doctoral students with an opportunity to discuss current research and receive feedback from senior scholars in a highly focused, informal environment. The event will be held in conjunction with the Work in Progress Workshop on digital intermediary law.

We encourage original contributions critically approaching current information law and policy issues, with particular attention on the peculiarities of information law as a field of research. Topics of interest include:

  • internet intermediary liability
  • net neutrality and media regulation
  • surveillance and data regulation
  • 3D printing
  • the EU General Data Protection Regulation
  • blockchain technology
  • algorithmic/AI/robotic regulation
  • platform neutrality, ‘fake news’ and ‘anti-extremism’ policy.

How to apply: Please send an abstract of 500 words and brief biographical information to Dr Nicolo Zingales  by 15 March 2017. Applicants will be informed by 30 March 2017 if selected. Submission of draft papers by selected applicants is encouraged, but not required.

Logistics: 11am-1pm 3 May in the Moot Room, Freeman Building, University of Sussex.

Afternoon Workshop: all PhD attendees are registered to attend the afternoon workshop 2pm-5.30pm F22 without charge (programme here).

Financial Support: The Information Law Group can reimburse economy-class rail fares within the UK. Please inform the organizers if you need financial assistance.

Book launch: ‘Private Power, Online Information Flows and EU Law: Mind The Gap’

Book launch at: The Conservatory, Bloomsbury Publishing Plc
50 Bedford Square
London
WC1B 3DP
6pm – 8pm, 31 January 2017

This event is FREE but registration is required on Eventbrite.

Speaker: Angela Daly

With guest speakers: Professor Chris Marsden, University of Sussex; Dr Orla Lynskey, London School of Economics and Political Science

About the Book

This monograph examines how European Union law and regulation address concentrations of private economic power which impede free information flows on the Internet to the detriment of Internet users’ autonomy. In particular, competition law, sector specific regulation (if it exists), data protection and human rights law are considered and assessed to the extent they can tackle such concentrations of power for the benefit of users.

Using a series of illustrative case studies, of Internet provision (including the net neutrality debate), search, mobile devices and app stores, and the cloud, the work demonstrates the gaps that currently exist in EU law and regulation. It is argued that these gaps exist due, in part, to current overarching trends guiding the regulation of economic power, namely neoliberalism, by which only the situation of market failure can invite ex ante rules, buoyed by the lobbying of regulators and legislators by those in possession of such economic power to achieve outcomes which favour their businesses. Given this systemic, and extra-legal, nature of the reasons as to why the gaps exist, solutions from outside the system are proposed at the end of each case study.

Praise for the Book

‘This is a richly textured, critically argued work, shedding new light on case studies in information law which require critical thinking. It is both an interesting series of case studies (notably cloud computing, app stores and search) that displays original and deeply researched scholarship and a framework for critiquing neoliberal competition policy from a prosumerist and citizen-oriented perspective.’ – Professor Chris Marsden, University of Sussex.

Information Law and Policy Centre’s annual workshop highlights new challenges in balancing competing human rights


Our annual workshop and lecture – held earlier this month – brought together a wide range of legal academics, lawyers, policy-makers and interested parties to discuss the future of human rights and digital information control.

A number of key themes emerged in our panel sessions including the tensions present in balancing Article 8 and Article 10 rights; the new algorithmic and informational power of commercial actors; the challenges for law enforcement; the liability of online intermediaries; and future technological developments.

The following write-up of the event offers a very brief summary report of each panel and of Rosemary Jay’s evening lecture.

Morning Session

Panel A: Social media, online privacy and shaming

Helen James and Emma Nottingham (University of Winchester) began the panel by presenting their research (with Marion Oswald) into the legal and ethical issues raised by the depiction of young children in broadcast TV programmes such as The Secret Life of 4, 5 and 6 Year Olds. They were also concerned with the live-tweeting which accompanied these programmes, noting that very abusive tweets could be directed towards children taking part in the programmes.


‘Tracking People’ research network established

A new research network has been established to investigate the legal, ethical, social and technical issues which arise from the use of wearable, non-removable tagging and tracking devices.

According to the network’s website, tracking devices are increasingly being used to monitor a range of individuals including “offenders, mental health patients, dementia patients, young people in care, immigrants and suspected terrorists”.

The interdisciplinary network is being hosted at the University of Leeds and aims to foster “new empirical, conceptual, theoretical and practical insights into the use of tracking devices”.

The network is being coordinated by Professor Anthea Hucklesby and Dr Kevin MacNish. It will bring together academics, designers, policy-makers and practitioners to explore critical issues such as:

  • privacy;
  • ethics;
  • data protection;
  • efficiency and effectiveness;
  • the efficacy and suitability of the equipment design;
  • the involvement of the private sector as providers and operators;
  • the potential for discriminatory use.

Readers of the Information Law and Policy Centre blog might be particularly interested in a seminar event scheduled for April 2017 which will consider the “legal and ethical issues arising from actual and potential uses of tracking devices across a range of contexts”.

For further information, check out the network’s website or email the team to join the network.

Full Programme: Annual Workshop and Evening Lecture

Restricted and Redacted: Where now for human rights and digital information control?

The full programme for the Information Law and Policy Centre’s annual workshop and lecture on Wednesday 9th November 2016 is now available (see below).

For both events, attendance will be free of charge thanks to the support of the IALS and our sponsor, Bloomsbury’s Communications Law journal.

To register for the afternoon workshop please visit this Eventbrite page.
To register for the evening lecture please visit this Eventbrite Page.

Please note that for administrative purposes you will need to book separate tickets for the afternoon and evening events if you would like to come to both events.

PROGRAMME

10.45am: REGISTRATION AND COFFEE 

11.15am: Welcome

  • Judith Townend, University of Sussex
  • Paul Wragg, University of Leeds
  • Julian Harris, Institute of Advanced Legal Studies, University of London

11.30am-1pm: PANEL 1 – choice between A and B

Panel A: Social media, online privacy and shaming

Chair: Asma Vranaki, Queen Mary University of London

  1. David Mangan, City, University of London, Dissecting Social Media: Audience and Authorship
  2. Marion Oswald, Helen James, Emma Nottingham, University of Winchester, The not-so-secret life of five year olds: Legal and ethical issues relating to disclosure of information and the depiction of children on broadcast and social media
  3. Maria Run Bjarnadottir, Ministry of the Interior in Iceland, University of Sussex, Does the internet limit human rights protection? The case of revenge porn
  4. Tara Beattie, University of Durham, Censoring online sexuality – A non-heteronormative, feminist perspective

Panel B: Access to Information and protecting the public interest

Chair: Judith Townend, University of Sussex

  1. Ellen P. Goodman, Rutgers University, Obstacles to Using Freedom of Information Laws to Unpack Public/Private Deployments of Algorithmic Reasoning in the Public Sphere
  2. Felipe Romero-Moreno, University of Hertfordshire, ‘Notice and staydown’, the use of content identification and filtering technology posing a fundamental threat to human rights
  3. Vigjilenca Abazi, Maastricht University, Mapping Whistleblowing Protection in Europe: Information Flows in the Public Interest

1-2pm: LUNCH 

2-3.30pm: PANEL 2 – choice between A and B

Panel A: Data protection and surveillance

Chair: Nora Ni Loideain, University of Cambridge

  1. Jiahong Chen, University of Edinburgh, How the Best Laid Plans Go Awry: The (Unsolved) Issues of Applicable Law in the General Data Protection Regulation
  2. Jessica Cruzatti-Flavius, University of Massachusetts, The Human Hard Drive: Name Erasure and the Rebranding of Human Beings
  3. Wenlong Li, University of Edinburgh, Right to Data Portability (RDP)
  4. Ewan Sutherland, Wits University, Wire-tapping in the regulatory state – changing times, changing mores

Panel B: Technology, power and governance

Chair: Chris Marsden, University of Sussex

  1. Monica Horten, London School of Economics, How Internet structures create closure for freedom of expression – an exploration of human rights online in the context of structural power theory
  2. Perry Keller, King’s College, London, Bringing algorithmic governance to the smart city
  3. Marion Oswald, University of Winchester and Jamie Grace, Sheffield Hallam University, Intelligence, policing and the use of algorithmic analysis – initial conclusions from a survey of UK police forces using freedom of information requests as a research methodology
  4. Allison Holmes, Kent University, Private Actor or Public Authority? How the Status of Communications Service Providers affects Human Rights

3.30-5pm: PANEL 3 – choice between A and B

Panel A: Intermediary Liability

Chair: Christina Angelopoulos, University of Cambridge

  1. Judit Bayer, Miskolc University, Freedom and Diversity on the Internet: Liability of Intermediaries for Third Party Content
  2. Mélanie Dulong de Rosnay, Félix Tréguer, CNRS-Sorbonne Institute for Communication Sciences and Federica Giovanella, University of Trento, Intermediary Liability and Community Wireless Networks Design Shaping
  3. David Rolph, University of Sydney, Liability of Search Engines for Publication of Defamatory Matter: An Australian Perspective

Panel B: Privacy and anonymity online

Chair: Paul Wragg, University of Leeds

  1. Gavin Phillipson, University of Durham, Threesome injuncted: has the Supreme Court turned the tide against the media in online privacy cases?
  2. Fiona Brimblecombe, University of Durham, European Privacy Law
  3. James Griffin, University of Exeter and Annika Jones, University of Durham, The future of privacy in a world of 3D printing

5-6pm: TEA BREAK / STRETCH YOUR LEGS

6-8pm: EVENING LECTURE AND DRINKS

Lecture Title: Heads and shoulders, knees and toes (and eyes and ears and mouth and nose…): The impact of the General Data Protection Regulation on use of biometrics.

Biometrics are touted as one of the next big things in the connected world. Specific reference to biometrics and genetic data has been included for the first time in the General Data Protection Regulation. How does this affect existing provisions? Will the impact of the Regulation be to encourage or to restrict the development of biometric technology?

  • Speaker: Rosemary Jay, Senior Consultant Attorney at Hunton & Williams and author of Sweet & Maxwell’s Data Protection Law & Practice.
  • Chair: Professor Lorna Woods, University of Essex
  • Respondents: Professor Andrea Matwyshyn, Northeastern University and Mr James Michael, IALS

Information Law and Policy Centre Annual Lecture and Workshop

An afternoon workshop and evening lecture to be given by leading information and data protection lawyer Rosemary Jay.

Restricted and Redacted: Where now for human rights and digital information control?

The Information Law and Policy Centre is delighted to announce that bookings are now open for its annual workshop and lecture on Wednesday 9th November 2016, this year supported by Bloomsbury’s Communications Law journal.

For both events, attendance will be free of charge thanks to the support of the IALS and our sponsor, although registration will be required as places are limited.

To register for the afternoon workshop please visit this Eventbrite page.

To register for the evening lecture please visit this Eventbrite Page.

Please note that for administrative purposes you will need to book separate tickets for the afternoon and evening events if you would like to come to both events.

AFTERNOON WORKSHOP/SEMINAR 
11am – 5pm (lunch and refreshments provided)

For the afternoon part of this event we have an excellent set of presentations lined up that consider information law and policy in the context of human rights. Speakers will offer an original perspective on the way in which information and data interact with legal rights and principles relating to free expression, privacy, data protection, reputation, copyright, national security, anti-discrimination and open justice.

We will be considering topics such as internet intermediary liability, investigatory and surveillance powers, media regulation, freedom of information, the EU General Data Protection Regulation, whistleblower protection, and ‘anti-extremism’ policy. The full programme will be released in October.

EVENING LECTURE BY ROSEMARY JAY, HUNTON & WILLIAMS
6pm-7.30pm (followed by reception)

The afternoon workshop will be followed by a keynote lecture to be given by Rosemary Jay, senior consultant attorney at Hunton & Williams and author of Sweet & Maxwell’s Data Protection Law & Practice.

Brexit: “You don’t know what you’ve got till it’s gone”


In the following editorial, Professor Lilian Edwards considers the implications of the Brexit vote for information law and assesses the mood amongst the academic community in the aftermath of the EU Referendum.

The article was first published in Volume 13, Issue 2 of SCRIPT-ed: A Journal of Law, Technology and Society. Professor Edwards’ views do not represent those of the Information Law and Policy Centre or the Institute of Advanced Legal Studies. 

On 23 June 2016 a slim majority of UK voters decided we should leave the EU in one of the great political upsets of British political history. On 24 June, the next day, CREATe,[1] the RCUK copyright and business models centre which I have helped run since 2012, ran a one-day festival at the Royal Society of the Arts in London. This was designed to be a showcase and celebration of four years of working at the cutting edge of copyright and how it either helps or hinders the creative industries and arts. Hundreds of academics signed up to show and see, including the Director of CREATe, Martin Kretschmer of Glasgow University, from Germany by birth, and many others from all over Europe and beyond.

It was a classic international IT/intellectual property event: analysing laws made throughout the world to regulate globalised cultural markets, transnational data and product flows, disruptive technologies that disregard borders, and audiences as likely to listen to music made in Brazil via decentralised P2P networks, as watch Netflix series made in the US, or use smartphones made in Japan to watch Hindi pop videos on YouTube.

In the event, the CREATe Festival became more of a wake. Reportedly, experienced academics, who thought themselves hardened to trauma by years of bombardment from REF, TEF and NSS, were almost in tears at the first session. This writer, derelict of duty, was not there to corroborate, still staring like a rabbit in the headlights at the TV in a hotel bedroom in Docklands, where the dominant tech, business and financial workers were almost equally in shock.

So, Brexit. As the dust not so much settles as temporarily accumulates while we work out what on earth happens next, what are the implications for IT law and UK academe? Are they really as bad as they seemed that morning?

Pokémon Go has revealed a new battleground for virtual privacy

Andres Guadamuz, University of Sussex

People have been lingering outside Boon Sheridan’s house all through the night. The designer lives in an old church in Massachusetts that has been designated a “gym” in the new smartphone game Pokémon Go. Because the game requires players to visit places in the real world, Sheridan now has to put up with people regularly stopping outside his building to play.

It has got to the point where he has started wondering if there is anything the law can do in situations like this. He wrote on Twitter: “Do I even have rights when it comes to a virtual location imposed on me? Businesses have expectations, but this is my home.” This problem of virtual activities impinging on physical spaces is only likely to grow with the increasing popularity of the augmented reality used in games such as Pokémon Go to overlay digital landscapes on real ones. But there may be a way to deal with this before it becomes a serious legal problem for more people.

Pokémon Go encourages players to interact with their actual environment by using realistic maps of their surroundings as part of the game. Certain landmarks, monuments and public buildings are tagged as “stops”, where players can collect items, and some public spaces including churches, parks and businesses are tagged as “gyms”, where users can battle each other.

It is the tagging element that has prompted a few interesting legal questions about the role of augmented reality. The game’s developer, Niantic, is using a combination of data from Google Maps and user-generated tags collected from an earlier game called Ingress. This data is used to identify real-life spots as either a stop or a gym. But what happens when the data mistakenly identifies a house as a public space, as happened to Sheridan?

As it turns out, Niantic offers people the chance to highlight problems with a location. And in the grand scheme of things, whether a person’s house is mis-tagged in a game does not seem like something worthy of new laws, particularly when the developer offers to correct any errors. But Pokémon Go is just the beginning. The game has proven the potential of augmented reality to appeal to a very large audience, so we can expect many other applications of the technology to come our way.
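
To make the tagging and correction mechanism concrete, here is a minimal, hypothetical sketch of the kind of location record such a game might assemble from map data and user-generated tags, and of how a mis-tagged private home could be flagged for review. Niantic’s actual data model and workflow are not public; every field and function name below is assumed.

```python
from dataclasses import dataclass, field

# Hypothetical model of a location tag derived from map data and
# user-submitted points of interest (e.g. from an earlier game).
@dataclass
class LocationTag:
    name: str
    lat: float
    lon: float
    category: str                                  # e.g. "gym" or "stop"
    sources: list = field(default_factory=list)    # where the tag came from
    disputed: bool = False                          # set when an occupier reports a mis-tag


def report_mistag(tag: LocationTag, reason: str) -> None:
    """Record that a real-world occupier has flagged the tag as wrong,
    e.g. a private home classified as a public 'gym'."""
    tag.disputed = True
    tag.sources.append(f"dispute: {reason}")


# The church-turned-home scenario described above (coordinates invented):
home = LocationTag("Old church, now a private residence", 42.1, -72.6,
                   category="gym", sources=["user-generated tag"])
report_mistag(home, "residential property, not a public space")
print(home.disputed)  # True -> tag queued for developer review / correction
```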

The wild success of location-based gaming may bring about a horde of imitators, so expect a new generation of augmented reality gaming to hit the app stores soon. And the technology’s potential also goes beyond gaming, so we can expect more mainstream applications of geo-tagging and location-based interaction, especially with the growth of wearable technology such as fitness trackers. You can imagine that soon we will have a world in which every house, every car, even every person could come with an added virtual tag full of data. The potential for innovation in this area is staggering.

But what if your house is tagged in a global database without your permission and you value your privacy so do not want any passersby to know that you live there? Or what if a commercially-sensitive database identifies your business with incorrect data and you cannot reach the developer or they refuse to amend it? People looking for businesses in your area may miss you and go to a competitor that is correctly listed. Even more worrying, what if your house was previously occupied by a sex offender and is tagged in an outdated database with that information?

The problems would go far beyond what is happening with Sheridan’s house. These cases could have real negative effects on people’s lives, privacy, or business prospects.

The potential for trouble will be worse with the launch of apps that allow users to tag public or private buildings themselves. Why will abusers and trolls bother spray-painting a house, when they can geo-tag it maliciously? Paint washes away, but data may be more difficult to erase.

My proposal is to extend data protection legislation to virtual spaces. At the moment, data protection is strictly personal as it relates to any information about a specific person, known as a data subject. The data subject has a variety of rights, such as having the right to access their data and rectify and erase anything that is inaccurate or excessive.

Protecting objects

Under my proposal, the data subject’s rights would remain as they are, but the law would contain a new definition, that of the data object. This relates to data about a specific location. The rights of data objects would be considerably more limited than those of a data subject. But classifying them like this would take advantage of the data-protection mechanisms that already exist for when someone is intrinsically linked to a location.

In other words, just tagging a location on an augmented reality database wouldn’t violate data protection law. But mis-tagging a location as a public space in a way that could impinge on people’s enjoyment of that location could trigger action by the regulator to have the tag amended, removed or even erased. This would be especially useful for private spaces such as Sheridan’s house. If the app developer fails to make a change to the data, the property owner could make a request to the data protection authority, who would then force developers to change the data – or face fines.
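
To make this concrete, here is a rough sketch of how the proposed ʻdata objectʼ regime might operate in practice. The record fields, status values and escalation steps are illustrative assumptions only, not an existing legal mechanism or API.

```python
from enum import Enum, auto

class TagStatus(Enum):
    ACTIVE = auto()
    AMENDED = auto()
    REMOVED = auto()

# A "data object": data about a specific location rather than a person,
# carrying a narrower set of rights than a data subject would have.
class DataObject:
    def __init__(self, location: str, tag: str):
        self.location = location
        self.tag = tag
        self.status = TagStatus.ACTIVE

    def request_amendment(self, developer_responds: bool) -> str:
        """Ask the developer to fix the tag first; if they fail to act,
        escalate to the data protection authority, which can compel a
        change or impose a fine."""
        if developer_responds:
            self.status = TagStatus.AMENDED
            return "developer amended the tag"
        # Escalation path envisaged by the proposal above.
        self.status = TagStatus.REMOVED
        return "regulator ordered removal (developer risks a fine)"


house = DataObject("private residence", "public gym")
print(house.request_amendment(developer_responds=False))
```

The point of the sketch is simply that the escalation path mirrors existing data protection enforcement: ask the controller first, then the regulator.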

There are limits to this proposal. Such a regime would only apply to companies based in the same country as the data protection regulator. So, for example, European countries wouldn’t be able to force Niantic to make changes to Pokémon Go’s tags, because the company is based in the US. There would also need to be strict restrictions on exactly what counts as a data object and what is worth amending or deleting, otherwise the system could be abused.

But one thing is already certain: Pokémon Go is just the beginning of a new world of location-based data applications, and we need to find better ways to protect our digital rights in that space.

Andres Guadamuz, Senior Lecturer in Intellectual Property Law, University of Sussex

This article was originally published on The Conversation. Read the original article.

Photo: Eduardo Woo, CC BY-SA 2.0

Analysing the Advocate General’s opinion on data retention and EU law

Last week, the Advocate General published an opinion on joined cases referred to the European Court of Justice concerning the compatibility of the UK’s and Sweden’s data retention laws with EU law.

In a detailed analysis, Lorna Woods, Professor of Internet Law at the University of Essex, considers the potential implications of the opinion for national data retention regimes (including the UK’s Investigatory Powers Bill) and the legal tensions which arise from the Advocate General’s opinion. This post first appeared on Professor Steve Peers’ EU Law Analysis blog.

The Advocate General’s opinion concerns two references from national courts, both of which arose in the aftermath of the invalidation of the Data Retention Directive (Directive 2006/24) in Digital Rights Ireland and both of which deal with whether the retention of communications data en masse complies with EU law.

The question is important for the regimes that triggered the references, but in the background is a larger question: can mass retention of data ever be human rights compliant? While the Advocate General clearly states that this is possible, things may not be that straightforward.