Tag Archives: lorna woods

Analysing the Advocate General’s opinion on data retention and EU law

Last week, the Advocate General published an opinion on a case brought to the European Court of Justice concerning the compatibility of the UK’s and Sweden’s data retention laws with EU law.

In a detailed analysis, Lorna Woods, Professor of Internet Law at the University of Essex, considers the potential implications of the opinion for national data retention regimes (including the UK’s Investigatory Powers Bill) and the legal tensions which arise from the Advocate General’s opinion. This post first appeared on Professor Steve Peers’ EU Law Analysis blog.

The Advocate General’s opinion concerns two references from national courts, both arising in the aftermath of the invalidation of the Data Retention Directive (Directive 2006/24) in Digital Rights Ireland, and deals with whether the retention of communications data en masse complies with EU law.

The question is important for the regimes that triggered the references, but in the background is a larger question: can mass retention of data ever be human rights compliant? While the Advocate General clearly states this is possible, things may not be that straightforward.

Lorna Woods: An overview of the Investigatory Powers Bill report by the Joint Committee on Human Rights

In this post, Professor Lorna Woods, University of Essex and Senior Associate Research Fellow at the Institute of Advanced Legal Studies, considers a report by the Joint Committee on Human Rights on the Investigatory Powers Bill.

The Joint Committee has reported on the IPB. In doing so, it has made clear that this is an expedited report to aid the bill’s hasty progress through Parliament. The Joint Committee does not suggest that its review covers all the issues, nor does it rule out returning to them. The Joint Committee discussed issues arising under seven headings: bulk powers; thematic warrants; modifications; MPs and the Wilson Doctrine; legal professional privilege (LPP); journalists’ sources; and oversight.

Update on Information Law and Policy Centre’s contribution to Investigatory Powers debate

As previously reported on this blog, our Information Law and Policy Centre (ILPC) at IALS has facilitated an ad hoc research group of academics and practitioners to contribute to the ongoing policy debate on surveillance following publication of the government’s Draft Investigatory Powers Bill. Members of this group published a clause-by-clause review examining the provenance of the Bill’s clauses – that is, whether they come from existing legislation or are newly introduced.

Lorna Woods, IALS senior associate research fellow and professor in law at the University of Essex, then submitted a revised version in her evidence to the joint select committee scrutinising the Bill. The committee used her evidence in its report published in February, for a table describing each investigatory capability in the draft bill (pp.32-37).

Separately, members of the Information Law and Policy Centre’s advisory board including Professor Lilian Edwards, Strathclyde University and Dr Lawrence McNamara, Bingham Centre for the Rule of Law, have signed an open letter published in the Telegraph calling on the government to give the Investigatory Powers Bill, which was introduced to the House of Commons on 1st March, the time it needs and not rush it through Parliament.

Members of the Centre have also participated in related events: Information Law and Policy Centre director Dr Judith Townend spoke at a symposium on the Bill at the University of Cambridge on 5 February 2016, and on 8th March, acted as discussant in an event on surveillance and human rights at Senate House, as part of a Seminar Series organised by the Institute of Commonwealth Studies and the Human Rights Consortium.  Other speakers included Kirsty Brimelow QC and Silkie Carlo, policy officer in technology and surveillance at Liberty.

Some things old, some things new: A clause-by-clause review of the Draft Investigatory Powers Bill

Soon after the publication of the Draft Investigatory Powers Bill in November, a number of privacy, surveillance and freedom of expression specialist academics and practitioners gathered at the Institute of Advanced Legal Studies to discuss the detail and the main issues.

Fairly quickly it was agreed that a clause-by-clause review of legislative sources would be a useful resource, to inform and complement wider commentary and committee submissions. Under Professor Lorna Woods’ stewardship, we divided up the Bill and allocated compiling, reviewing and administrative roles between us.

Given the length of the main Bill document (299 pages), plus all the accompanying material and relevant legislation and reviews to consider, it was an ambitious task. But we have managed to (just) meet our pre-Christmas deadline, and today have published a set of working documents that identify the provenance of as many of the clauses in the draft Investigatory Powers Bill as possible.

We have taken the view that the clauses can be ascribed to one of three groups:

  • The same as a pre-existing provision (or functionally equivalent);
  • Completely new; or
  • Amended/extended.

Where there are pre-existing sources, we have highlighted the relevant provision [see list]; for those that are completely new there are no such sources, but we have included references to the three reviews published in 2015: the Anderson Investigatory Powers Review, the ISC Privacy and Security report and the RUSI Independent Surveillance Review. As regards this latter aspect, only a brief sketch has been included; it is safe to say that more detail from the reports could be pulled through were a more detailed analysis to be undertaken. The aim of this project was not, however, to provide such an analysis, but rather to provide a tool to assist others seeking to undertake such projects.

Although our primary objective related to the identification of relevant sources, as part of the project we have flagged up the significance of the changes, as well as issues where we were not sure of the consequences of the drafting or changes identified. This, we hope, will give food for thought for others engaged in this area. While one of the stated aims of this legislative endeavour is to clarify the terms on which surveillance may take place, the resulting draft is still long and complex, with parts of the old, fragmented system for surveillance still remaining in place.

Follow the links below for a Part-by-Part review of drafting provenance. The chapters of some Parts have been split into different Google documents, which you can view and download. These working documents may be subject to change, following further assessment. Comments/suggestions to: ipbillresearchgroup@gmail.com. 

THEMES

Introduction of oversight
One of the important novelties of the draft IPB is the introduction of oversight mechanisms (via the Judicial Commissioner process: the ‘double lock’ mechanism, and the consolidation of various external review bodies into a new body, the IPC). While this is significant in terms of accountability and control, there will be questions as to what the standards of judicial review actually are and whether ex post facto review is sufficient – questions that become increasingly important in the light of Grand Chamber judgments from both European courts regarding mass surveillance and technical bypassing of oversight procedures (e.g. Schrems, Digital Rights Ireland, Zakharov). There are also questions about the independence of the IPC and the scope of his or her review functions, and regarding the operation of the new error reporting provisions.

Standardisation of warrant process
Looking at the warrant process, similar ideas can be seen recurring across successive types of warrant – the length of warrants, and the processes for renewal and cancellation. This is probably advantageous from the perspective of transparency and accessibility. Nonetheless, while the oversight was built on a common structure, there were small differences in the precise elaboration of that structure across the various parts of the draft IPB, for example in the approach to material obtained under a cancelled warrant. In sum, there is not just one, uniform system, despite the strong similarity between the various parts of the bill. Further, the impact of the new structures varies depending on the regime each replaces. So while it is no doubt a good thing that bulk interception warrants are limited to six months, this also means that some pre-existing warrants will be extended from the current three months.

Normalisation of techniques
This ‘standardisation’ process also means that things that seem to have been limited under RIPA to interception warrants have been applied across the whole range of warrants under the IPB – a sort of normalisation of those techniques (e.g. capability maintenance and national security notices). This takes place against a background in which there are new forms of warrant (or perhaps existing forms of warrant are recognised and put on a specific statutory footing).

Impact of definitions
The definitions are very important as they determine the scope of application of particular provisions. The definitions have been changed, perhaps in response to technological and market developments. There are some questions as to the precise scope of some of these concepts (instances of difficult areas were given in the evidence to the Science and Technology Committee, for example). Because of their systemic effect, however, changes to definitions have far-reaching consequences for the meaning, and consequently the scope, of various powers; indeed, some provisions which appear not to have changed in terms of the wording used will have changed because of changes to the definitions of those words. Careful reading is required to understand the significance of this.

Not a totally consolidated system
The introduction to the bill emphasised that the aim of the bill is to consolidate the regime, so that provisions enabling surveillance are not scattered across a range of instruments, some of which were arguably not designed for that purpose, empowering a wide range of authorities to intrude. Certainly, the bill goes some way in this direction, enclosing some behaviours within a detailed oversight regime and foreclosing the use of some general powers. Nonetheless, key general powers remain – such as those in the Police Act and the Intelligence Services Act – although some attempt has been made to curtail their use in circumstances falling within the scope of this Bill.

CONTRIBUTORS

This project was put together with the support of the Information Law and Policy Centre at the Institute of Advanced Legal Studies (IALS). The team, led by Professor Lorna Woods, was: Andrew Cormack, Ray Corrigan, Julian Huppert, Nora Ní Loideain, Eleanor Mitchell, Marion Oswald, Javier Ruiz Diaz, Jessica Simor, Graham Smith, Judith Townend, Caroline Wilson Palow, and Ian Walden. A wider group of  specialist academics and practitioners have been involved in discussions over email and at two meetings held at the IALS in autumn 2015.

Further resources

Lorna Woods: ECtHR case report and comment – Roman Zakharov v Russia (Grand Chamber)

In this post Lorna Woods, professor of internet law, University of Essex and senior associate research fellow at the Institute of Advanced Legal Studies, considers the ECtHR’s judgment  in Roman Zakharov v. Russia (47143/06) [GC] handed down on 4 December 2015.

Introduction

The European Court has heard numerous challenges to surveillance regimes, both individual and mass surveillance, with mixed results over the years. Following the Snowden revelations, the question was whether the ECtHR would take a hard line, particularly as regards mass surveillance, given its suggestion in Kennedy that indiscriminate acquisition of vast amounts of data should not be permissible. Other human rights bodies have condemned this sort of practice, as can be seen in UN Resolution 68/167 on the Right to Privacy in the Digital Age. Even within the EU there has been concern, as can be seen in cases such as Digital Rights Ireland and more recently Schrems.

Facts

Zakharov, a publisher and a chairman of an NGO campaigning for media freedom and journalists’ rights, sought to challenge the Russian system for permitting surveillance in the interests of crime prevention and national security. Z claimed that the privacy of his communications across mobile networks was infringed as the Russian State, by virtue of Order No. 70, had required the network operators to install equipment which permitted the Federal Security Service to intercept all telephone communications without prior judicial authorisation.

This facilitated blanket interception of mobile communications. Attempts to challenge this, and to ensure that access to communications was restricted to authorised personnel, were unsuccessful at national level. The matter was then brought before the European Court of Human Rights. Zakharov argued that the laws relating to monitoring infringed his right to private life under Article 8; that parts of these laws were not accessible; and that there were no effective remedies (thus also infringing Art. 13 ECHR).

Judgment

The first question was whether the case was admissible. The Court will usually not rule on questions in abstracto, but rather on the application of rules to a particular situation. This makes challenges to the existence of a system, rather than its use, problematic. The Court has long recognised that secret surveillance can give rise to particular features that may justify a different approach. Problematically, there were two lines of case law, one of which required the applicant to show a ‘reasonable likelihood’ that the security services had intercepted the applicant’s communications (Esbester) and which favoured the Government’s position, and the other which suggested the menace provided by a secret surveillance system was sufficient (Klass) and which favoured the applicant.

The Court took the opportunity to try to resolve these potentially conflicting decisions, developing its reasoning in Kennedy. It accepted the principle that legislation can be challenged, subject to two considerations: whether the applicant potentially falls within the scope of the system; and the level of remedies available. This gives the Court a form of decision matrix in which a range of factual circumstances can be assessed. Where there are no effective remedies, the menace argument set out in its ruling in Klass would be accepted.

Crucially, even where there are remedies, an applicant can still challenge the legislation if ‘due to his personal situation, he is potentially at risk of being subjected to such measures’ [para 171]. This requirement of ‘potentially at risk’ seems lower than the ‘reasonable likelihood’ test in the earlier case of Esbester. The conditions were satisfied in this case as it has been recognised that mobile communications fall within ‘private life’ and ‘correspondence’ (see Liberty, para 56, cited here para 173).

This brought the Court to consider whether the intrusion could be justified. Re-iterating the well-established principles that, to be justified, any interference must be in accordance with the law, pursue a legitimate aim listed in Article 8(2) and be necessary in a democratic society, the Court considered each in turn.

The requirement of lawfulness has a double aspect, formal and qualitative. The challenged measure must be based in domestic law, but it must also be accessible to the person concerned and foreseeable as to its effects (see e.g. Rotaru). While these principles are generally applicable to all cases under Article 8 (and applied analogously to other rights, such as Articles 9, 10 and 11 ECHR), the Court noted the specificity of the situation. It stated that:

‘…. domestic law must be sufficiently clear to give citizens an adequate indication as to the circumstances in which and the conditions on which public authorities are empowered to resort to any such measures’ [para 229].

In this, the Court referred to a long body of jurisprudence relating to surveillance, which recognises the specific nature of the threats that surveillance is used to address. In the earlier case of Kennedy for example, the Court noted that ‘threats to national security may vary in character and may be unanticipated or difficult to define in advance’ [para 159].

While the precision required of national law might be lower than the normal standard, the risks of abuse and arbitrariness are clear, so the exercise of any discretion must be laid down by law, both as to its scope and the manner of its exercise. It stated that ‘it would be contrary to the rule of law … for a discretion granted to the executive in the sphere of national security to be expressed in terms of unfettered power’ [para 247]. Here, the Court noted that prior judicial authorisation was an important safeguard [para 249]. The Court gave examples of minimum safeguards [para 231]:

  • The nature of offences which may give rise to an interception order;
  • A definition of the categories of people liable to have their telephones tapped;
  • A limit on the duration of telephone tapping;
  • Protections and procedures for the use, storage and examination of resulting data;
  • Safeguards relating to the communication of data to third parties; and
  • The circumstances in which data/recordings must be erased/destroyed.

Measured against these standards, the Court found that the equipment installed by the secret services keeps no logs or records of intercepted communications, which, coupled with their direct access, rendered any supervisory arrangements incapable of detecting unlawful interceptions. It also found that the emergency procedure provided for in Russian law, which enables interception without judicial authorisation, does not provide sufficient safeguards against abuse.

The Court then considered the principles for assessing whether the intrusion was ‘necessary in a democratic society’, highlighting the tension between the need to protect society and the consequences for that society of the measures taken to protect it. The Court emphasised that it must be satisfied that there are adequate and effective guarantees against abuse.

In this, oversight mechanisms are central, especially since individuals will not – given the secret and therefore unknowable nature of surveillance – be in a position to protect their own rights. The Court’s preference is to entrust supervisory control to a judge. For surveillance to be challenged retrospectively, affected individuals need either to be informed about it or to be able to bring challenges on the basis of a suspicion that surveillance has taken place.

Russian legislation lacks clarity concerning the categories of people liable to have their phones tapped, specifically through the blurring of witnesses with suspects and the fact that the security services have a very wide discretion. The provisions regarding discontinuation of surveillance are omitted in the case of the security services. The provisions regarding the storage and destruction of data allow for the retention of data which is clearly irrelevant; and, as regards those charged with a criminal offence, it is unclear what happens to the material after the trial.

Notably, the domestic courts do not verify whether there is a reasonable suspicion against the person in respect of whose communications the security services have requested interception be permitted. Further, there is little assessment of whether the interception is necessary or justified: in practice it seems that the courts accept a mere reference to national security issues as being sufficient.

The details of the authorisation are also not specified, so authorisations have been granted without specifying, for example, the numbers to be intercepted. The Russian system, which at a technical level allows direct access without the police and security services having to show an authorisation, is particularly prone to abuse. The Court determined that the supervisory bodies were not sufficiently independent. Any effectiveness of the remedies available to challenge interception of communications is undermined by the fact that they are available only to persons who can submit proof of interception – knowledge and evidence of which is hard, if not impossible, to come by.

Comments

By its repeated reference to its extensive earlier case law on surveillance, the Court could be seen as emphasising that there is nothing new in this judgment. Conversely, it could be argued that Zakharov, as a Grand Chamber judgment, operates to reaffirm and highlight points made in previous judgments about the dangers of surveillance and the risk of abuse. The timing is also significant, particularly from a UK perspective. Zakharov was handed down as the draft Investigatory Powers Bill was published. Cases against the UK are pending at Strasbourg; the judgment also follows the ECJ’s ruling in Schrems, with Davis (along with the Swedish Tele2 reference) now pending before that court. The ECtHR noted the Digital Rights Ireland case in its summary of applicable law.

In setting out its framework for decisions, the Court’s requirement of ‘potentially at risk’, even when remedies are available, seems lower than the ‘reasonable likelihood’ test in Esbester. The Court’s concern relates to ‘the need to ensure that the secrecy of surveillance measures does not result in the measures being effectively unchallengeable and outside the supervision of the national judicial authorities and of the Court’ [para 171]. This broad approach to standing is, as noted in Judge Dedov’s separate but concurring opinion, in marked contrast to the approach of the United States Supreme Court in Clapper, where that court ‘failed to take a step forward’ (Opinion, section 4).

The reassessment of ‘victim status’ simultaneously determines standing, the question of the applicability of Article 8 and the question of whether there has been an infringement of that right. The abstract nature of the review then means that a lot falls on the determination of ‘in accordance with the law’, and consequently on the question of whether the measures (rather than individual applications) are necessary in a democratic society. This leads to a close review of the system itself and the safeguards built in. Indeed, it is noteworthy that the Court did not just look at the provisions of Russian law, but also considered how they were applied in practice.

The Court seemed particularly sceptical about broadly determined definitions in the context of ‘national, military, economic or ecological security’, which confer an ‘almost unlimited degree of discretion’ [para 248]. Although the system required prior judicial authorisation [noted at para 259], in this case that was not a sufficient counter to the breadth of the powers. So prior judicial authorisation will not be a ‘get out of gaol free’ card for surveillance systems; there must be real oversight by the relevant authorities.

Further, the Court emphasised the need for the identification of triggering factor(s) for the interception of communications, as otherwise there will be overbroad discretion [para 248]. Moreover, the Court stated that the national authorising authorities must be capable of ‘verifying the existence of a reasonable suspicion against the person concerned’ [paras 260-2], which in the context of technological access to mass communications might be difficult to satisfy. The Court also required that specific individuals or premises be identified. If it applies the same principles to the mass surveillance currently operated in other European states, many systems might be hard to justify.

A further point to note relates to the technical means by which the interception was carried out. The Court was particularly critical of a system which allows the security services and the police the means to have direct access to all communications. It noted that ‘their ability to intercept the communications of a particular individual or individuals is not conditional on providing an interception authorisation to the communications service provider’ [para 268], thereby undermining any protections provided by the prior authorisation system.

Crucially, the police and security services could circumvent the requirement to demonstrate the legality of the interception [para 269]. The problem is exacerbated by the fact that the equipment used does not create a log of the interceptions which again undermines the supervisory authorities’ effectiveness [para 272]. This sort of reasoning could be applied in other circumstances where police and security forces have direct technical means to access content which is not dependent on access via a service provider (e.g. hacking computers and mobiles).

In sum, not only has the Russian system been found wanting in terms of compliance with Article 8, but the Court has also drawn its judgment in terms which raise questions about the validity of other systems of mass surveillance.

  • Professor Lorna Woods is Deputy Director of Research (Impact) at the School of Law, University of Essex and senior associate research fellow at the Institute of Advanced Legal Studies
  • Our blog posts give the view of the author and do not represent the position of the Information Law and Policy Centre or the Institute of Advanced Legal Studies.

Lorna Woods: Safe Harbour – Key Aspects of the ECJ Ruling

On Tuesday (6 October) the Court of Justice of the European Union (ECJ) declared that the Safe Harbour agreement that allows the movement of digital data between the EU and the US was invalid. The case was brought by Max Schrems, an Austrian student and privacy campaigner who, in the wake of the Snowden revelations of mass surveillance, challenged the way in which technology companies such as Facebook transferred data to the US. In this guest post, which originally appeared on the LSE Media Policy Project blog, Professor Lorna Woods of the University of Essex explains some key aspects of the judgment.

This case arises from a challenge to the transfer of personal data from the EU (via Ireland) to the United States, which relied on Commission Decision 2000/520 stating that the Safe Harbour system in place in the United States was ‘adequate’ as permitted by Article 25 of the Data Protection Directive. While the national case challenged this assessment, the view of the Irish data protection authority (DPA) was that it had no freedom to make any other decision – despite the fact that the Irish authorities and courts were of the view that the system did not meet the standards of the Irish constitution – because the European Commission decision was binding on them. The questions of the validity and status of the Decision were referred to the Court of Justice of the European Union (ECJ).

The Advocate General, a senior ECJ official who advises on cases, took the view that the Commission’s decision could not limit the powers of DPAs granted under the directive and that the US system was inadequate, particularly as regards the safeguards against mass surveillance (a more detailed review of the AG’s Opinion can be found here). The ECJ has now ruled, following very swiftly on from the Opinion. The headline: the Commission’s decision is invalid. There is more to the judgment than this.

Powers of DPAs and Competence

The ECJ emphasised that the Commission cannot limit the powers granted by the Data Protection Directive, but at the same time Commission decisions are binding and benefit from a presumption of legality. Nonetheless, especially given the importance of the rights at stake, individuals should be able to complain and ask a DPA to investigate. DPAs remain responsible for oversight of data processing on their territory, which includes the transfer of personal data outside the EU. The ECJ resolves this conundrum by distinguishing between the right and power to investigate and challenge Commission decisions, and the declaration of such decisions’ invalidity. While the former remains with DPAs, the latter – following longstanding jurisprudence – remains with the ECJ.

Validity of Decision 2000/520

The ECJ noted that there is no definition of what is required by way of protection for the purposes of Article 25 of the Data Protection Directive. According to the ECJ, two aspects can be derived from the text of Article 25: the requirement in Article 25(1) that protection be ‘adequate’, and the reference in Article 25(6) to the fact that protection must be ensured. The ECJ agreed with the Advocate General that this Article is ‘intended to ensure that the high level of that protection continues where personal data is transferred to a third country’ (para [72], citing the Advocate General’s Opinion, para [139]), which seems higher than ‘adequate’ might at first suggest. That requirement does not, however, mean that protection in third (non-EU) countries must be identical, but rather that it is equivalent (para [73]) and effective (para [74]). This implies an ongoing assessment of the rules and their operation in practice, in which the Commission has very limited room for discretion.

The Court concluded that the Decision was unsound. It did so on the basis that mass surveillance is unacceptable, that there was no legal redress and that the Decision did not look at the effectiveness of enforcement. It steered clear of determining whether the self-certification system itself could ever be fit for purpose, basing its reasoning on only certain elements of the Commission’s decision (which were, however, so linked with the rest that their demise meant the entire decision fell).

Implications

This is a judgment with very far-reaching implications, not just for governments but for companies whose business model is based on data flows. It reiterates the significance of data protection as a human right, and underlines that protection must be at a high level. In this, the ECJ is building a consistent line of case law – case law that deals not just with mass surveillance (Digital Rights Ireland) but with activities by companies (Google Spain) and private individuals (Rynes).

At a practical level, what happens today with the Decision declared invalid? Going forward, will there be more challenges looking not just at mass surveillance but at big data businesses self-certifying? What will happen to uniformity in the EU? Different Member States may well take different views. This should also be understood against the Weltimmo judgment of last week, according to which more than one Member State could have the competence to regulate a multinational business (irrespective of where that business has its registered office in the EU). Finally, what does this mean for the negotiation of the Data Protection Regulation? The political institutions had agreed that the Regulation would not offer lower protection than the Data Protection Directive, but now we might have to examine this directive more closely.

Lorna Woods: Schrems v Data Protection Commissioner – The beginning of the end for safe harbour?

The Advocate General of the European Court of Justice has delivered his non-binding legal opinion in Schrems v. Data Protection Commissioner, a case brought by an Austrian citizen against the Irish Data Protection Commissioner concerning the transfer of Facebook data to US servers.  Professor Lorna Woods, University of Essex, reports and comments on the opinion – and its potential implications – in this guest post. 

Case C-362/14: Schrems v. Data Protection Commissioner

Opinion of the Advocate General

FACTS AND BACKGROUND

The Data Protection Directive imposes relatively high standards of data protection on those processing data in the EU. It also prohibits the transfer of data to non-EU countries unless an adequate level of protection for the processing of data is ensured there too. Under Article 25(6) of the Data Protection Directive, the Commission can determine that a third country ensures an adequate level of protection of personal data by reason of its domestic law or of the international commitments it has entered into. Should the Commission adopt a decision to that effect, transfer of personal data to the third country concerned would be permissible.

The Commission adopted Decision 2000/520 pursuant to that provision accepting that the ‘Safe Harbor’ system in the United States provided a satisfactory level of protection. It sets out certain principles but mainly operates on a basis of self-certification, although the US authorities may intervene.  A number of mechanisms, combining private dispute resolution and oversight by the public authorities, exist to check compliance with the ‘safe harbor’ principles.

Decision 2000/520 permits the limitation of these principles, ‘to the extent necessary to meet national security, public interest, or law enforcement requirements’ and ‘by statute, government regulation, or case law that create conflicting obligations or explicit authorisations, provided that, in exercising any such authorisation, an organisation can demonstrate that its non-compliance with the Principles is limited to the extent necessary to meet the overriding legitimate interests furthered by such authorisation’. The reference concerns the legitimacy of these arrangements in the light of the Data Protection Directive and the EU Charter of Fundamental Rights.

The case was brought by an Austrian national who had signed up to Facebook, which is run in Europe by Facebook Ireland. All data is, however, transferred to the US parent company. Following the Snowden revelations, Schrems challenged the level of protection in the USA against state surveillance, with particular reference to the PRISM programme, under which the NSA obtained unrestricted access to mass data stored on servers in the United States.

The Irish Data Protection Commissioner refused to investigate the complaint because, under the Irish statute, Decision 2000/520 of the Commission was final (s. 11(2)(a) Data Protection (Amendment) Act 2003). Schrems sought review of that refusal before the High Court, which found that if the matter were to be determined solely by Irish law, s. 11(2)(a) would end the matter. It recognised, however, that implementation of EU law must be carried out in the light of the EU Charter. The High Court therefore referred questions to the Court of Justice asking whether the Data Protection Commissioner was absolutely bound by Decision 2000/520.

OPINION OF THE ADVOCATE GENERAL
Competence of the Irish Data Protection Commissioner

The Data Protection Commissioner argued that its responsibility relates to the application of the Irish legislation in individual cases; conversely, the assessment of the adequacy of the US system overall is the responsibility of the European Commission. Section 11(2)(a) reflects this division and meant that the Irish Data Protection Commissioner could not act on Schrems's complaint.

Given the important role of the national authorities in the overall system of protection (para 63), AG Bot concluded that the power conferred by the Data Protection Directive on the Commission does not affect the powers which the Directive has conferred on the national supervisory authorities, so a national regulator could investigate matters notwithstanding the Commission's decision (para 61). Art 8(3) of the Charter, which occupies 'the highest level of the hierarchy of rules in EU law' (para 72), requires independence (see also Case C-288/12 Commission v. Hungary and Joined Cases C-293/12 and C-594/12 Digital Rights Ireland), and it is this quality that would be curtailed were national authorities unable to investigate a claim on its merits.

So, while the Commission plays an important role in ensuring uniformity of approach across EU Member States and its decision is binding, this cannot justify a summary dismissal of a complaint without looking into its merits (para 85). Uniformity achieved by virtue of a Commission decision, such as Decision 2000/520, ‘can continue only while that finding [of adequacy] is not called in question’ (para 89).

Here, not only has the Commission decision been criticised by others, but the Commission has also expressed its concerns and has entered into negotiation with a view to remedying the problem.

In reaching these conclusions, Bot, referring to earlier case law, emphasised that the orientation of the Directive is towards ensuring privacy. Further, the Directive must be understood in the light of the Charter; moreover, Member States must ensure that they do not rely on interpretations of the Directive which would be inconsistent with Charter rights (paras 99-100, relying on Case C-131/12 Google Spain and Joined Cases C-411/10 and C-493/10 NS). Here, the existence of an irrebuttable presumption was inconsistent with the duty of Member States to interpret EU law in a manner consistent with the Charter (para 104).

Validity of Decision 2000/520

Bot then noted that it is within the scope of the court's powers to question of its own motion the validity of an instrument which it has been asked to interpret (going back as far as Case 62/76 Strehl). The review would consider only those aspects of the safe harbour scheme that had been discussed, specifically the PRISM programme and the generalised surveillance of citizens by the NSA.

While the normal position is that a decision is assessed as at the time at which it was taken, the ECJ has recognised that circumstances may subsequently come to light which change that position. Bot suggested that this was one such case and that the review should therefore be carried out by reference to the current legal and factual context.

The first issue is the determination of ‘adequate’. Bot argued that the purpose of the limitation on transfers was to ensure continuity of protection under the Data Protection Directive, which is described as a high level of protection. So while the means to ensure that level of protection might differ from the system in the EU, the level must be the same. Consequently, ‘the only criterion that must guide the interpretation of that word is the objective of attaining a high level of protection of fundamental rights…’ (para 142).

The Advocate General took two points as read: that the NSA engages in surveillance, and that EU citizens have no mechanism for complaint in the USA. So,

‘the law and practice of the United States allow the large-scale collection of the personal data of citizens of the Union …without those citizens benefiting from effective judicial protection’ (para 158).

Specifically, the law enforcement derogations are too broadly worded and allow reliance on them beyond what is strictly necessary. Such widespread access constitutes an interference with Art 8 EUCFR, a fact exacerbated by the secrecy surrounding these activities. While interferences can in principle be justified, here the Advocate General suggested that it was

‘extremely doubtful that the limitations at issue in the present case … [respect] the essence of Articles 7 and 8 of the Charter’ (para 177).

The exceptions are neither sufficiently precisely defined nor proportionate. The Advocate General referred back to Digital Rights Ireland to highlight that the legislature's discretion in this context is limited because of the significance of the right to data protection: limits to the right must be confined to that which is strictly necessary. He highlighted the mass and indiscriminate nature of the surveillance carried out, which is 'inherently disproportionate and constitutes an unwarranted interference' (para 200).

It follows that third countries cannot be regarded as ensuring an adequate level of protection where such mass surveillance is permitted. Further, the safe harbour scheme, which relies on the FTC and private dispute resolution mechanisms, does not provide sufficient guarantees against abuse. It also permits discrimination between US citizens and EU citizens in terms of access to protection. In addition to the interference with Articles 7 and 8 EUCFR, there was therefore no right to an effective remedy, in breach of Article 47 EUCFR.

The Advocate General concluded that:

  • a national regulatory authority is not precluded from investigating a complaint where there is a Commission decision such as Decision 2000/520; and
  • Decision 2000/520/EC is invalid.
COMMENT

This is the latest in a line of opinions and judgments which have emphasised the need to protect privacy and ensure data protection, and which have run contrary to the industry lobby approach of 'we make money from it therefore it is legal'. If the Court of Justice follows this line of reasoning, this case will have very far-reaching consequences, not just for Facebook but for all US data companies relying on the safe harbour scheme or similar arrangements. Of course, the court is not bound by the opinion of the Advocate General, but it should be noted that in data protection cases where the court has departed from the opinion (e.g. in Google Spain), the court has been more concerned about data protection than the Advocate General. Certainly, Digital Rights Ireland indicates the court is no fan of mass surveillance.

As regards the declaration of invalidity of Decision 2000/520, it should be noted that the decision is very much tied up with concerns about the activities of the NSA and the discriminatory treatment of EU citizens. That link between mass surveillance and inherent disproportionality does not automatically translate to other forms of data usage. It remains to be seen whether the “umbrella agreement” on data protection (see here) which has just been agreed between the EU and US (but which is still subject to European Parliament approval) will resolve these issues. One key point is the ending of the discrimination between US and EU citizens in terms of the rights to complain (via the adoption of the US Judicial Redress Bill).

Aside from this, there are some points which will affect any future decision as to adequacy:

  • The level of protection can no longer be viewed as 'adequate' in the ordinary English sense, but must amount to a continuation of the high level of protection set by the Directive; this may well be difficult given current US practice of tracking and using such data for purposes to which subjects have not consented;
  • It is questionable what level of enforcement will be required: is self-certification together with the possibility of legal action sufficient, or is the Advocate General really suggesting there is a need for an independent regulator (see paras 207-208)? While the issue was not discussed, the FTC has started taking action against companies that claimed to self-certify but did not comply with the terms of the safe harbour agreement (see here, here and here);
  • The Commission may be obliged to review any such decision in the light of changing circumstances, and should not leave in place systems which are clearly inadequate.

In the absence of a safe harbour agreement, companies seeking to transfer data to the US will have to use other mechanisms, such as 'Binding Corporate Rules' or 'Standard Contractual Clauses'. These are individually approved by national regulators.

The first part of the Opinion dealt with the position of national regulatory authorities, opening up the possibility for national regulators to challenge levels of protection they consider too low. Will this force an upward standard of protection with regard to third countries? Quite apart from this open question, we should note that the Advocate General took the opportunity to make some general points about the need to respect fundamental rights and not to rely on interpretations of the law that are inconsistent with those rights.

While these points were addressed to the making of the adequacy decision, they reiterate that the focus of the directive is the protection of privacy and respect for data protection; the free movement of data seems to come a poor second, whatever the data industry and the legal basis for the directive might have to say. Such an approach is relevant to the interpretation of the directive more generally. This reliance on fundamental rights arguments may also prove significant as the EU institutions seek to finalise the long-awaited Data Protection Regulation.