Dr Nóra Ni Loideain

As stated in the European Commission's 2020 White Paper, its Proposed AI Regulation, otherwise referred to as the 'AI Act', is part of a wider set of regulatory measures intended to enable 'a trustworthy and secure development of AI in Europe', fully in line with the rights of EU citizens. It is welcome that the EU legislator is taking these first steps towards this timely aim, given the rapidly expanding role AI systems play in every aspect of daily life, from enabling medical breakthroughs to protecting public safety in a global pandemic.


The draft AI Act also coincides with an international consensus clearly emerging among researchers, policymakers, courts, regulators, and industry that these powerful algorithmic systems raise serious risks to several rights protected under the EU Charter of Fundamental Rights. These risks include, but are by no means limited to, those posed by the rising use across Europe of real-time facial recognition, and biometrics more broadly, which interferes with the rights to private life, freedom of expression, and equality. The following analysis specifically addresses whether the current provisions of the AI Act intended to regulate future uses of biometric identification systems by law enforcement may be assessed as trustworthy and fully respectful of EU fundamental rights.


Main provisions

High-risk AI systems: ‘real-time’ and ‘post’ remote biometric identification systems

Article 3(33) defines 'biometric data' in line with the equivalent provisions in the EU GDPR and the EU Law Enforcement Directive. Note that the scope is non-exhaustive and goes beyond facial and fingerprint images, thereby encompassing voice analysis, gait analysis, iris analysis, and other new and emerging biometrics:

personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.

Article 3(36) defines a ‘remote biometric identification system’ as an:

AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified.

The Proposed Regulation deals with two categories of remote biometric identification systems: 'real-time' and 'post'.

Article 3(37) defines the category of such ‘real-time’ systems as a:

remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.

From the perspective of legal certainty, key to fully understanding the scope and application of this definition in practice is establishing what exactly constitutes 'a significant delay'. This is especially important given how briefly Article 3(38) defines the distinct category of 'post' remote biometric identification systems: 'a remote biometric identification system other than a "real-time" remote biometric identification system'. The non-binding recitals of the AI Act also provide little further explanation regarding what exactly distinguishes the scope of these key categories of 'real-time' and 'post' systems. Instead, recital 8 simply states that 'post' systems concern biometric data that has 'already been captured and the comparison and identification occur only after a significant delay'.

Nevertheless, leaving aside the legal uncertainty surrounding what constitutes 'a significant delay', recital 8 of the AI Act makes clear that the two categories of system should be viewed and treated differently:

Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between 'real-time' and 'post' remote biometric identification systems. In the case of 'real-time' systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the 'real-time' use of the AI systems in question by providing for minor delays. 'Real-time' systems involve the use of 'live' or 'near-live' material, such as video footage, generated by a camera or other device with similar functionality. In the case of 'post' systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices.

Under Annex III, both categories fall within the scope of 'High-Risk' AI systems intended to be used by law enforcement for various purposes. These cover a broad range of predictive policing applications: 'individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences'. Annex III also provides for the regulation of biometric systems intended to be used by law enforcement, such as 'polygraphs and similar tools … to detect the emotional state' of an individual, otherwise referred to in the Regulation as 'emotion recognition' (Article 3).


No ban on law enforcement use of live facial recognition

Although Article 5 of the AI Act prohibits the use of 'real-time' remote biometric systems in publicly accessible spaces in various circumstances, these restrictions do not apply to law enforcement. Instead, Article 5(1)(d) provides, subject to the use being 'strictly necessary', that law enforcement may use these powerful real-time monitoring systems for any one of the following broadly defined objectives:

(i) the targeted search for specific potential victims of crime, including missing children;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;

(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

Article 5(3) provides that any law enforcement use of real-time biometric identification systems shall be subject to prior authorisation granted either by a judge or an independent administrative body within the relevant EU Member State. Exceptions are permitted for a 'justified situation of urgency', whereby authorisation may then be requested during or after use. This particular aspect of the oversight regime is in line with EU fundamental rights law, recent judgments of the CJEU, and well-established case law on the right to private life and secret surveillance under Article 8 of the European Convention on Human Rights (ECHR). The latter represents the minimum standard of protection for the right to private life that EU law must meet, as guaranteed under Article 52(3) of the EU Charter.

The oversight regime under Article 5(3) also contains two further notable elements. The first concerns the requirement that an oversight authority shall only grant law enforcement authorisation to use a 'real-time' remote biometric identification system when it is satisfied that the use is 'necessary for and proportionate to' achieving one of the above objectives, based on either 'objective evidence or clear indications presented to it' (emphasis added). Confusingly, this immediately runs contrary to the higher standard of necessity and proportionality in Article 5(1)(d), which clearly stipulates that use of such real-time systems is only permitted when 'such use is strictly necessary'.

The second novel provision concerning the oversight regime in Article 5(3) is the nudging requirement that oversight authorities shall consider the consequences of not granting this authorisation to law enforcement: ‘the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system’.


Analysis

The use by law enforcement of systems that will indiscriminately capture and analyse the biometric data of vast numbers of innocent individuals raises particularly significant questions regarding proportionality and compatibility with EU fundamental rights law. Indeed, there is a clear analogy between the fundamental rights analysis of the CJEU in its recent landmark Grand Chamber judgment in La Quadrature du Net and Others, concerning law enforcement access to real-time mobile phone location data, and the real-time identification of an individual through the processing of highly sensitive biometric data, particularly their facial image.

As the Court states in La Quadrature du Net:

Like national legislation authorising the automated analysis of data, national legislation authorising such real-time collection … constitutes interference[s] with the fundamental rights enshrined in Articles 7 and 8 of the Charter and is likely to have a deterrent effect on the exercise of freedom of expression, which is guaranteed in Article 11 of the Charter … It must be emphasised that the interference constituted by the real-time collection of data that allows terminal equipment to be located appears particularly serious, since that data provides the competent national authorities with a means of accurately and permanently tracking the movements of users of mobile telephones. To the extent that that data must therefore be considered to be particularly sensitive … [paras 186-187]

Consequently, in line with the seriousness of the interferences posed by the automated analysis and real-time collection of particularly sensitive data, the Court proceeded to hold that Articles 7, 8, 11, and 52(1) require that:

recourse to the real-time collection of traffic and location data is limited to persons in respect of whom there is a valid reason to suspect that they are involved in one way or another in terrorist activities and is subject to a prior review carried out either by a court or by an independent administrative body whose decision is binding in order to ensure that such real-time collection is authorised only within the limits of what is strictly necessary. [para 152]

Recital 18 of the AI Act clearly recognises these parallels, acknowledging the particular intrusiveness posed by 'real-time' remote biometric identification systems to the rights to private life and freedom of expression, as protected in Articles 7 and 11 of the EU Charter:

The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.

However, considerable legal uncertainty surrounds the key categories of 'real-time' and 'post' remote biometric identification systems. Furthermore, two specific elements of the oversight regime in Article 5(3), coupled with the broadly defined set of objectives in Article 5(1)(d), also depart significantly from the case law of the CJEU, thereby calling into question the compatibility of the AI Act with EU fundamental rights.


‘Real-time’ vs ‘post’ remote biometric systems? Legal uncertainty surrounds scope and definitions

Considerable legal uncertainty surrounds the definition and scope of 'real-time' and historical ('post') remote biometric identification systems, and the distinction between them. The draft AI Act is silent on the crucial question of what 'a significant delay' actually entails, and thus still needs to address at what point real-time biometric data becomes historical biometric data. Moreover, it is also unclear why the AI Act treats real-time access as more intrusive to EU fundamental rights than access to historical biometric data.

Why does the 'significant delay' between the original collection of a photo/image of an individual and its processing by law enforcement for facial recognition/emotion recognition purposes determine its intrusiveness? As pointed out by the European Data Protection Board and the European Data Protection Supervisor in their Joint Opinion on the AI Act, why should any passing of time be considered a mitigating factor, 'taking into account that a mass identification system is able to identify thousands of individuals in only a few hours'?


Precarious oversight and protection of EU fundamental rights

The first area where compliance with Articles 7, 8, 11, and 52(1) of the EU Charter is suspect concerns the far lower necessity threshold under which law enforcement may be granted authorisation to use 'real-time' remote biometric identification systems, including live facial recognition. Access to such powerful monitoring systems is not even limited under Article 5(1)(d) to the prevention of crime, but extends to the notably vague objective of the targeted search for 'specific potential victims of crime'. This objective is so broad in scope that it clearly runs the risk of arbitrary use by law enforcement and is a stark departure from the threshold of providing objective evidence of a link to the prevention of terrorism, as held in La Quadrature du Net.

The second, related, issue that is problematic for EU fundamental rights compliance concerns the watered-down necessity requirement in Article 5(3). This novel standard, which clearly departs from La Quadrature du Net, allows oversight authorities to grant authorisation on the distinctly vague and general basis of 'clear indications' presented to them, as opposed to the strict necessity requirement of 'objective evidence'.

Thirdly, another element of the oversight regime provided in Article 5(3) is particularly troubling, as it runs contrary to the well-established requirement in EU and ECHR law that oversight authorities must be sufficiently independent and impartial in their operation and functioning in order to provide an effective control in practice. As noted above, and again entirely novel in terms of EU fundamental rights law and any relevant CJEU case law, Article 5(3) requires oversight authorities to consider the consequences of not granting this authorisation: 'in particular … the harm caused in the absence of the use of the system'.

This coercive element in the oversight regime of Article 5(3) calls into question the core aim of the AI Act to be a trustworthy regime. As stated by philosopher Baroness Onora O'Neill in her influential work on trustworthy governance, in order for the public to be capable of placing their trust meaningfully in the State's lawful and proportionate use of otherwise secret monitoring systems, 'trustworthy claims and commitments' must be made by the latter to the former. With respect to the AI Act, EU citizens are effectively being asked by the EU legislator to transfer their trust to informed and robust oversight bodies, who are then tasked with demonstrating to the public that they do in fact operate independently from government when exercising the supervisory role designated to them in Article 5(3). It is therefore difficult to see how EU citizens can be expected to place such trust in the current AI Act, which enables the coercion of independent oversight authorities by effectively reversing the burden of proof on them to consider 'the seriousness, probability and scale of the harm caused' by their decision not to authorise law enforcement use of real-time remote biometric identification systems.

Finally, the inclusion of emotion recognition systems in the draft AI Act is striking given a fundamental problem at the heart of such systems, namely their lack of accuracy and reliability, as the expression of human emotions is not universal. Indeed, research findings show that how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Given such a significant margin of error, and the fact that the accuracy of any biometric identification system is at its weakest when used in 'everyday life', using such systems to identify a potential security threat in a crowded public place is highly questionable. The mention of 'polygraphs' (lie-detector tests) as an example of such intended uses also does little to encourage the placing of trust in the draft AI Act. Despite decades of use by law enforcement in the US, there remains a lack of compelling scientific evidence demonstrating the ability of polygraphs to effectively detect lies, as opposed to merely recording bodily responses. As aptly observed by the US Supreme Court, 'there is simply no consensus that polygraph evidence is reliable'.


Conclusions

It is welcome that the EU legislator is taking its first steps towards the timely aim of providing a trustworthy and rights-compliant regulatory framework for AI systems, given the rapidly expanding role such systems play in every aspect of daily life. However, as the above analysis and others point out, the draft AI Act is far from perfect. Three particularly significant issues regarding law enforcement use of remote biometric identification systems raise serious concerns regarding the trustworthiness of the AI Act and its compatibility with EU fundamental rights. First, considerable legal uncertainty surrounds the definition and scope of 'real-time' and 'post' remote biometric identification systems, and the distinction between them. Secondly, the broad, contradictory, and vague regime governing access to and use of these remote biometric identification systems, and the weak oversight thereof, is not in line with well-established EU fundamental rights law or the case law of the CJEU. Thirdly, the coercive element in the oversight regime of Article 5(3) calls into question a core aim of the AI Act to be a trustworthy regime and casts serious doubt on the impartiality and independence of the oversight regime.


Dr Nóra Ni Loideain is Director of the Information Law & Policy Centre, Institute of Advanced Legal Studies, University of London and an Associate Fellow of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. She is also a member of the UK Home Office Biometrics and Forensics Ethics Group (BFEG).

Her book, EU Data Privacy Law and Serious Crime, is forthcoming from Oxford University Press.