The use of (live) facial recognition technology in public places has become a hot topic internationally, with governments keen to implement it as part of ‘smart city’ strategies. Amid COVID-19 and the surveillance measures it has prompted, some places, notably China, are deploying facial recognition which can reportedly see through face masks to verify individuals’ identities.

Facial recognition technology is a system which can identify or verify an individual from a digital image, usually by comparing the features of that individual’s face to images in a biometric database of faces. ‘Live’ facial recognition is an application in which this image capture and analysis happens in real time, usually in public or semi-public places.
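
To make the distinction between identification and verification concrete, here is a minimal Python sketch of how a system might compare face embeddings against an enrolled database. It assumes faces have already been converted into fixed-length vectors (in real systems a neural network does this); the names, threshold and random vectors are purely illustrative assumptions, not any vendor’s actual method.

```python
import numpy as np
from typing import Optional

rng = np.random.default_rng(seed=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in for a face-embedding model: real systems map a face image to a
# fixed-length vector; here we fake enrolled identities with random vectors.
EMBEDDING_DIM = 128
database = {name: rng.normal(size=EMBEDDING_DIM)
            for name in ["alice", "bob", "carol"]}

MATCH_THRESHOLD = 0.6  # illustrative value; real thresholds are tuned per system

def verify(probe: np.ndarray, claimed_identity: str) -> bool:
    """1:1 verification: does the probe match the claimed identity's enrolment?"""
    return cosine_similarity(probe, database[claimed_identity]) >= MATCH_THRESHOLD

def identify(probe: np.ndarray) -> Optional[str]:
    """1:N identification: find the best-matching enrolled identity, if any."""
    best_name, best_score = None, -1.0
    for name, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else None

# A probe close to "alice" (her enrolment plus small noise) should match her.
probe = database["alice"] + rng.normal(scale=0.1, size=EMBEDDING_DIM)
print(verify(probe, "alice"))  # True
print(identify(probe))         # alice
```

‘Live’ facial recognition simply runs the 1:N identification step continuously over frames from a camera feed, which is what makes its use in public places so much more far-reaching than one-off verification.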

Facial recognition, especially when employed in live settings, has proved controversial for a number of reasons. Firstly, current facial recognition technologies are not fully accurate in their identification of individuals. They are also notably less accurate at identifying women than men, and Black and Minority Ethnic people than white people. Facial recognition is most likely to misidentify Black women, compounding other kinds of discrimination they face.
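
One way such disparities are measured is by comparing false match rates across demographic groups under a single system-wide threshold. The sketch below illustrates the mechanism with entirely synthetic scores; the groups, distributions and threshold are invented for illustration and do not reflect any real system’s figures.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical similarity scores for impostor pairs (images of *different*
# people), drawn from distributions that differ by group purely to illustrate
# how one shared threshold can produce unequal error rates.
impostor_scores_group_a = rng.normal(loc=0.30, scale=0.10, size=10_000)
impostor_scores_group_b = rng.normal(loc=0.40, scale=0.10, size=10_000)

THRESHOLD = 0.6  # one threshold applied to everyone

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Share of different-person pairs wrongly scored as a match."""
    return float(np.mean(impostor_scores >= threshold))

print(false_match_rate(impostor_scores_group_a, THRESHOLD))  # low
print(false_match_rate(impostor_scores_group_b, THRESHOLD))  # many times higher
```

In a policing context, a higher false match rate for one group translates directly into that group being wrongly stopped or flagged more often.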

There are also controversies surrounding the conditions in which live facial recognition technology is being researched, developed and trialled. This includes Chinese products which have been developed, trialled and used against the Uyghurs and other ethnic minority groups in Xinjiang. In the West, Clearview AI, a company with ties to far-right white extremists, has developed an advanced facial recognition system backed by an enormous database of photos (reports suggest 3 billion of them), some of which have been scraped from social media accounts without users’ permission. Its controversial product has been used by law enforcement agencies in the US and Europe, including reportedly the Metropolitan Police. Clearview is currently marketing its product to governments to aid with COVID-19 contact tracing and surveillance.

Live facial recognition has already been trialled in the UK, specifically in England and Wales. Some of these trials have been highly controversial, including indiscriminately targeting football fans, festival goers and members of the general public going about their business. Particular controversies relate to trials being located at places and events with large BME populations, such as the Notting Hill Carnival and Stratford, east London. In addition to police use, some private companies have been using facial recognition in public places in the UK, such as the area around King’s Cross station in London and shopping centres in Manchester and Sheffield.

The use of live facial recognition raises privacy concerns, especially when it is used in public places and vis-à-vis the general public rather than just those suspected of crime. Some members of the public have reacted viscerally against facial recognition trials, for instance by covering their faces to protect their privacy (although at least one person has been fined for doing so). Whether we have a right to cover our faces to protect our privacy in public places is not clear in light of European Court of Human Rights jurisprudence. However, a civil liberties campaigner, Ed Bridges, has mounted a legal challenge to South Wales Police’s use of live facial recognition on the basis of privacy, data protection and equalities rights infringements. The case was heard by the High Court of England and Wales, which ruled in 2019 that while the use of facial recognition did interfere with Art 8 ECHR rights and involved the processing of personal data, South Wales Police’s use was lawful. This decision has been appealed to the Court of Appeal, where the case is currently pending.

The High Court decision was followed by an Opinion from the UK data protection authority, the Information Commissioner, on the use of facial recognition in public places by law enforcement agencies. The Opinion acknowledges that live facial recognition deployment may be legal, although it emphasises that data protection law applies to live facial recognition and must be complied with.

In the interim, the Swedish data protection authority has fined a local education authority for using facial recognition technology to register school attendance in ways which infringed the GDPR, including its purpose limitation and data minimisation principles. This fine can be distinguished from the South Wales case for a number of reasons, including the location and purpose of the facial recognition use (school attendance versus law enforcement).

In Scotland (where I am based), the Scottish Parliament Justice Sub-Committee on Policing has recently examined facial recognition use. Currently, Police Scotland, the national police authority (second in size in the UK only to the Metropolitan Police), does not use live facial recognition, although it had planned to introduce the technology by 2026; it does use retrospective facial recognition, i.e. facial recognition applied to pre-recorded video or photographs. The Sub-Committee commenced an inquiry into the topic in 2019 (to which I made a submission) and delivered its report in February 2020. The Sub-Committee considers that due to facial recognition’s negative human rights and equalities impacts, “there would be no justifiable basis for Police Scotland to invest in this technology” at the current moment in time, and a “number of safeguards must be met” before Police Scotland could use it, including comprehensive impact assessments and a “robust legal and regulatory basis”. Police Scotland, in its response to the Sub-Committee’s report, has stated that it has no plans to use live facial recognition and will not do so “without all the necessary impact assessments having been undertaken and safeguards met”.

I welcome the Scottish Parliament Sub-Committee’s conclusions, as the human rights risks of live facial recognition technology outweigh its benefits from an ethical, if not legal, perspective. This also places Scotland on a different trajectory to England and Wales on police deployment of facial recognition: it seems unlikely to be used in Scotland for the foreseeable future, while trials continue down south, including again in Stratford, east London, by the Metropolitan Police in February 2020. The result of the Bridges litigation remains to be seen, and more clarity on the legal position of live facial recognition would be useful.

However, the strictly legal implications of live facial recognition are only part of the picture: when the technology is discriminatory in its operation, is often developed and incubated in disturbing circumstances internationally, and is deployed in discriminatory ways without public consent, is it ethical for the police and others to use it even if doing so is legal?

One step further would be to impose a moratorium on live facial recognition technology. The UK Parliament Science and Technology Committee recommended a moratorium in 2017, at least until appropriate safeguards and guidelines were put in place. Other authorities abroad have gone further. In 2019, the City of San Francisco became the first US city to ban facial recognition technology from being used by its police force and other local public agencies, due to the technology’s unreliability and its invasion of privacy and civil liberties. Since then, there have been calls by civil society for a global moratorium on facial recognition technology for mass surveillance purposes. I also recommended a complete moratorium to the Scottish Parliament Justice Sub-Committee, due to the deficiencies of live facial recognition (e.g. its inaccuracies), the infringements of human rights and civil liberties it causes in public places, and the unethical conditions in which many facial recognition products and services are developed. These issues remain current, and so both public authorities and private companies deploying live facial recognition in UK public places should place themselves on the side of ethics and heed the call for a moratorium, regardless of whether such deployment would be legal.

Author

Dr Angela Daly
Senior Lecturer and Director of Centre for Internet Law & Policy
Strathclyde University Law School, Scotland