This guest post was written by Jamie Grace, Senior Lecturer in Law at Sheffield Hallam University. This post therefore reflects the views of the author, and not those of the ILPC.

The use of algorithmically informed decision-making in public protection contexts in the UK justice system does appear to be proliferating, and this is problematic.

The UN Special Rapporteur on Privacy has commented that, in the context of surveillance, algorithmic processing of personal information is less intrusive than human processing of the same information. But this position overlooks the opacity of algorithms and their workings, the potential injustices arising from ‘trade-offs’ in algorithmic weightings driven by particular policy choices, and the risk that the use of skewed data will exacerbate discrimination.

There are concerns that UK policing could soon be awash with ‘algorithmic impropriety’. Big(ger) data and machine learning-based algorithms combine to produce opportunities for better intelligence-led management of offenders, but they also create regulatory risks and some threats to civil liberties, even though these can be mitigated. In constitutional and administrative law terms, the use of predictive intelligence analysis software to serve up ‘algorithmic justice’ presents varying human rights and data protection problems, depending on how the output of the tool concerned is deployed. But regardless of exact context, all uses of algorithmic justice in policing raise linked fears: of potential fettering of discretion, arguable biases, possible breaches of natural justice, and troubling failures to take relevant information into account. The potential for ‘data discrimination’ in the growth of algorithmic justice is a real and pressing problem.

Of course, there are growing efforts to model good practice for regulating algorithms, machine learning and applications of ‘big data’ technologies. A community of academics and data scientists working on ‘Fairness, Accountability and Transparency in Machine Learning’ (FAT/ML) has published five ‘Principles for Accountable Algorithms’ as well as a ‘Social Impact Statement for Algorithms’, for example. And the Data Protection Act 2018 in the UK requires the Home Office to publish annual ‘privacy impact assessments’ in the roll-out of any technology such as its new-generation, joined-up ‘Law Enforcement Data Service’ (LEDS). However, the Council of Europe has perhaps astutely observed that doctrinal law might regulate the risks that machine learning algorithms pose to human rights values better than any combination of non-binding ethical frameworks and self-regulation. The Council of Europe has also observed that ‘meta-norms’ for the deployment of machine learning may need more time to evolve in practice.

I’ve been involved in writing a piece of research (Oswald et al., 2018) that sets out a model of algorithmic accountability for UK police forces in policing contexts, known as ‘ALGO-CARE’, which is based around the following principles (a brief illustrative sketch follows this list):

  • Advisory
  • Lawful
  • Granularity
  • Ownership
  • Challengeable
  • Accuracy
  • Responsible
  • Explainable

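As a purely illustrative aside (and not part of the published framework), the checklist character of ALGO-CARE means a force could, in principle, record its answers to each principle as a simple structured assessment. The Python sketch below is a hypothetical example of my own devising: the field names and comments are my gloss on the principles, not the published wording, and the example tool is fictional.

```python
# Purely illustrative: a hypothetical way a force might record an ALGO-CARE
# style assessment as structured data. Field comments are my own gloss on the
# principles, not the published framework wording.
from dataclasses import dataclass, field

@dataclass
class AlgoCareAssessment:
    tool_name: str
    advisory: str = ""       # Is the output advisory only, with a human decision-maker in the loop?
    lawful: str = ""         # What is the legal basis, and is use necessary and proportionate?
    granularity: str = ""    # Is the data sufficiently granular for the question being asked?
    ownership: str = ""      # Who owns the tool, the data and the intellectual property?
    challengeable: str = ""  # How can the subject of a decision contest the output?
    accuracy: str = ""       # How are data quality and error rates measured and reviewed?
    responsible: str = ""    # Who is accountable for deployment, and is it in the public interest?
    explainable: str = ""    # Can the output be explained to decision-makers, courts and the public?
    open_questions: list[str] = field(default_factory=list)

# Example usage with a fictional risk-triage tool under review.
assessment = AlgoCareAssessment(
    tool_name="Example harm-risk triage tool",
    advisory="Output informs, but does not determine, custody decisions.",
    lawful="Legal basis under Part 3 of the DPA 2018 to be confirmed with the force legal team.",
    open_questions=["Has a data protection impact assessment been completed?"],
)
print(assessment.tool_name, "-", assessment.advisory)
```
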
The National Police Chiefs’ Council has now recommended that UK police forces adopt the ALGO-CARE model as an interim safeguard when determining whether and how to deploy AI in operational or strategic ways.

But one particularly significant issue in the field of algorithmic justice has begun to emerge: the lack of transparency over the development and likely scale of future use of machine-learning technologies by UK police forces.

This lack of transparency applies equally to recidivism-prediction tools drawing on ‘big data’, ‘hotspot’ patrolling software, and automated facial recognition technologies. A lack of meaningful public engagement by forces over the use of these tools has been a troubling trend so far.

To that end, and with support and input from a number of researchers and academic colleagues at a range of institutions, I decided to host an event on public engagement with police uses of technology that affect privacy rights, on Wednesday 27 March 2019, at the IALS. Delegates represented half a dozen UK universities and as many UK police organisations.

The schedule of the event was as follows (and readers should feel free to contact the presenters directly for their slides/written papers in relation to their ongoing work):

  • Alexander Babuta (Research Fellow, Royal United Services Institute) – Machine Learning and Predictive Policing: Human Rights, Law and Ethics
  • Dr Nora Ni Loideain (Director of the Information Law and Policy Centre, Institute of Advanced Legal Studies, and Faculty of Law, University of Cambridge) – Predictive policing and legal and technical mechanisms for oversight
  • Tom McNeil (Solicitor and Strategic Adviser to the PCC & Board Member, Office of the West Midlands Police and Crime Commissioner) – Discussing independent data ethics committees
  • Dr Joe Purshouse (Lecturer in Criminal Law, University of East Anglia) – Privacy, Crime Control and Police Use of Automated Facial Recognition Technology
  • Christine Rinik (Senior Lecturer in Law, University of Winchester) – Datafication in policing? Concerns, opportunities and recommendations regarding use of data-driven tools
  • Jamie Grace (Senior Lecturer in Law, and Fellow of the Sheffield Institute for Policy Studies, Sheffield Hallam University) – Taking ALGO-CARE: Moving UK police forces away from potential ‘algorithmic impropriety’ and toward ‘data justice’ standards

A report that captures the literature on police engagement with the public over the use of technology, as well as the input of attendees at the 27 March event, will be forthcoming from the Helena Kennedy Centre for International Justice.

I’d like to thank my fellow contributors to the event. If you would like to discuss the ALGO-CARE framework for adopting algorithmic policing approaches, or the research that underpins it, please do contact me at: j.grace@shu.ac.uk

NB: This blog draws in part on the following academic papers:

  • J. Grace, ‘Human rights, regulation and the right to restrictions on algorithmic police intelligence analysis tools in the UK’, available online as a draft paper at: http://ssrn.com/abstract=3303313