This guest post was written by Marion Oswald, Senior Fellow at the Faculty of Law, Winchester University. This post therefore reflects the views of the author, and not those of the ILPC.

‘The machine decided.  It’s a wonderful thing.’

This quote is over 80 years old, and refers to technology that could be described as the original ‘black box’: the polygraph, or ‘lie detector’ as it is better known.

The person who was so enthused by the ‘machine’ was a juror in the case of People v Kenny, a New York State trial court decision of 1938.  The trial involved conflicting eyewitness testimony, and according to the juror, the lie detector was ‘the deciding factor.’  Or perhaps not so much the machine itself as its results, presented by the scientific witness.

That witness was Father Walter Summers, a Jesuit and Head of the Department of Psychology of the Graduate School of Fordham University.  He must have cut an impressive figure as he outlined his confidence that the device was ‘100 per cent efficient and accurate in the detection of deception’.  Clearly, the judge was convinced, remarking, ‘It seems to me that this pathometer and the technique by which it is used indicate a new and more scientific approach to the ascertainment of truth in legal investigations.’

After the case, the jurors were polled by a member of the New York bar.  Although none admitted to basing their decision solely on the lie detector testimony, six jurors thought that the testimony was ‘conclusive proof’ of the defendant’s guilt or innocence, and five agreed that they had accepted it ‘without question’ (an early example of the ‘computer says no’ problem, perhaps).  The judge’s assertion that the jury would ‘evaluate’ the lie detector testimony seems, in hindsight, like wishful thinking.

The lie detector out of court

Summers’ success was short-lived.  The majority of early US cases, both before and after Kenny, rejected the use of lie detector evidence in court on the basis of the Frye standard.  According to this standard, lie detectors were not sufficiently established to have gained general acceptance among experts in the field, nor had their use moved out of the experimental ‘twilight zone’ into the ‘demonstrable’ stage, that of something capable of logical proof.  The courts expressed nervousness not only about scientific validity, but also about the test’s potential impact on established legal norms and procedures, such as the Fifth Amendment privilege against self-incrimination and the jury’s role in determining credibility.

This did not, however, prevent use of the lie detector outside court from forging ahead: to assess evidence, to vet potential employees, to investigate fraud, and even to test the fidelity of a spouse.

Technology in the twilight zone

Laudable aims accompanied the early polygraph.  It was said to be more humane than the third-degree interrogation methods common at the time, and more ‘scientific’ than potentially unreliable witness testimony.  Polygraph inventors and practitioners contributed articles to legal journals to assist their cause.  Today’s machine learning is similarly promoted as being more consistent, accurate and even transparent than the human mind, and thus as providing evidence in matters such as discrimination.  Both technologies arguably support what I would describe as ‘reformist legal realism’: an approach that aims to advance productivity, efficiency and the human condition (although often narrowly defined) through an emphasis on empirical evidence and scientific method, and a distrust of reliance on ‘principles’.

Despite these real-world aims, lie detectors and artificial intelligence alike have become embedded in fiction, comics, TV and movies, distorting general understanding of what the technology can actually do.  The early lie detector’s ‘magic’ – its theatricality, opacity and intimidating character – benefited those who would promote its use.  It is perhaps telling that one of the most charismatic proponents of the early lie detector as evidence – psychologist and lawyer Dr. William Moulton Marston – also created the character ‘Wonder Woman’, whose lasso of truth shares the lie detector’s characteristic of benign coercion.  Yet the inventor of the first portable polygraph, Leonarde Keeler, said in 1934 that there was no such thing as a ‘lie detector’. 

Present-day artificial intelligence and machine learning can suffer from similar magical thinking.  Despite the common parlance ‘artificial intelligence’, there is no such thing as a machine that can act like a human at all times.  Nor can a machine learning tool independently ‘predict’ risk or a person’s future.  Rather, the real world or a person’s life is reduced to variables, and an algorithm is then trained to detect patterns or similarities based on probabilities.  The badging of the output as a ‘prediction’ is a human act.  Considerable doubt exists as to the benefit, accuracy and relevance of such predictions at an individual level.
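To make the point concrete, here is a minimal, purely illustrative sketch in Python (synthetic data, invented variable names, and an arbitrary threshold, none drawn from any real risk-assessment tool): the algorithm itself only outputs a probability derived from patterns in past data, and it is a human choice to badge the thresholded output as a ‘prediction’ of risk.

# Illustrative sketch only: synthetic data, invented variables,
# and an arbitrary threshold; not any real risk-assessment tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A person's circumstances reduced to a handful of numeric variables
# (entirely synthetic stand-ins here).
X = rng.normal(size=(500, 2))
# Synthetic outcome labels used to train the pattern-detector.
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The algorithm outputs a probability based on patterns in past data...
new_person = np.array([[0.3, -1.2]])
probability = model.predict_proba(new_person)[0, 1]

# ...and it is a human decision to badge anything above some threshold
# as a "prediction" that this individual is "high risk".
THRESHOLD = 0.5  # arbitrary, human-chosen cut-off
label = "high risk" if probability >= THRESHOLD else "low risk"
print(f"probability={probability:.2f} -> badged as: {label}")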

The science behind polygraphs is based upon the assumption that deception correlates with physiological responses above a certain threshold.  Machine learning is based upon the premise that all relevant factors can be represented as data, measured, and analysed with sufficiently accurate predictive power.  But the deployment of both technologies in real contexts remains in the ‘twilight zone’ between the experimental and the demonstrable stages.

Governing the lie detector & lessons for AI

The courts in the US and in England retain the authority to decide whether expert scientific testimony is based upon a scientifically valid foundation relevant to the facts at issue in a case.  However, it took 50 years from the Kenny case for legislation to be introduced in the US prohibiting most private sector employers from using lie detectors.  Applicants for US Government jobs were not so lucky: the polygraph test is still widely administered to potential recruits.  Although use in court remains restricted, post-offence monitoring by lie detector has gained some acceptance.  Taking advantage of the popular (but highly contested) belief that lie detectors ‘work’, studies have claimed that sex offenders are more likely to make significant disclosures if they are made subject to, or threatened with, testing.

So what might we conclude from lie detector history regarding machine learning today?  That the use of machine learning, especially when backed by commercial interests, is likely to expand to fill whatever gap is available.  We already see private sector predictive tools and emotion detection marketed for use in hiring decisions, fraud detection, immigration and other screening: decisions that carry high stakes for individuals.  In terms of governance and regulation, the focus to date has been on data protection, individual consent, privacy and ethics.  Appropriate ‘scientific validity’ and relevance standards should also be applied, constructed for particular contexts, leading to red lines that cannot be crossed until the experimental has truly moved out of the twilight zone.