Category Archives: Artificial Intelligence


A Prediction about Predictions

In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s Blog as part of Laurence Eastham’s Predictions 2018 series. It is also appearing in Computers & Law, December/January issue.


Co-existing with HAL 9000: Being Human in a World with AI

This event took place at the Information Law and Policy Centre at the Institute of Advanced Legal Studies on Monday, 20 November 2017.

Date
20 Nov 2017, 17:30 to 20 Nov 2017, 19:30
Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Description

As part of the University of London’s Being Human Festival, the Information Law and Policy Centre will be hosting a film and discussion panel evening at the Institute of Advanced Legal Studies.

One of the Centre’s key aims is to promote public engagement by bringing together academic experts, policy-makers, industry, artists, and key civil society stakeholders (such as NGOs and journalists) to discuss issues and ideas in information law and policy that serve the public interest and capture the public’s imagination.

This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?

The primary purpose of this event is to encourage engagement and interest, particularly among young adults (15-18 years), in the implications for democracy, civil liberties, and human rights posed by the increasing role of AI in society, and in how AI affects their everyday decision-making as humans and citizens. A limited number of places for this event will also be available to the general public.


Should robot artists be given copyright protection?

Image credit: Shutterstock

In this guest post, Andres Guadamuz (University of Sussex) explores whether robots should be awarded copyright for their creative works.

When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long-lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.

The computer used something called machine learning to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait based on the styles and motifs found in Rembrandt’s art but produced by algorithms.

This is just one example in a growing body of works generated by computers. A short novel written by a Japanese computer program in 2016 reached the second round of a national literary prize. The Google-owned artificial intelligence (AI) firm DeepMind has created software that can generate music by listening to recordings. Other projects have seen computers write poems, edit photographs, and even compose a musical.

Why using AI to sentence criminals is a dangerous idea

Image credit: Phonlamai Photo/Shutterstock

In this guest post, PhD researcher Christopher Markou (University of Cambridge) explores the use of artificial intelligence in the justice system and asks whether algorithms should be used to decide questions of guilt or innocence.

Artificial intelligence is already helping determine your future – whether it’s your Netflix viewing preferences, your suitability for a mortgage or your compatibility with a prospective employer. But can we agree, at least for now, that having an AI determine your guilt or innocence in a court of law is a step too far?

Worryingly, it seems this may already be happening. When American Chief Justice John Roberts recently attended an event, he was asked whether he could foresee a day “when smart machines, driven with artificial intelligences, will assist with courtroom fact finding or, more controversially even, judicial decision making”. He responded: “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things”.

Roberts might have been referring to the recent case of Eric Loomis, who was sentenced to six years in prison based at least in part on the recommendation of a private company’s secret proprietary software. Loomis, who has a criminal history and was sentenced for having fled the police in a stolen car, now asserts that his right to due process was violated as neither he nor his representatives were able to scrutinise or challenge the algorithm behind the recommendation.

The report was produced by a software product called Compas, which is marketed and sold by Northpointe Inc to courts. The program is one incarnation of a new trend within AI research: tools designed to help judges make “better” – or at least more data-centric – decisions in court.

While specific details of Loomis’ report remain sealed, the document is likely to contain a number of charts and diagrams quantifying Loomis’ life, behaviour and likelihood of re-offending. It may also include his age, race, gender identity, browsing habits and, I don’t know … measurements of his skull. The point is we don’t know.

What we do know is that the prosecutor in the case told the judge that Loomis displayed “a high risk of violence, high risk of recidivism, high pretrial risk.” This is standard stuff when it comes to sentencing. The judge concurred and told Loomis that he was “identified, through the Compas assessment, as an individual who is a high risk to the community”.

The Wisconsin Supreme Court ruled against Loomis, noting that the Compas report brought valuable information to the sentencing decision, but qualified its ruling by saying he would have received the same sentence without it. But how can we know that for sure? What sort of cognitive biases are involved when an all-powerful “smart” system like Compas suggests what a judge should do?

Call for Papers: Automated decision-making, machine learning and artificial intelligence

Information Rights, Policy & Practice, a peer-reviewed, open access, interdisciplinary journal for academics and practitioners alike, is seeking submissions for its Autumn 2017 special issue on automated decision-making, machine learning and artificial intelligence.

Perspectives from a variety of disciplines are welcome and encouraged, including papers on present and future challenges, policy and theoretical perspectives and ethical issues.

The journal is looking for:

  • Articles of 5,000 to 10,000 words
  • Forward-thinking pieces of 3,000 to 5,000 words
  • Case reports of 3,000 to 5,000 words
  • Policy reports of 1,000 to 2,000 words
  • Book reviews of 700 to 1,000 words

All word counts are exclusive of footnotes.

For more information about the journal’s focus and aims, its online submission processes and requirements, and to register with the journal, please go to www.jirpp.org.uk.

Deadline for submissions for the Autumn 2017 issue: 31 AUGUST 2017

The journal is also looking for a reviewer of the following book:
Private Power, Online Information Flows and EU Law: Mind the Gap by Angela Daly (2016, Hart).
Please contact julian.dobson@winchester.ac.uk to request to review this book.

About IRP&P

IRP&P is an open access, international, peer-reviewed journal seeking to create a space to allow academics and practitioners across a multitude of fields to reflect and critique the law, policy and practical reality of Information Rights, as well as to theorise potential future developments in policy, law and regulation.

@IRPandPJournal
www.jirpp.org.uk

Call for papers: Critical Research in Information Law

Deadline 15 March 2017

The Information Law Group at the University of Sussex is pleased to announce its annual PhD and Work in Progress Workshop on 3 May 2017. The workshop, chaired by Professor Chris Marsden, will provide doctoral students with an opportunity to discuss current research and receive feedback from senior scholars in a highly focused, informal environment. The event will be held in conjunction with the Work in Progress Workshop on digital intermediary law.

We encourage original contributions critically approaching current information law and policy issues, with particular attention on the peculiarities of information law as a field of research. Topics of interest include:

  • internet intermediary liability
  • net neutrality and media regulation
  • surveillance and data regulation
  • 3D printing
  • the EU General Data Protection Regulation
  • blockchain technology
  • algorithmic/AI/robotic regulation
  • Platform neutrality, ‘fake news’ and ‘anti-extremism’ policy.

How to apply: Please send an abstract of 500 words and brief biographical information to Dr Nicolo Zingales by 15 March 2017. Applicants will be informed by 30 March 2017 if selected. Submission of draft papers by selected applicants is encouraged, but not required.

Logistics: 11am-1pm, 3 May, in the Moot Room, Freeman Building, University of Sussex.

Afternoon Workshop: all PhD attendees are registered to attend the afternoon workshop (2pm-5.30pm, room F22) without charge.

Financial Support: the Information Law Group can reimburse economy-class rail fares within the UK. Please inform the organisers if you need financial assistance.