Monthly Archives: May 2017

Information Law and Policy in the General Election Manifestos

With the General Election fast-approaching, we have collected together a few blog posts from around the web that consider what the party manifestos say about information law and policy.

Paul Magrath, Head of Product Development and Online Content, discusses proposed changes to media law as part of a more general review of law and justice policies. He identifies newspaper regulation as the big issue here: “Specifically,

  1. Should the Leveson inquiry recommendations be enforced in full, including the costs provisions of section 40 of the Crime and Courts Act 2013?
  2. Should the second part of the Leveson inquiry go ahead?”

Chris Pounder has picked out all the relevant sections of the manifestos which relate to data protection and human rights (Articles 8 and 10) issues. He says:

“The main controversy relates to the Conservative manifesto which hints at leaving the ECHR after the next General Election in 2022 and raises the prospect of the establishment of a national population register.”

Christopher Knight at 11KBW provides an entertaining look at the data protection elements of the manifestos (while also passing comment on their various aesthetic features). He considers the Lib Dems’ pledge to repeal or substantially re-write the Investigatory Powers Act 2016; Labour’s commitment to “strong data protection rules to protect personal privacy”; and the Conservatives’ “Digital Charter”, as well as their proposals for the National Data Guardian for Health and Social Care and a new “expert Data Use and Ethics Commission”.

If you like election manifestos presented infographically, then Rights Info has jazzed up the relevant sections on human rights for you. You’ll find ‘Privacy and Free Speech’ at the bottom of the section for each of the parties.

If you want to find out more about the approach taken to cybersecurity in the manifestos, please do come along to our free cybersecurity event this Monday, where our expert panel will be looking at party policies and the fallout from the recent WannaCry attack. You can book online here.

Submissions to the Law Commission’s consultation on ‘Official Data Protection’: Open Rights Group

In response to a Cabinet Office request in 2015, the Law Commission has been reviewing relevant statutes – the Official Secrets Act in particular – to “examine the effectiveness of the criminal law provisions that protect Government information from unauthorised disclosure.” The review is believed to be necessary to “ensure that the law is keeping pace with the challenges of the 21st century.” 

As part of this work, the Law Commission published a consultation report on ‘official data protection’. The Law Commission’s report has already attracted significant criticism from the media and whistleblowing organisations which regarded the proposals as a potential assault on free speech and freedom of the press. There were also concerns about the extent to which the Law Commission had consulted with NGOs and media organisations. Defending the report, Law Commissioner Professor David Ormerod QC argued that the Law Commission had considered “carefully the freedom of expression and public interest issues”.

The Law Commission has invited interested parties to write submissions commenting on the proposals outlined in the report. The consultation period closed for submissions on 3 May, although some organisations have been given an extended deadline.

The Information Law and Policy Centre will be re-publishing on our blog some of the submissions written by stakeholders and interested parties in response to the Law Commission’s consultation report (pdf). In due course, we will collate the submissions on a single resource page. If you have written a submission for the consultation that you would like (re)published, please contact us.

Please note that none of the published submissions reflect the views of the Information Law and Policy Centre which aims to promote and facilitate cross-disciplinary law and policy research, in collaboration with a variety of national and international institutions.

We will begin with the Open Rights Group submission which was first published on their website and was accompanied by a press release.

OPEN RIGHTS GROUP: LAW COMMISSION CONSULTATION ON PROTECTION OF OFFICIAL DATA 2017

Introduction

The Law Commission is an independent body that reviews UK legislation and identifies where reform is needed. In 2015, the Cabinet Office asked the Law Commission to look into the UK’s laws around the disclosure of official data.

For reasons explained fully below, ORG’s view is that the Law Commission consultation paper[1] is shoddy and confused, contradictory, poorly researched, and ill-informed on public interest issues. It follows a review that has markedly failed to conduct full, independent and balanced investigation. The consultation paper appears to be based mainly on opinions and hopes from government and security and intelligence agency representatives. It is generally argumentative rather than evidence-based.

The report contains detailed legal analyses and comparative studies, but in relation to the application of the law, the evidence supporting changes is limited to an asserted need for more up to date language and unsubstantiated concerns that the current framework may not be robust enough to deal with the Internet. However, the report does not explore what specific changes and challenges the Internet and digitisation bring.

The report fails to demonstrate how current secrecy laws have, in practice, worked or failed, or to consider the evidence of important recent espionage or leaking cases and investigations. As drafted, it fails to make the case for the reforms proposed, other than by deploying rhetoric.

Therefore, ORG cannot support the proposed full rewriting of espionage and secrecy legislation until a full and proper analysis is produced. This should contain explanatory evidence of the risks to the state and the public interest posed by new technologies, but also of the possible risks for pervasive surveillance and clamping down on whistleblowers.

Where did all the privacy injunctions go? A response to the Queen’s Bench ‘Media List’ consultation

Dr Judith Townend highlights the difficulties in accessing accurate data on privacy injunctions as part of a submission made on behalf of the Transparency Project to the Queen’s Bench ‘Media List’ consultation. Dr Townend is a Lecturer in Media and Information Law (University of Sussex) and Associate Research Fellow at the Information Law and Policy Centre. This post first appeared on the Transparency Project website.


According to the latest official statistics on privacy injunctions, covering January to December 2016, there were just three proceedings in which the High Court considered an application for a new interim privacy injunction. Two were granted; one was refused.

Two appeals against the granting or refusal of an interim injunction were heard in the Court of Appeal; one concerned the refused application mentioned above, which then went to the Supreme Court, where the injunction was upheld until trial or further order (though the case isn’t identified, we can safely assume this is the well-publicised case of PJS v News Group Newspapers).

This data has been collected for the past six years as a result of the Master of the Rolls’ report on super injunctions, conducted in the wake of the super injunction furore of 2010-11.

Following his recommendation that HMCTS and the MOJ investigate the viability of data collection on privacy injunctions, a new Civil Practice Direction was introduced to ensure judges recorded data relating to specified cases. These include civil proceedings in the High Court or Court of Appeal in which the court considers applications, continuations, and appeals of injunctions prohibiting the publication of private or confidential information (the scheme does not include proceedings to which the Family Procedure Rules 2010 apply, to immigration or asylum proceedings, or to proceedings which raise issues of national security).

Prior to the introduction of this regime, it was impossible to say how many ‘super’ or anonymous injunctions had been granted historically, as the MR (then Lord Neuberger) conceded at the time.

But how accurate is the Ministry of Justice data? According to the Inforrm media law blog, not very. Although the data purports to show fluctuation and an overall decrease in injunction applications since a peak in January to July 2012, the Inforrm blog has shown these statistics are “clearly incomplete”. The evidence is incontrovertible: there are public judgments in five privacy injunction applications in 2016. Furthermore, there have been press reports of other proceedings with no published judgments.

Inforrm remarked: “It is difficult to ascertain the true figure as many injunctions are never the subject of publicity – often because they relate to threatened ‘privacy disclosures’ by private individuals who subsequently agree to permanent undertakings. It seems likely that there were at least four times as many applications for privacy and confidentiality injunctions in 2016 than those recorded [by the] Civil Justice Statistics Quarterly. The reasons for this under reporting are unclear.”

These issues are worth remarking on now, as the judge in charge of the Media and Communications List at the Royal Courts of Justice, Mr Justice Warby, has launched a short consultation for practitioners and other court users.

Among other questions, it asks users whether (a) they agree that the collection of statistics is worthwhile, and (b) they think the current system is adequate.

On behalf of The Transparency Project, Paul Magrath (ICLR), Julie Doughty (Cardiff University) and I (University of Sussex) have responded: answering that (a) yes, collection of statistics is worthwhile, and (b) no, the current system is inadequate. Our submission can be downloaded here [PDF].

There is no official space for extra comment, but we offer the following observations and hope there will be an opportunity to engage further with the judiciary and the Ministry of Justice on this issue, and broader points about access to the courts (there is, for example, a problem about access to information about reporting restrictions and defamation cases, as I have previously written about here and here).

We welcomed the Master of the Rolls’ recommendation in 2011 for HMCTS to examine the feasibility of introducing a data collection system for all interim non-disclosure orders, including super-injunctions and anonymised injunctions.

Prior to this, there had been much confusion in the media and on social media about the number and type of injunctions that had been granted. There was some criticism of media exaggeration and distortion but at the same time, no reliable source of information existed with which to check the claims that were being made.

At a press conference marking the launch of the release of the Master of the Rolls’ report, Lord Neuberger said he ‘would not like to say precisely how many’ super injunctions or anonymous injunctions had been granted since 2000. The number could not be ascertained because no reliable records had been kept.

It is our view that it is wholly unacceptable that no reliable information exists for how many injunctions were granted historically. We were pleased therefore when the Ministry of Justice began publishing results twice a year. However, we do not think the system is reliable or complete, as has recently been observed on the Inforrm media law blog. It is worrying that HMCTS and the MOJ did not appear to notice the incompleteness of the data.

We recommend that judges should record all interim and final non-disclosure orders, including super injunctions and anonymised injunctions, relating to the publication of private and confidential information (by mainstream media organisations or other publishers, including individuals), as defined in Practice Direction 40F.

We have two concerns about the process to date despite the PD being in force:

First, that not all such orders have been recorded. We do not know the reason for this. It is important that PD 40F is followed and enforced. Although we ticked option 5b, we do not think completion of the form should rely solely on legal representatives prompting the judge, as they may have no incentive to do so. HMCTS should also ensure that the data has been correctly completed by the judge. Therefore, as part of the data collection exercise, HMCTS should have an audit procedure for ensuring data is being correctly and systematically collected.

Second, we do not think that the format of the data is accessible or as useful as it could be. We think that the anonymised case names should be published alongside the statistics to allow for verification of the data and cross-referencing with any published judgments (there would be rare exceptions where a ‘true’ super injunction was in force). We think the MOJ and HMCTS should also collect information relating to the eventual outcome: when an order is discontinued or expires, for example.

Given the narrow remit of this consultation, we will keep these comments brief. However, we have other ideas for how transparency and access to information in media proceedings could be improved with a view to improving public understanding and education in these types of proceedings. We would welcome the opportunity to discuss these with you and would like to join any future meeting and discussions of users of the Media and Communications List.

Event: Responding to the WannaCry Attack: The Future of UK Cybersecurity Policymaking

This event took place at the Information Law and Policy Centre at the Institute of Advanced Legal Studies on Monday, 5 June 2017.

Date:
5 June 2017
Time: 
18:00 to 20:00
Venue:
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
Book: Online on the SAS events website. (Event is free but registration is required.)

Responding to the WannaCry Attack: The Future of UK Cybersecurity Policymaking

The recent WannaCry ransomware attack, which infected more than 230,000 computers in over 150 countries, has brought the future of cybersecurity policy-making into sharp focus. In the UK, the ransomware cryptoworm caused significant problems for NHS computer systems, highlighting the vulnerability of a critical pillar of national infrastructure to cyberattack.

The success of the ransomware in exploiting weaknesses in Microsoft operating systems raises questions at a number of levels. Individual organisations have been reassessing their approaches to software updates, regular backups and staff training, while governmental policy and the role of the security agencies in protecting the public have also come under scrutiny.

An interdisciplinary panel of experts will discuss the legal, technical, and societal implications of the attack.

The discussion will take place in the context of an election campaign where all the major parties have included cybersecurity policies in their manifestos.

Speakers:

Dr Steven Murdoch is a Royal Society University Research Fellow in the Information Security Research Group of the Department of Computer Science at University College London. His work aims to develop metrics for security and privacy. His research interests include authentication/passwords, banking security, anonymous communications, censorship resistance and covert channels. He is also a member of the Tor Project. In a recent commentary on the WannaCry attack he argued that “various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals”.

Dr Julio Hernandez-Castro is Senior Lecturer in Computer Security at the School of Computing, University of Kent. His research interests range from Cryptology (particularly Lightweight Crypto) to Steganography & Steganalysis, including Computer & Network Security, Computer Forensics, CAPTCHAs, and RFID Security. He is the Principal Investigator (PI) for the RAMSES EU Horizon 2020 Project that deals with ransomware from a technical and economic perspective. He is also a member of the EUROPOL Expert Platform.

Dr Tim Stevens is Lecturer in Global Security at King’s College London. His research looks critically at global security practices, with specific interests in cybersecurity and digital surveillance. He has also written on time and temporality in International Relations theory, most recently in a monograph, Cyber Security and the Politics of Time (Cambridge University Press, 2016).

Chair:

Dr Nora Ni Loideain, Director and Lecturer in Law of the Information Law and Policy Centre, will chair the event.

Our discussion will be followed by a wine reception.

Event: Challenges of the New Transnational Cyber Policing

This event took place at the Information Law and Policy Centre at the Institute of Advanced Legal Studies on Monday, 26 June 2017.

Date:
26 June 2017
Time: 
17:00 to 19:00
Venue:
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
Book: Online on the SAS events website. (Event is free but registration is required.)

Cryptomarkets, Computer Hacking and Child Exploitation Material: Challenges of the New Transnational Cyber Policing

Speaker: Dr Monique Mann
Discussant: Professor Ian Walden

Description:

A seminar discussion by Dr Monique Mann, School of Justice, Faculty of Law, Queensland University of Technology, Brisbane, Australia.

Cyberspace presents new opportunities for offending and new challenges for policing. Both the transnational nature of the internet and anonymising dark net infrastructure challenge conventional policing methods, prompting the introduction of enhanced investigatory and intelligence capabilities, such as Computer Network Operations (CNOs), to detect and investigate crimes with an online dimension. These new forms of online surveillance and policing transcend multiple legal jurisdictions, and test established procedures governing access to, and the admissibility of, online evidence.

This seminar will summarise three research projects concerning online policing to highlight a range of emerging challenges and issues.

First, a discussion of the dismantling of the Silk Road crypto market will be used to demonstrate how US conspiracy law drives transnational cyber investigations and how these processes reflect ideological conceptions of justice and due process to legitimise US extraterritorial surveillance and access to digital evidence.

Second, an analysis is presented of high-profile cases of computer hackers sought by the US for extradition from the UK. These reveal important legal and human rights considerations where the alleged unlawful conduct occurred exclusively online and concurrent jurisdiction applies at both the source and the location of harm.

Finally, the Playpen clandestine network used for the distribution of child exploitation material is considered as these cases offer crucial insights into new and emerging developments such as recent amendments to US Criminal Procedure that authorise extraterritorial governmental hacking.

The seminar will conclude with a discussion of the implications for future criminological research, online policing and transnational criminal law and justice reform. This includes recognition of the importance of the shifting legal geographies associated with strategies for accessing digital evidence and due process safeguards in extraterritorial online criminal investigations.

A wine reception will follow our discussion.

Speaker Details:

Dr Monique Mann is a lecturer at the School of Justice, Faculty of Law, at the Queensland University of Technology in Brisbane, Australia. She is also a member of the Crime and Justice Research Centre and the Intellectual Property and Innovation Law Research Group at QUT Law.

Professor Ian Walden is Professor of Information and Communications Law and head of the Institute of Computer and Communications Law (ICCL) in the Centre for Commercial Law Studies, Queen Mary University of London.

Observing the WannaCry fallout: confusing advice and playing the blame game

In this guest post, researchers from the Information Security Group at UCL – Steven J. Murdoch, Angela Sasse, Wendy M. Grossman and Simon Parkin – consider what lessons should be learnt after the WannaCry ransomware attack.

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected by the WannaCry ransomware attack (source: Wikipedia, User:Roke)

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments, cannot take ten minutes to ponder each email before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.
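The kind of well-targeted, automated filtering described above can start very simply: quarantining messages whose attachments carry executable file types before they ever reach a user's inbox. Here is a minimal illustrative sketch in Python; the extension list and function names are our own assumptions for illustration, not any organisation's actual tooling.

```python
# Illustrative sketch: flag emails carrying attachment types commonly
# abused to deliver malware, so they can be quarantined automatically
# rather than relying on user "vigilance".
from email.message import EmailMessage

# File extensions frequently abused by malware droppers (assumed list).
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".hta", ".jar"}

def is_suspicious(msg: EmailMessage) -> bool:
    """Return True if any attachment has a risky file extension."""
    for part in msg.iter_attachments():
        filename = part.get_filename() or ""
        dot = filename.rfind(".")
        if dot != -1 and filename[dot:].lower() in RISKY_EXTENSIONS:
            return True
    return False
```

A real mail gateway would of course inspect content as well as filenames, but even this crude check removes a whole class of decisions from the individual user.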

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. So users are simultaneously exhorted to be “vigilant” and not open unexpected emails, and required to open an email attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre, offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing to pay as a society for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users and enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.
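One low-tech way to notice that a backup set has been mass-modified, as ransomware encryption would modify it, is to record a manifest of file hashes when the backup is made and re-check it before relying on the backup. The following is a minimal sketch under assumed file layouts; the function names are ours, not part of any vendor's tooling.

```python
# Illustrative sketch: detect whether backed-up files have changed since
# a manifest was recorded, e.g. mass modification by ransomware.
import hashlib
from pathlib import Path

def build_manifest(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def changed_files(root: Path, manifest: dict) -> list:
    """Return files whose current hash differs from the recorded one."""
    current = build_manifest(root)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

Crucially, the manifest itself must be stored somewhere the ransomware cannot reach (offline media, for instance), otherwise it too could be encrypted or replaced.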

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

This article also appears on Bentham’s Gaze, the blog of the UCL Information Security Group.

Uber: a taxi service or an app? Analysis of a CJEU Advocate-General’s view

Lorna Woods, Professor of Internet Law at the University of Essex and a Senior Associate Research Fellow at the Information Law and Policy Centre, analyses the Opinion of the Advocate-General in a CJEU case concerning the status of Uber’s taxi services. The post first appeared on Steve Peers’ blog, EU Law Analysis.

Case C-434/15 Asociación Profesional Elite Taxi v. Uber Systems Spain SL, 11 May 2017

This case is the first before the Court of Justice specifically on the sharing economy and the extent to which coordination via a platform should be treated as removing unnecessary red tape, or as seeking to avoid regulation in the public interest (in the form of concerns about passenger safety) as well as permitting unfair competition. While the Commission seems in favour of the (unequal) sharing economy, Advocate-General Szpunar sees the position a little differently.

Why using AI to sentence criminals is a dangerous idea

Image: Phonlamai Photo/Shutterstock

In this guest post, PhD researcher Christopher Markou, University of Cambridge, explores the use of Artificial Intelligence in the justice system and asks whether the use of algorithms should be used to decide questions of guilt or innocence.

Artificial intelligence is already helping determine your future – whether it’s your Netflix viewing preferences, your suitability for a mortgage or your compatibility with a prospective employer. But can we agree, at least for now, that having an AI determine your guilt or innocence in a court of law is a step too far?

Worryingly, it seems this may already be happening. When American Chief Justice John Roberts recently attended an event, he was asked whether he could foresee a day “when smart machines, driven with artificial intelligences, will assist with courtroom fact finding or, more controversially even, judicial decision making”. He responded: “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things”.

Roberts might have been referring to the recent case of Eric Loomis, who was sentenced to six years in prison at least in part by the recommendation of a private company’s secret proprietary software. Loomis, who has a criminal history and was sentenced for having fled the police in a stolen car, now asserts that his right to due process was violated as neither he nor his representatives were able to scrutinise or challenge the algorithm behind the recommendation.

The report was produced by a software product called Compas, which is marketed and sold by Northpointe Inc to courts. The program is one incarnation of a new trend within AI research: tools designed to help judges make “better” – or at least more data-centric – decisions in court.

While specific details of Loomis’ report remain sealed, the document is likely to contain a number of charts and diagrams quantifying Loomis’ life, behaviour and likelihood of re-offending. It may also include his age, race, gender identity, browsing habits and, I don’t know … measurements of his skull. The point is we don’t know.

What we do know is that the prosecutor in the case told the judge that Loomis displayed “a high risk of violence, high risk of recidivism, high pretrial risk.” This is standard stuff when it comes to sentencing. The judge concurred and told Loomis that he was “identified, through the Compas assessment, as an individual who is a high risk to the community”.

The Wisconsin Supreme Court ruled against Loomis, noting that the Compas report brought valuable information to the sentencing decision, but qualified the ruling by saying he would have received the same sentence without it. But how can we know that for sure? What sort of cognitive biases are involved when an all-powerful “smart” system like Compas suggests what a judge should do?

Where to after Watson? The challenges and future of mass data retention in the UK

As our lives have increasingly become data-driven and digital by default, finding the balance between privacy and national security/law enforcement has become one of the central legal, political, and ethical debates of the information age. On 11 May, the Director of the Information Law and Policy Centre, Dr Nora Ni Loideain, joined a panel of experts at a Bingham Centre event to discuss the latest round in the legal debate – the Court of Justice of the European Union’s (CJEU) recent ruling in a case brought by Tom Watson MP against the UK government regarding the legality of the Data Retention and Investigatory Powers Act (DRIPA). Although DRIPA has now expired, the CJEU Grand Chamber judgment delivered last December also calls into question the legal status of the legislation which replaced DRIPA in 2016, the Investigatory Powers Act (IP Act).

According to the panel chair, Professor Lorna Woods, the CJEU judgment formed what might be considered a “strong view” on privacy and regarded mass data retention as “disproportionate” compared to citizens’ rights to privacy. In this regard, the ruling continued in the same vein as the landmark 2014 Digital Rights Ireland judgment, which struck down the EU’s instrument for mandatory mass data retention – the Data Retention Directive – and declared it to be incompatible with the right to respect for private life and data protection protected by Articles 7 and 8 of the EU Charter of Fundamental Rights.

As we wait for the UK Court of Appeal to interpret the Watson/Tele2 judgment in relation to UK law, the panel considered what the Grand Chamber’s judgment might mean for mass data retention. In particular, Professor Lorna Woods put it to the panel and audience to consider whether the scope of data retention currently provided for under the IP Act 2016 was still possible in light of the reasoning of the CJEU Grand Chamber’s judgment.

Call for Papers: Automated decision-making, machine learning and artificial intelligence

Information Rights, Policy & Practice, a peer-reviewed, open access, interdisciplinary journal for academics and practitioners alike, is seeking submissions for its Autumn 2017 special issue on automated decision-making, machine learning and artificial intelligence.

Perspectives from a variety of disciplines are welcome and encouraged, including papers on present and future challenges, policy and theoretical perspectives and ethical issues.

The journal is looking for Articles of 5,000 to 10,000 words; Forward thinking pieces of 3,000 to 5,000 words; Case reports of 3,000 to 5,000 words; Policy reports of 1,000 to 2,000 words; as well as Book reviews of 700 to 1,000 words. All word counts are exclusive of footnotes.

For more information about the journal’s focus and aims, its online submission processes and requirements, and to register with the journal, please go to www.jirpp.org.uk.

Deadline for submissions for the Autumn 2017 issue: 31 AUGUST 2017

The journal is also looking for a reviewer of the following book:
Private Power, Online Information Flows and EU Law: Mind the Gap by Angela Daly (2016, Hart).
Please contact julian.dobson@winchester.ac.uk to request to review this book.

About IRP&P

IRP&P is an open access, international, peer-reviewed journal seeking to create a space to allow academics and practitioners across a multitude of fields to reflect and critique the law, policy and practical reality of Information Rights, as well as to theorise potential future developments in policy, law and regulation.

@IRPandPJournal
www.jirpp.org.uk