Tag Archives: data

Call for papers: Critical Research in Information Law

Deadline 15 March 2017

The Information Law Group at the University of Sussex is pleased to announce its annual PhD and Work in Progress Workshop on 3 May 2017. The workshop, chaired by Professor Chris Marsden, will provide doctoral students with an opportunity to discuss current research and receive feedback from senior scholars in a highly focused, informal environment. The event will be held in conjunction with the Work in Progress Workshop on digital intermediary law.

We encourage original contributions critically approaching current information law and policy issues, with particular attention on the peculiarities of information law as a field of research. Topics of interest include:

  • internet intermediary liability
  • net neutrality and media regulation
  • surveillance and data regulation
  • 3D printing
  • the EU General Data Protection Regulation
  • blockchain technology
  • algorithmic/AI/robotic regulation
  • platform neutrality, ‘fake news’ and ‘anti-extremism’ policy

How to apply: Please send an abstract of 500 words and brief biographical information to Dr Nicolo Zingales by 15 March 2017. Applicants will be informed by 30 March 2017 if selected. Submission of draft papers by selected applicants is encouraged but not required.

Logistics: 11am-1pm 3 May in the Moot Room, Freeman Building, University of Sussex.

Afternoon Workshop: all PhD attendees are registered to attend the afternoon workshop, 2pm-5.30pm in F22, without charge (programme here).

Financial Support: the Information Law Group can reimburse economy-class rail fares within the UK. Please inform the organisers if you need financial assistance.

Your next social network could pay you for posting

In this guest post, Jelena Dzakula from the London School of Economics and Political Science considers what blockchain technology might mean for the future of social networking. 

You may well have found this article through Facebook. An algorithm programmed by one of the world’s biggest companies now partially controls what news reaches 1.8 billion people. And this algorithm has come under attack for censorship, political bias and for creating bubbles that prevent people from encountering ideas they don’t already agree with.

Now a new kind of social network is emerging that has no centralised control like Facebook does. It’s based on blockchain, the technology behind Bitcoin and other cryptocurrencies, and promises a more democratic and secure way to share content. But a closer look at how these networks operate suggests they could be far less empowering than they first appear.

Blockchain has received an enormous amount of hype thanks to its use in online-only cryptocurrencies. It is essentially a ledger or a database where information is stored in “blocks” that are linked historically to form a chain, saved on every computer that uses it. What is revolutionary about it is that this ledger is built using cryptography by a network of users rather than a central authority such as a bank or government.

Every computer in the network has access to all the blocks and the information they contain, making the blockchain system more transparent, accurate and also robust since it does not have a single point of failure. The absence of a central authority controlling blockchain means it can be used to create more democratic organisations owned and controlled by their users. Very importantly, it also enables the use of smart contracts for payments. These are codes that automatically implement and execute the terms of a legal contract.
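The core idea described above – blocks linked historically into a tamper-evident chain – can be illustrated with a minimal sketch. This is a toy example for intuition only, not how Bitcoin or any real blockchain is implemented (there is no network, consensus mechanism or proof of work here); the function names and sample entries are invented for illustration:

```python
import hashlib
import json

def hash_block(block):
    # Deterministic SHA-256 hash of the block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    # Each block records the hash of the block before it,
    # which is what links the chain together
    return {"data": data, "prev_hash": prev_hash}

# Build a three-block chain from a genesis block
genesis = make_block("genesis", "0" * 64)
chain = [genesis]
for entry in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(entry, hash_block(chain[-1])))

def is_valid(chain):
    # Valid only if every block's prev_hash matches the
    # recomputed hash of the block before it
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))

print(is_valid(chain))                   # True
chain[1]["data"] = "alice pays bob 500"  # tamper with history
print(is_valid(chain))                   # False: the later link no longer matches
```

Because every participant holds a copy of the chain and can rerun this check, altering a past block silently is effectively impossible – which is the property the "no single point of failure" claim rests on.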

Industry and governments are developing other uses for blockchain aside from digital currencies, from streamlining back office functions to managing health data. One of the most recent ideas is to use blockchain to create alternative social networks that avoid many of the problems the likes of Facebook are sometimes criticised for, such as censorship, privacy, manipulating what content users see and exploiting those users.


Why the rise of wearable tech to monitor employees is worrying


In this guest post, Ivan Manokha, Departmental Lecturer in International Political Economy at the University of Oxford, considers the use of wearable technology in the workplace and the potential privacy implications of collecting the data of employees. 

An increasing number of companies are beginning to digitally monitor their employees. While employers have always scrutinised their workers’ performance, the rise of wearable technology to keep tabs on them has more of a dystopian edge to it. Monitoring has become easier and more intrusive, and is no longer limited to the workplace – it’s 24/7.

Devices such as Fitbit, Nike+ FuelBand and Jawbone UP, which can record information related to health, fitness, sleep quality, fatigue levels and location, are now being used by employers who integrate wearable devices into employee wellness programmes.

One of the first was BP America, which introduced Fitbit bracelets in 2013. In 2015 at least 24,500 of BP’s employees were using them, and more and more US employers have followed suit. That same year, Vista Staffing Solutions, a healthcare recruitment agency, started a weight-loss programme using Fitbits and wifi-enabled bathroom scales. Appirio, a consulting company, started handing out Fitbits to employees in 2014.

In the UK, similar projects are under consideration by major employers, and this trend will only intensify in the years to come. Estimates suggest that by 2018 more than 13m of these devices will be part of worker wellness schemes, and some analysts say that by the same year at least 2m employees worldwide will be required to wear health-and-fitness trackers as a condition of employment.

According to some, this is a positive development. Chris Brauer, an academic at Goldsmiths, University of London, argues that corporate managers will now be comparable to football managers. They will be equipped with a dashboard of employee performance trajectories, as well as their fatigue and sleep levels. They will be able to pick only the fittest employees for important business meetings, presentations, or negotiations.

It seems, however, that such optimism overlooks important negative and potentially dangerous social consequences of using this kind of technology. History here offers a word of warning.

Historical precedent

The monitoring of workers’ health outside the workplace was once attempted by the Ford Motor Company. When Ford introduced a moving assembly line in 1913 – a revolutionary innovation that enabled complete control over the pace of work – the increase in productivity was dramatic. But so was the rise in worker turnover. In 1913, every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963, as workers struggled to keep up with the pace and left shortly after being recruited.

Ford’s solution to this problem was to double wages. In 1914, the introduction of a US$5 a day wage was announced, which immediately led to a decline in worker turnover. But high wages came with a condition: the adoption of healthy and moral lifestyles.

The company set up a sociology department to monitor workers’ – and their families’ – compliance with its standards. Investigators would make unannounced calls upon employees and their neighbours to gather information on living conditions and lifestyles. Those deemed insufficiently healthy or morally upright were immediately disqualified from the US$5 wage level.

Analysing Ford’s policies, Italian political philosopher and revolutionary Antonio Gramsci coined the term “Fordism” for this social phenomenon. It signalled fundamental changes to labour, which became much more intense after automation. Monitoring workers’ private lives to control their health, Gramsci argued, was necessary to preserve “a certain psycho-physical equilibrium which prevents the physiological collapse of the worker, exhausted by the new method of production”.

Parallels today

Today, we are faced with another great change to how work is done. To begin with, the “great doubling” of the global labour force has led to the increase in competition between workers around the world. This has resulted in a deterioration of working and employment conditions, the growth of informal and precarious labour, and the intensification of exploitation in the West.

So there has been a significant increase in the average number of hours worked and in the intensity of labour. For example, research carried out by the Trades Union Congress in 2015 found that the number of people working more than 48 hours a week in the UK was rising, and warned of a risk of “burnout Britain”.

Indeed, employee burnouts have become a major concern of employers. A UK survey of human resources directors carried out in 2015 established that 80% were afraid of losing top employees to burnout.

Ford’s sociology department was shut down in the early 1920s for two reasons. It became too costly to maintain it in the context of increasing competition from other car manufacturers. And also because of growing employee resistance to home visits by inspectors, increasingly seen as too intrusive into their private lives.

Wearable technology, however, does not suffer from these inconveniences. It is not costly and it is much less obviously intrusive than surprise home visits by company inspectors. Employee resistance appears to be low, though there have been a few attempts to fake the results of the tracking (for example, workers strapping their employer-provided Fitbits onto their dogs to boost their “activity levels”). The idea of being tracked has mostly gone unchallenged.

Labour commodified to the extreme

But the use of wearable technology by employers raises a range of concerns. The most obvious is the right to privacy. The use of wearable technology goes significantly further than computer systems where emails are already logged and accessible to employers.

Surveillance becomes continuous and all-encompassing, increasingly unconfined to the workplace, and also constitutes a form of surveillance which penetrates the human body. The right to equal employment opportunities and promotion may also be compromised if employers reserve promotion for those who are in a better physical shape or suffer less from fatigue or stress.

It may also be argued that the use of wearable technology takes what the Hungarian economic historian Karl Polanyi called the “commodification” of human labour to an extreme. Monitoring worker health both inside and outside the workplace involves the treatment of people as machines whose performance is to be maximised at all costs. However, as Polanyi warned, human labour is a “fictitious commodity” – it is not “produced” for sale to capital as a mere tool. To treat it as such risks ultimately leading to a “demolition of society”.

To protect individual rights, systems have been introduced to regulate how the data gathered on employees is stored and used. One possible solution is to make it compulsory to anonymise the data collected by trackers. For example, Sociometric Solutions, a company that collects and monitors employee data on behalf of employers, charts only broader patterns and connections to productivity, rather than individual performance.
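One way such aggregation can work in practice is to collapse individual tracker readings into a group-level statistic before anyone sees them, and to suppress any aggregate drawn from too few people. This is a hypothetical sketch of that idea – the data, thresholds and function names are invented, and this is not Sociometric Solutions' actual system:

```python
from statistics import mean

# Hypothetical per-employee daily step counts from wearable trackers
readings = {
    "emp_001": [8200, 7600, 9100],
    "emp_002": [4300, 5100, 4800],
    "emp_003": [10200, 9800, 11000],
}

MIN_GROUP_SIZE = 3  # aggregates over smaller groups could identify individuals

def team_average(readings):
    """Return a single team-level figure, never per-person data."""
    if len(readings) < MIN_GROUP_SIZE:
        return None  # refuse to report: the group is too small to anonymise
    per_person = [mean(days) for days in readings.values()]
    return round(mean(per_person), 1)

print(team_average(readings))  # one number for the whole team
```

The minimum-group-size check matters: an "anonymous" average over two people lets either of them subtract their own figure and recover the other's, so small groups are suppressed rather than reported.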

This, however, does not address concerns about the increasing commodification of human labour that comes with the use of wearable technology and any potential threats to society. To prevent this, it is perhaps necessary to consider imposing an outright ban on its use by employers altogether.

Ivan Manokha, Departmental Lecturer in International Political Economy, University of Oxford

This article was originally published on The Conversation. Read the original article.

Information Law and Policy Centre’s annual workshop highlights new challenges in balancing competing human rights


Our annual workshop and lecture – held earlier this month – brought together a wide range of legal academics, lawyers, policy-makers and interested parties to discuss the future of human rights and digital information control.

A number of key themes emerged in our panel sessions including the tensions present in balancing Article 8 and Article 10 rights; the new algorithmic and informational power of commercial actors; the challenges for law enforcement; the liability of online intermediaries; and future technological developments.

The following write-up offers a very brief summary of each panel and of Rosemary Jay’s evening lecture.

Morning Session

Panel A: Social media, online privacy and shaming

Helen James and Emma Nottingham (University of Winchester) began the panel by presenting their research (with Marion Oswald) into the legal and ethical issues raised by the depiction of young children in broadcast TV programmes such as The Secret Life of 4, 5 and 6 Year Olds. They were also concerned with the live-tweeting which accompanied these programmes, noting that very abusive tweets could be directed towards children taking part in the programmes.


Data Retention and the Automated Number Plate Recognition (ANPR) System: A Gap in the Oversight Regime


The Advocate General’s Opinion in the recent Watson/Tele2 case re-emphasises the importance of considered justification for the collection and storage of personal data which has implications for a variety of data retention regimes. In this post, Lorna Woods, Professor of Internet Law at the University of Essex, considers the legal position of the system used to capture and store vehicle number plates in the UK.

The Data Retention Landscape

Since the annulment of the Data Retention Directive (Directive 2006/24/EC) (DRD) in Digital Rights Ireland (Case C-293/12), it has become clear that the mass retention of data – even for the prevention of terrorism and serious crime – needs to be carefully justified. Cases such as Schrems (Case C-362/14) and Watson/Tele2 (Case C-698/15) re-emphasise this approach. The same trend can be seen in the case law of the European Court of Human Rights, for example Zakharov v. Russia (47143/06) and Szabo v. Hungary (11327/14 and 11613/14).

Not only must there be a legitimate public interest in the interference with individuals’ privacy and data protection rights, but that interference must be necessary and proportionate. Mechanisms must exist to ensure that surveillance systems are not abused: oversight and mechanisms for ex ante challenge must be provided. It is this recognition that seems to be part of the motivation for the Investigatory Powers Bill currently before Parliament, which deals – in the main – with interception and surveillance of electronic communications.

Yet this concern is not limited to electronic communications data, as the current case concerning passenger name records (PNR) data before the Court of Justice (Opinion 1/15) and other ECtHR judgments on biometric data retention (S and Marper v. UK (30562/04 and 30566/04)) illustrate. Despite the response of the UK government to this jurisprudence, there seems to be one area which has been overlooked – at least with regard to a full oversight regime. That area is automated number plate recognition (ANPR) and the retention of the associated data.