Category Archives: Privacy

Where to after Watson: The challenges and future of data retention in the UK

An event hosted by the Bingham Centre for the Rule of Law and sponsored by Simmons & Simmons. 

Date: 11th May 2017
Time: 17:30 – 19:30 (Registration open from 17:00). Followed by a reception
Venue: Simmons & Simmons, City Point, 1 Ropemaker Street, London EC2Y 9SS
Cost: Members £15, Non-members £25

This event can be booked on the British Institute of International and Comparative Law website. Book now.

The judgment of the CJEU in the Watson case was handed down shortly before the year’s end in 2016. The determination that member states may not impose on communications providers a general obligation to retain data was applauded by privacy groups and has undoubtedly caused disquiet among those involved in policing and intelligence. What parliamentarians and judges will make of it in the coming months – and, post-Brexit, years – is both uncertain and important.

In this event, experts will examine the strengths, weaknesses and implications of the decision, with an eye to rights protections, the need to combat serious crime, and the practicalities of managing both in light of the European Court’s decision.

Speakers:

  • The Rt Hon Dominic Grieve QC MP, Chair of the Intelligence and Security Committee
  • Max Hill QC, Independent Reviewer of Terrorism Legislation
  • Dr Nora Ni Loideain, Incoming Director, Information Law and Policy Centre, IALS
  • Renate Samson, Chief Executive, Big Brother Watch

Chair:

  • Professor Lorna Woods, University of Essex

For more information download the event flyer and join in the conversation: @BinghamCentre, #Watson

New Study on Intermediary Liability and European Copyright Reform

Dr Christina Angelopoulos, associate research fellow at the Information Law & Policy Centre and lecturer at the University of Cambridge, has authored a study entitled ‘On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market’.

The study, commissioned by MEP Julia Reda, evaluates the provisions of the European Commission’s Proposal of 14 September 2016 for a Directive on Copyright in the Digital Single Market that are relevant to the issue of intermediary liability.

The study concludes that key elements of these provisions are incompatible with existing EU directives, as well as with the Charter of Fundamental Rights of the EU.

In particular, the study suggests that the Proposal misinterprets EU copyright and related rights law by implying that intermediaries that allow users to host content in a public manner are themselves performing an act of communication to the public. The study argues that acts of facilitation of third party copyright infringement are instead the rightful domain, not of primary, but of accessory liability, an area of copyright and related rights law that has not yet been harmonised at the EU level.

Continue reading

New Special Issue of Communications Law: Information control in an ominous global environment

The Information Law and Policy Centre is pleased to announce the publication of a special issue of the Communications Law journal based on papers submitted for our annual workshop last November. The journal articles are available via direct subscription, through the Lexis Library (IALS member link) and (coming soon) Westlaw.

In the following editorial for the special issue, Dr Judith Townend, Lecturer in Media and Information Law at the University of Sussex and outgoing Director of the ILPC (Institute of Advanced Legal Studies), and Dr Paul Wragg, Associate Professor of Law, University of Leeds, discuss the challenges of information control in an ominous global environment.

This special issue of Communications Law celebrates the first anniversary of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies. It features three contributions from leading commentators who participated in the ILPC’s annual conference ‘Restricted and redacted: where now for human rights and digital information control?’, which was held on 9 November 2016 and sponsored by Bloomsbury Professional.

The workshop considered the myriad ways in which data protection laws touch upon fundamental rights, from internet intermediary liability, investigatory and surveillance powers, media regulation and whistle-blower protection to ‘anti-extremism’ policy. We were delighted with the response to our call for papers. The conference benefited from a number of provocative and insightful papers, from academics including Professor Gavin Phillipson, Professor Ellen P Goodman, Professor Perry Keller and Professor David Rolph as well as Rosemary Jay, Mélanie Dulong de Rosnay, Federica Giovanella and Allison Holmes, whose papers are published in this edition.

The date of the conference, by happenstance, gave extra piquancy to the significance of our theme. News of Donald J Trump’s election triumph spoke to (and continues to speak to) an ominous and radically changed global environment in which fundamental rights protection takes centre stage. But as Trump’s presidency already shows, those rights have become impoverished in the rush to promote nationalism in all its ugly forms.

In the UK, the popularism that threatens to rise above all other domestic values marks a similar threat, in which executive decision-making is not only championed but also provokes popular dissent when threatened by judicial oversight. The Daily Mail’s claim that High Court justices were ‘enemies of the people’ when they sought to restrict the exercise of unvarnished executive power reminds us that fundamental rights are seriously undervalued.

Perhaps we should not be surprised at these events and their potential impact on communication law. In February 2015, at the ILPC’s inaugural conference, Dr Daithí Mac Síthigh delivered a powerful paper in which he noted the rise of this phenomenon in the government’s thinking on information law and policy under the Coalition Government 2010-15. In his view, following an ‘initial urgency’ of libertarianism, the mood changed to one of internet regulation or re-regulation. Such a response to perceived disorder, though not unusual, was ‘remarkable’ given how the measures in this field adopted during these final stages of the last government had been ‘characterised by the extension of State power in a whole range of areas.’ We should also note the demise of liberalism in popular thought. That much-criticised notion, which underpins all fundamental rights, seems universally disclaimed as something weak and sinister. All of this speaks to a worrisome future in which the fate of the Human Rights Act remains undecided.

Concerns like these animate the papers in this special issue. The contribution from leading data protection practitioner Rosemary Jay, Senior Consultant Attorney at Hunton & Williams and author of Sweet & Maxwell’s Data Protection Law & Practice, is entitled ‘Heads and shoulders, knees and toes (and eyes and ears and mouth and nose…)’. Her paper discusses the rise of biometric data and restrictions on its use generated by the General Data Protection Regulation. As she notes, sensitive personal data arising from biometric data might be more easily shared, leading to loss of individual autonomy. It is not hard to imagine the impact unrestricted data access would have – the prospective employer who offers the job to someone else because of concerns about an applicant’s cholesterol levels; the partner who leaves after discovering a family history of mental ill health; the bank that refuses a mortgage because of drinking habits. As Jay concludes, consent will play a major role in regulating this area.

In their paper, Federica Giovanella and Mélanie Dulong de Rosnay discuss community networks, a grassroots alternative to commercial internet service providers. They discuss the liability issues arising from open wireless local access networks after the landmark Court of Justice of the EU decision in McFadden v Sony Music Entertainment Germany GmbH. As they conclude, the decision could prompt greater regulation of, and political involvement in, the distribution of materials through these networks which may well represent another threat to fundamental rights.

Finally, Allison M Holmes reflects on the impact of fundamental rights caused by the status imposed on communication service providers. As Holmes argues, privacy and other human rights are threatened because CSPs are not treated as public actors when retaining communications data. As she says, this status ought to change and she argues convincingly on how that may be achieved.

Communicating Responsibilities: The Spanish DPA targets Google’s Notification Practices when Delisting Personal Information

In this guest post, David Erdos, University Lecturer in Law and the Open Society, University of Cambridge, considers the 2016 Resolution made by the Spanish Data Protection Authority in relation to Google’s approach to de-listing personal information. 

The Court of Justice’s seminal decision in Google Spain (2014) represented the beginning rather than the endpoint of specifying the European data protection obligations of search engines when indexing material from the web and, as importantly, of ensuring adherence to them.

In light of its over 90% market share of search, this issue largely concerns Google (Bing and Yahoo come a very distant second and third).  To its credit, Google signalled an early willingness to comply with Google Spain.  At the same time, however, it construed the judgment narrowly.  Google argued that it only had to remove specified URL links following ex post demands from individual European citizens and/or residents (exercising the right to erasure (A. 12c) and/or objection (A. 14)), only as regards searches made under their name, only on European-badged search services (e.g. .uk, .es) and, even where the processing violated European data protection standards, not if the processing was judged to be in the ʻpublic interestʼ.

It also indicated that it would inform the Webmasters of the ʻoriginalʼ content when de-listing took place (although it signalled that it would stop short of its usual practice of providing a similar notification to individual users of its services, opting instead for a generic notice only).

In the subsequent two and a half years, Google’s approach has remained in broad terms relatively stable (although from early 2015 it did stop notifying Webmasters when de-listing material from malicious porn sites (p. 29) and from early 2016 it has deployed (albeit imperfect) geolocation technology to block the return of de-listed results when using any version of the Google search engine (e.g. .com) from the European country from where the demand was lodged).

Many (although not all) of these limitations are potentially suspect under European data protection law, and indeed private litigants have (successfully and unsuccessfully) already brought a number of challenges.  No doubt partly reflecting their very limited resources, European Data Protection Authorities (DPAs) have adopted a selective approach, targeting only those issues which they see as the most critical.  Indeed, the Article 29 Working Party November 2014 Guidelines focussed principally on two concerns:

  • Firstly, that the geographical scope of de-listing was too narrow. To ensure “effective and complete protection” of individual data subjects, it was necessary that de-listing be “effective on all relevant domains, including .com”.
  • Secondly, that communication to third parties of data concerning de-listing identifiable to particular data subjects should be both very limited and subject to strong discipline. Routine communication “to original webmasters that results relating to their content had been delisted” was simply unlawful and, whilst in “particularly difficult cases” it might in principle be legitimate to contact such publishers prior to making a de-listing decision, even here search engines must then “take all necessary measures to properly safeguard the rights of the affected data subject”.

Since the release of the Guidelines, the French DPA has famously (or infamously, depending on your perspective!) adopted a strict interpretation of the first concern, requiring de-listing on a completely global scale and fining Google €100K for failing to do this.  This action has now been appealed before the French Conseil d’État and much attention has been given to it, including by Google itself.  In contrast, much less publicity has been given to the issue of third party communication.

Nevertheless, in September 2016 the Spanish DPA issued a Resolution fining Google €150K for disclosing information identifiable to three data subjects to Webmasters and ordered it to adopt measures to prevent such practices reoccurring.  An internal administrative appeal lodged by Google against this has now been rejected and a challenge in court now seems inevitable.  This piece explores the background to, nature of and justification for this important regulatory development.

The Determinations Made in the Spanish Resolution

Apart from the fact that they had formally complained, there was nothing unusual in the three individual cases analysed in the Spanish Resolution.  Google had simply followed its usual practice of informing Webmasters that under data protection law specified URLs had been deindexed against a particular (albeit not directly specified) individual name.  Google sought to defend this practice on four separate grounds:

  • Firstly, it argued that the information provided to Webmasters did not constitute personal data at all. In contrast, the Spanish regulator argued that in those cases where the URL led to a webpage in which only one natural person was mentioned then directly identifiable data had been reported, whilst even in those cases where several people were mentioned the information was still indirectly identifiable since a simple procedure (e.g. conducting a search on names linked to the webpage in question) would render the information fully identifiable.  (Google’s argument here in any case seemed to be in tension with its practice since September 2015 of inviting contacted Webmasters to notify Google of any reason why the de-listing decision should be reconsidered – this would only really make sense if the Webmaster could in fact deduce what specific de-listing had taken place).
  • Second, it argued that, since its de-listing form stated that it “may provide details to webmaster(s) of the URLs that have been removed from our search results”, any dissemination had taken place with the individual’s consent. Drawing especially on European data protection’s requirement that consent be “freely given” (A. 2 (h)) this was also roundly rejected.  In using the form to exercise their legal rights, individuals were simply made to accept as a fait accompli that such dissemination might take place.
  • Third, it argued that dissemination was nevertheless a “compatible” (A. 6 (1) (b)) processing of the data given the initial purpose of its collection, finding a legal basis as “necessary” for the legitimate interests (A. 7 (f)) of Webmasters regarding this processing (e.g. to contact Google for a reconsideration). The Spanish DPA doubted that Webmasters could have any legitimate interest here since “search engines do not recognize a legal right of publishers to have their contents indexed and displayed, or displayed in a particular order”, the Court of Justice had only referenced that the interests of the search engine itself and Internet users who might receive the information were engaged and, furthermore, had been explicit that de-listing rights applied irrespective of whether the information was erased at source or even if publication there remained lawful.  In any case, it emphasized that any such interest had (as article 7 (f) explicitly states) to be balanced with the rights and freedoms of data subjects which the Court had emphasized must be “effective and complete” in this context.  In contrast, Google’s practice of essentially unsafeguarded disclosure of the data to Webmasters could result in the effective extinguishment of the data subject’s rights since Webmasters had variously republished the deindexed page against another URL, published lists of all URLs deindexed or even published a specific news story on the de-listing decision.
  • Fourth, Google argued that its practice was an instantiation of the data subject’s right to obtain from a controller “notification to third parties to whom the data have been disclosed of any rectification, erasure or blocking” carried out in compliance with the right to erasure “unless this proves impossible or involves a disproportionate effort” (A. 12 (c)). The Spanish regulator pointed out that since the data in question had originally been received from rather than disclosed to Webmasters, this provision was not even materially engaged.  In any case, Google’s interpretation of it was in conflict with its purpose, which was to ensure the full effectiveness of the data subject’s right to erasure.

Having established an infringement of the law, the Spanish regulator had to consider whether to pursue this as an illegal communication of data (judged ʻvery seriousʼ under Spanish data law) or only as a breach of secrecy (which is judged merely as ʻseriousʼ).  In the event, it plumped for the latter and issued a fine of €150K which was in the mid-range of that set out for ʻseriousʼ infringements.  As previously noted, it also injuncted Google to adopt measures to prevent re-occurrence of these legal failings and required that these be communicated to the Spanish DPA.

Analysis

The Spanish DPA’s action tackles a systematic practice which has every potential to fundamentally undermine practical enjoyment of rights to de-listing and is therefore at least as significant as the ongoing regulatory developments in France which relate to the geographical scope of these rights.  It was entirely right to find that personal data had been disseminated, that this had been done without consent, that the processing had nothing to do with the right (which, in any case, is not an obligation) of data subjects to have third parties notified in certain circumstances and that this processing was “incompatible” with the initial purpose of data collection, which was to give effect to data subjects’ legal rights to de-listing.

It is true that the Resolution was too quick to dismiss the idea that original Webmasters do have “legitimate interests” in guarding against unfair de-listings of content.  Even in the absence of a de jure right to such listings, these interests are grounded in their fundamental right to “impart” information (and ideas), an aspect of freedom of expression (ECHR, art. 10; EU Charter, art. 11).   In principle, these rights and interests justify search engines making contact with original Webmasters, at least in particularly difficult de-listing cases, as the Working Party itself indicated.

However, even here dissemination must (as the Working Party also emphasized) properly safeguard the rights and interests of data subjects.  At the least this should mean that, prior to any dissemination, a search engine should conclude a binding and effectively policeable legal contract prohibiting Webmasters from disseminating the data in identifiable form.  (In the absence of this, those Webmasters out of European jurisdiction or engaged in special/journalistic expression cannot necessarily be themselves criticized for making use of the information received in other ways).

In stark contrast to this, Google currently engages in blanket and essentially unsafeguarded reporting to Webmasters, a practice which has resulted in a breakdown of effective protection for data subjects not just in Spain but also in other European jurisdictions such as the UK – see here and here.  Having been put on such clear notice by this Spanish action, it is to be hoped that Google will seriously modify its practices.  If not, then regulators would have every right to deal with this in the future as a (yet more serious) illegal and intentional communication of personal data.

Future Spanish Regulatory Vistas

The cases investigated by the Spanish DPA noted in this Resolution also involved the potential dissemination of data to the Lumen transparency database (formerly Chilling Effects) which is hosted in the United States, the potential for subsequent publication of identifiable data on its publicly accessible database and even the potential for a specific notification to be provided to Google users conducting relevant name searches detailing that “[i]n response to a legal requirement sent to Google, we have removed [X] result(s) from this page.  If you wish, you can get more information about this requirement on LumenDatabase.org.”

This particular investigation, however, failed to uncover enough information on these important matters.  Google was adamant that it had not yet begun providing information to Lumen in relation to data protection claims post-Google Spain, but stated that it was likely to do so in the future in some form.  Meanwhile, it indicated that the specific Lumen notifications which were found on name searches regarding two of the claimants concerned pre-Google Spain claims variously made under defamation, civil privacy law and also data protection.  (Even putting to one side the data protection claim, such practices would still amount to a processing of personal data and also highlight the often marginal and sometimes arbitrary distinctions between these very related legal causes of action).

Given these complications, the Spanish regulator decided not to proceed directly regarding these matters but rather open more wide-ranging investigatory proceedings concerning both Google’s practices in relation to disclosure to Lumen and also notification provided to search users.  Both sets of investigatory proceedings are ongoing.  Such continuing work highlights the vital need for active regulatory engagement to ensure that the individual rights of data subjects are effectively secured.  Only in this way will basic European data protection norms continue to ʻcatch upʼ not just with Google but with developments online generally.

David Erdos, University Lecturer in Law and the Open Society, Faculty of Law & WYNG Fellow in Law, Trinity Hall, University of Cambridge.

(I am grateful to Cristina Pauner Chulvi and Jef Ausloos for their thoughts on a draft of this piece.)

This post first appeared on the Inforrm blog. 

Signed Statement Condemns DHS Proposal to Demand Passwords to Enter the U.S.

A group of 50 organisations and nearly 90 individual experts have signed a statement against the US Department of Homeland Security’s (DHS) proposal to ask non-citizens to provide the passwords to their social media accounts in order to enter the United States.

The social media password proposal was raised by Secretary John Kelly at the House Homeland Security Committee hearing on 7th February.

The signed statement, which has been organised by the Center for Democracy & Technology, recognises the United States Government’s need to protect its borders but argues that a “blanket policy of demanding passwords” would “undermine security, privacy, and other rights”.

To view the full statement with list of signatories please click here.

Two Research Associate Posts at Hertie School of Governance

For the project ‘Evolving Internet Interfaces: Content Control and Privacy Protection’ within the Deutsche Forschungsgemeinschaft (German Research Foundation) research group on ‘Overlapping Spheres of Authority and Interface Conflicts in the Global Order’ (www.osaic.eu), the Hertie School is looking to hire:

2 Research Associates (m/f)
26 hours/week

The contract duration is 36 months. The envisaged start date is 1 June 2017. Salary is in accordance with TV-L Berlin. Continue reading

CFP: Bytes, Bodies and Souls: Interrogating Human Digitalisation

Kent Law School, in conjunction with the Eastern Academic Research Consortium, invites early career academics and postgraduate research students to participate in the “Bytes, Bodies and Souls: Interrogating Human Digitalisation” workshop to be held on 30th May, 2017.

The workshop aims to bring together researchers across the social sciences, humanities, sciences and other relevant disciplines who are interested in examining the consequences, possibilities, and limitations of human digitalisation.

Papers and Posters are welcomed on any aspect of the conference theme. This may include, but is not restricted to:

  • Big Data and its challenges
  • The role and impact of the Internet of Things
  • Digital ownership and appropriation processes
  • Privacy, surveillance, and control
  • The role of algorithms in the governance of human digitalisation
  • Politics of digital humans from cyber activism to post-truth
  • Digital human aesthetics; the forging of a digital soul

Abstracts for papers are invited for consideration. Abstracts should be no more than 300 words in length. Successful applicants will be allocated 15 minutes for the presentation of their paper plus time for questions and discussion.

Abstracts for posters are invited for consideration. Abstracts should be no more than 300 words in length. Accepted poster presenters will need to deliver the hard copy of their poster to the venue no later than 9 am on the day of the workshop to allow it to be displayed throughout the day.

Submissions should be sent in a Word document format to a.m.holmes@kent.ac.uk. Please include name, title, institution, and email correspondence address and whether you wish to be considered for a paper or poster presentation. The deadline for submission is Friday 3rd March 2017. Successful applicants will be notified by the 19th March 2017.

Your next social network could pay you for posting

In this guest post, Jelena Dzakula from the London School of Economics and Political Science considers what blockchain technology might mean for the future of social networking. 

You may well have found this article through Facebook. An algorithm programmed by one of the world’s biggest companies now partially controls what news reaches 1.8 billion people. And this algorithm has come under attack for censorship, political bias and for creating bubbles that prevent people from encountering ideas they don’t already agree with.

Now a new kind of social network is emerging that has no centralised control like Facebook does. It’s based on blockchain, the technology behind Bitcoin and other cryptocurrencies, and promises a more democratic and secure way to share content. But a closer look at how these networks operate suggests they could be far less empowering than they first appear.

Blockchain has received an enormous amount of hype thanks to its use in online-only cryptocurrencies. It is essentially a ledger or a database where information is stored in “blocks” that are linked historically to form a chain, saved on every computer that uses it. What is revolutionary about it is that this ledger is built using cryptography by a network of users rather than a central authority such as a bank or government.

Every computer in the network has access to all the blocks and the information they contain, making the blockchain system more transparent, accurate and also robust since it does not have a single point of failure. The absence of a central authority controlling blockchain means it can be used to create more democratic organisations owned and controlled by their users. Very importantly, it also enables the use of smart contracts for payments. These are codes that automatically implement and execute the terms of a legal contract.
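The hash-linking idea described above can be sketched in a few lines of Python. This is purely an illustration of the "blocks linked historically to form a chain" concept, not how Bitcoin or any production blockchain is implemented (there is no networking, consensus mechanism or proof-of-work here), and the function names are invented for the example.

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash commits to its content and to the block before it."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    # Hash every field, so changing anything in the block changes its hash.
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """The chain is intact only if each block still points at its predecessor's hash."""
    return all(
        nxt["previous_hash"] == prev["hash"]
        for prev, nxt in zip(chain, chain[1:])
    )

# Build a small chain of "posts".
chain = [make_block("genesis", previous_hash="0" * 64)]
chain.append(make_block("Alice shares an article", chain[-1]["hash"]))
chain.append(make_block("Bob comments on it", chain[-1]["hash"]))
assert chain_is_valid(chain)

# Tampering with an earlier block changes its hash, which breaks the link
# from the next block -- this is what makes the ledger hard to rewrite.
chain[1]["data"] = "Alice shares something else"
fields = {k: chain[1][k] for k in ("timestamp", "data", "previous_hash")}
chain[1]["hash"] = hashlib.sha256(
    json.dumps(fields, sort_keys=True).encode()
).hexdigest()
assert not chain_is_valid(chain)
```

On a real network, every participating computer would run this kind of validity check independently over its own copy of the ledger, which is what removes the need for a central authority.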

Industry and governments are developing other uses for blockchain aside from digital currencies, from streamlining back office functions to managing health data. One of the most recent ideas is to use blockchain to create alternative social networks that avoid many of the problems the likes of Facebook are sometimes criticised for, such as censorship, privacy, manipulating what content users see and exploiting those users.

Continue reading

Case Preview: PNM v Times Newspapers, Open justice and the privacy of suspects – Hugh Tomlinson QC

In this guest post, Hugh Tomlinson QC previews an appeal to the Supreme Court in a case that considers where the balance lies between rights to privacy and the principle of open justice. The post was first published on the Inforrm blog

On 17 and 18 January 2017, a seven judge Supreme Court will hear the claimant’s appeal against the decision of the Court of Appeal in the case of PNM v Times Newspapers ([2014] EWCA Civ 1132).

That Court had upheld the judge’s view that, on the basis of the “open justice principle”, information mentioned in open court concerning a person who was arrested but not charged could be reported.  

Background

The claimant was one of a number of men arrested in March 2012 in connection with a Thames Valley Police investigation into allegations of child sex grooming and prostitution.  The claimant was released on bail and was subsequently notified that he was to be released without charge.

Nine men were charged and a criminal trial took place at the Central Criminal Court between January and May 2013.  The claimant was not a party or witness at the criminal trial.  On 25 January 2013 an order under section 4(2) of the Contempt of Court Act 1981 was made prohibiting publication of any report which referred to evidence which might identify or tend to identify him.

On 14 May 2013, seven of the defendants were convicted of numerous serious sexual offences.  A further order under section 4(2) of the Contempt of Court Act 1981 was made on the claimant’s application.  It prohibited disclosure of details of applications made to the Court by Thames Valley Police (which concerned certain of the claimant’s property).

The claimant’s full name was mentioned in open court when a police officer said that a witness had failed to pick him out on an identification parade.  He was also mentioned in the course of cross-examination, in speeches and in the summing up.

At the conclusion of the criminal trial the Judge declined to discharge the section 4(2) order until the decision was made as to whether the claimant would be charged.  In July 2013 the police notified the claimant that he was not going to be charged.   The Times and the Oxford Mail applied to discharge the section 4(2) order but, before the judge had handed down his ruling, the claimant applied to the High Court for an injunction.

By an application made on 15 October 2013 against the Times, the Oxford Mail and two journalists, the claimant sought an order to prevent publication of the fact of his arrest on suspicion of committing serious sexual offences against children and associated information because of the fear of the damage such publications may cause to him and his family, including his children.

Continue reading

Why the rise of wearable tech to monitor employees is worrying

Shutterstock.com

In this guest post, Ivan Manokha, Departmental Lecturer in International Political Economy at the University of Oxford, considers the use of wearable technology in the workplace and the potential privacy implications of collecting the data of employees. 

An increasing number of companies are beginning to digitally monitor their employees. While employers have always scrutinised their workers’ performance, the rise of wearable technology to keep tabs has more of a dystopian edge to it. Monitoring has become easier, more intrusive and is not just limited to the workplace – it’s 24/7.

Devices such as Fitbit, Nike+ FuelBand and Jawbone UP, which can record information related to health, fitness, sleep quality, fatigue levels and location, are now being used by employers who integrate wearable devices into employee wellness programmes.

One of the first was BP America, which introduced Fitbit bracelets in 2013. By 2015 at least 24,500 of BP’s employees were using them, and more and more US employers have followed suit. In the same year, for instance, Vista Staffing Solutions, a healthcare recruitment agency, started a weight-loss programme using Fitbits and wifi-enabled bathroom scales. Appirio, a consulting company, started handing out Fitbits to employees in 2014.

In the UK similar projects are under consideration by major employers. And this trend will only intensify in the years to come. By 2018, estimates suggest that more than 13m of these devices will be part of worker wellness schemes. Some analysts say that by the same year, at least 2m employees worldwide will be required to wear health-and-fitness trackers as a condition of employment.

According to some, this is a positive development. Chris Brauer, an academic at Goldsmiths, University of London, argues that corporate managers will now be comparable to football managers. They will be equipped with a dashboard of employee performance trajectories, as well as their fatigue and sleep levels. They will be able to pick only the fittest employees for important business meetings, presentations, or negotiations.

It seems, however, that such optimism overlooks important negative and potentially dangerous social consequences of using this kind of technology. History here offers a word of warning.

Historical precedent

The monitoring of workers’ health outside the workplace was once attempted by the Ford Motor Company. When Ford introduced a moving assembly line in 1913 – a revolutionary innovation that enabled complete control over the pace of work – the increase in productivity was dramatic. But so was the rise in worker turnover. In 1913, every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963, as workers struggled to keep up with the pace and left shortly after being recruited.

Ford’s solution to this problem was to double wages. In 1914, the introduction of a US$5 a day wage was announced, which immediately led to a decline in worker turnover. But high wages came with a condition: the adoption of healthy and moral lifestyles.

The company set up a sociology department to monitor workers’ – and their families’ – compliance with its standards. Investigators would make unannounced calls upon employees and their neighbours to gather information on living conditions and lifestyles. Those who were deemed insufficiently healthy or morally upright were immediately disqualified from the US$5 wage level.

Analysing Ford’s policies, Italian political philosopher and revolutionary Antonio Gramsci coined the term “Fordism” for this social phenomenon. It signalled fundamental changes to labour, which became much more intense after automation. Monitoring workers’ private lives to control their health, Gramsci argued, was necessary to preserve “a certain psycho-physical equilibrium which prevents the physiological collapse of the worker, exhausted by the new method of production”.

Parallels today

Today, we are faced with another great change to how work is done. To begin with, the “great doubling” of the global labour force has led to the increase in competition between workers around the world. This has resulted in a deterioration of working and employment conditions, the growth of informal and precarious labour, and the intensification of exploitation in the West.

So there has been a significant increase in the average number of hours worked and an increase in the intensity of labour. For example, research carried out by the Trade Union Congress in 2015 discovered that the number of people working more than 48 hours in a week in the UK was rising and it warned of a risk of “burnout Britain”.

Indeed, employee burnout has become a major concern for employers. A UK survey of human resources directors carried out in 2015 established that 80% were afraid of losing top employees to burnout.

Ford’s sociology department was shut down in the early 1920s for two reasons. It became too costly to maintain in the face of increasing competition from other car manufacturers, and employees increasingly resisted home visits by inspectors, which were seen as too intrusive into their private lives.

Wearable technology, however, does not suffer from these inconveniences. It is not costly and it is much less obviously intrusive than surprise home visits by company inspectors. Employee resistance appears to be low, though there have been a few attempts to fake the results of the tracking (for example, workers strapping their employer-provided Fitbits onto their dogs to boost their “activity levels”). The idea of being tracked has mostly gone unchallenged.

Labour commodified to the extreme

But the use of wearable technology by employers raises a range of concerns. The most obvious is the right to privacy. The use of wearable technology goes significantly further than computer systems where emails are already logged and accessible to employers.

Surveillance becomes continuous and all-encompassing, increasingly unconfined to the workplace, and also constitutes a form of surveillance which penetrates the human body. The right to equal employment opportunities and promotion may also be compromised if employers reserve promotion for those who are in a better physical shape or suffer less from fatigue or stress.

It may also be argued that the use of wearable technology takes what the Hungarian economic historian Karl Polanyi called the “commodification” of human labour to an extreme. Monitoring worker health both inside and outside the workplace involves the treatment of people as machines whose performance is to be maximised at all costs. However, as Polanyi warned, human labour is a “fictitious commodity” – it is not “produced” for sale to capital as a mere tool. To treat it as such risks ultimately leading to a “demolition of society”.

To protect individual rights, systems have been introduced to regulate how data gathered on employees is stored and used. One possible solution is to make the anonymisation of data collected by trackers compulsory. For example, Sociometric Solutions, a company that collects and monitors employee data for businesses, only charts broader patterns and connections to productivity, rather than individual performance.

This, however, does not address concerns about the increasing commodification of human labour that comes with the use of wearable technology and any potential threats to society. To prevent this, it is perhaps necessary to consider imposing an outright ban on its use by employers altogether.

Ivan Manokha, Departmental Lecturer in International Political Economy, University of Oxford

This article was originally published on The Conversation. Read the original article.