Category Archives: Ethics

LSE Experts on T3: Omar Al-Ghazzi

This post is re-posted from the LSE Media Policy Project Blog.

As part of a series of interviews with LSE Faculty on themes related to the Truth, Trust and Technology Commission, Dr Omar Al-Ghazzi talks to LSE MSc student Ariel Riera on ‘echo chambers’ in the context of North Africa and the Middle East.

AR: The spread of misinformation through social media is a main focus of the Commission. Are there similar processes in the Middle East and in the North Africa region?

OA: Questions about trust, divisions within society, and authoritarian use of information or what could be called propaganda are very prevalent in the Middle East and North Africa. So in a way a lot of the issues at hand are not really new if we think about communication processes globally. Much of the attention that misinformation has been getting is in relation to Trump and Brexit. But Syria, for instance, is actually a very productive context to think through these questions, because with the uprising and the war, there was basically an information blackout where no independent journalist could go into the country. This created an environment where witnesses, citizen journalists and activists filled that gap. So it is now a cliché to say that the war in Syria is actually the most documented war. But all that information has not led to a narrative that people understand in relation to what’s happening. And that has to do with trust in digital media and the kind of narratives that the government disseminates. The echo chamber effect in the way people access information from online sources they agree with is also as prevalent in the Middle East as it is globally.

AR: And in these countries, who are the perpetrators of fake news and misinformation and what are the channels?

OA: It is a complicated question because if we talk about the war in Syria, the communication environment is much more complex than the binary division between fake and real. For instance, I am interested in the reporting on the ground in areas that are seeing or witnessing war and conflict. I will give you an example. Now in the suburbs of Damascus, where there is a battle between rebels and the government, there are several cases of children and teenagers doing the reporting. So how should this be picked up by news organisations, and what are the consequences? CNN recently called one of the teenagers based in Eastern Ghouta, Muhammed Najem, a ‘combat reporter’. What are the ethical considerations of that? Does that encourage that teenager to take more risks, for instance, to get that footage? How can what he produces be objective if, first, as a very young person he obviously has no journalism training and, second, he is in a very violent context where his obvious interest lies in his own survival and in drawing attention to his and his community’s suffering? He has a voice that he wants to be heard and which should be heard. But why is the expectation, if he is dubbed a ‘combat reporter’, that what he produces should be objective news reporting?

Beyond this example of the complex picture in war reporting, I think the Middle East region also teaches us that when there is a lack of trust in institutions of any country in the world, when there is division in society about a national sense of belonging, about what it means to be a patriot or a traitor, that would produce mistrust in the media. Basically, a fractured political environment engenders lack of trust in media, and engenders that debate around fake or real. So there is a layer beyond the fakeness and realness that’s really about social cohesion and political identity.

AR: Nationalist politicians all over the world have found in social media a way to bypass mainstream media and appeal directly to voters. What techniques do they use to do this?

OA: Perhaps in the Middle East you don’t find an example of a stream of consciousness relayed live on Twitter as is the case with President Trump, but, like elsewhere in the world, politicians are on Twitter and even foreign policy is often communicated there. Also, a lot of narratives that feed into conflicts, like the Arab-Israeli conflict, take shape on social media. So without looking at social media you certainly don’t get the full picture, even of the geopolitics in the region. Without social media, one would not grasp how government positions get internalised by people and how people contribute, whether by feeding into government policies or by resisting them.

AR: Based on your observations in North Africa and the Middle East, can mistrust or even distrust of mainstream media outlets be a healthy instinct? For example, if mainstream media is a place where only one voice is heard.

OA: Even though a lot of the media are politicised in the Arab world because they are government owned, people have access to media beyond those of their own governments because of a common regional cultural affiliation, a shared language and the nature of the regional media environment. So actually people in the Arab world are sophisticated media users because they have access to a wide array of media outlets. Of course, there are outlets that are controlled by governments wherever one may be situated and things vary between different countries, but audiences can access pan-Arab news media such as Al Jazeera, Al Arabiya and Al Mayadeen. They have access to a wide array of online news platforms as well as broadcast news. So you really have a lot of choices. If you are a very informed audience member you would watch one news outlet to know, let’s say, what the Iranian position on a certain event is, and then watch a Saudi-funded channel to see the Saudi position. But of course, most people don’t do that because, you know, they just access the media that offers the perspective they already agree with.

We have to remember that in the context of the Middle East there are a lot of different conflicts, there is war, which obviously heightens people’s emotions, their allegiances and their worldviews. So we are also talking about a context in which, because of what is happening on the ground, people feel strongly about their political positioning, which feeds into the echo chamber effect.

AR: You wrote that, at least linked to the Arab Spring, there was a ‘diversity of acts referred to as citizen journalism’. What differentiates these practices from the journalism within established media?

OA: Basically, in relation to the 2011 Arab uprisings, there were a lot of academic and journalistic approaches that talked about how these uprisings were Facebook or Twitter revolutions, or that theorised digital media practices only through the lens of citizen journalism. But I argued that we cannot privilege one lens to look at what digital media does on the political level because a lot of people use digital media, from terrorist organisations to activists on the ground to government agents. So one cannot privilege a particular use of digital media, focus on that, and make claims about digital media generally, when actually the picture is much more complicated and needs to be sorted out more.

Of course the proliferation of smartphones and social media offered ordinary people the opportunity to have their own output, to produce witness videos or write opinions. It is a very different media ecology because of that. However, we cannot take for granted how social media is used by different actors. In social science we have to think about issues of class, literacy, the urban rural divide, the political system, the media system. And, within that complexity, locate particular practices of social media rather than make blanket statements about social media doing something to politics generally and universally.

Dr Omar Al-Ghazzi is Assistant Professor in the Department of Media and Communications at LSE. He completed his PhD at the Annenberg School for Communication, the University of Pennsylvania, and holds MAs in Communication from the University of Pennsylvania and American University and a BA in Communication Arts from the Lebanese American University.


AI trust and AI fears: A media debate that could divide society


In this guest post, Dr Vyacheslav Polonski, Researcher, University of Oxford examines the key question of trust or fear of AI.

We are at a tipping point of a new digital divide. While some embrace AI, many people will always prefer human experts even when they’re wrong.

Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Continue reading

The Legal Challenges of Social Media

How has the law adapted to the emergence and proliferation of social media tools and digital technology? Furthermore, how successful has the law been in governing the challenges associated with an ongoing reformulation of our understandings of public and private spaces in the online environment?

These were the key questions discussed by a panel of experts at the Information Law and Policy Centre earlier this month. The event heralded the launch of a new book entitled ‘The Legal Challenges of Social Media’, edited by Dr David Mangan (City Law School) and Dr Lorna Gillies (University of Strathclyde). A number of the book’s authors provided insights into the contents of their individual chapters.

Social Media and Press Regulation

Professor Ian Walden began proceedings with a discussion of his chapter on press regulation. His chapter was informed by his own experience on the board of the Press Complaints Commission (PCC) between 2009 and 2014.

Walden started by addressing the question of what constitutes “press law”. Walden highlighted that for the most part journalists and editors are subject to the same law as everyone else – there is no special ‘public interest’ defence or journalistic exemption for hacking into the voicemail of a mobile phone user, for example. At the same time, journalists abide (to varying degrees) by an Editors’ Code which goes beyond the provisions of the law. In this context, the online environment and social media have rendered press regulation even more complex in a number of ways.

Continue reading

Call for Papers: Automated decision-making, machine learning and artificial intelligence

Information Rights, Policy & Practice, a peer-reviewed, open access, interdisciplinary journal for academics and practitioners alike, is seeking submissions for its Autumn 2017 special issue on automated decision-making, machine learning and artificial intelligence.

Perspectives from a variety of disciplines are welcome and encouraged, including papers on present and future challenges, policy and theoretical perspectives and ethical issues.

The journal is looking for articles of 5,000 to 10,000 words; forward-thinking pieces of 3,000 to 5,000 words; case reports of 3,000 to 5,000 words; policy reports of 1,000 to 2,000 words; as well as book reviews of 700 to 1,000 words. All word counts are exclusive of footnotes.

For more information about the journal’s focus and aims, its online submission processes and requirements, and to register with the journal, please go to www.jirpp.org.uk.

Deadline for submissions for the Autumn 2017 issue: 31 AUGUST 2017

The journal is also looking for a reviewer of the following book:
Private Power, Online Information Flows and EU Law: Mind the Gap by Angela Daly (2016, Hart).
Please contact julian.dobson@winchester.ac.uk to request to review this book.

About IRP&P

IRP&P is an open access, international, peer-reviewed journal seeking to create a space to allow academics and practitioners across a multitude of fields to reflect and critique the law, policy and practical reality of Information Rights, as well as to theorise potential future developments in policy, law and regulation.

@IRPandPJournal
www.jirpp.org.uk

Why the rise of wearable tech to monitor employees is worrying


In this guest post, Ivan Manokha, Departmental Lecturer in International Political Economy at the University of Oxford, considers the use of wearable technology in the workplace and the potential privacy implications of collecting employees’ data.

An increasing number of companies are beginning to digitally monitor their employees. While employers have always scrutinised their workers’ performance, the rise of wearable technology to keep tabs has more of a dystopian edge to it. Monitoring has become easier, more intrusive and is not just limited to the workplace – it’s 24/7.

Devices such as Fitbit, Nike+ FuelBand and Jawbone UP, which can record information related to health, fitness, sleep quality, fatigue levels and location, are now being used by employers who integrate wearable devices into employee wellness programmes.

One of the first was BP America, which introduced Fitbit bracelets in 2013. By 2015 at least 24,500 of BP’s employees were using them, and more and more US employers have followed suit. For instance, in the same year, Vista Staffing Solutions, a healthcare recruitment agency, started a weight-loss programme using Fitbits and wifi-enabled bathroom scales. Appirio, a consulting company, started handing out Fitbits to employees in 2014.

In the UK similar projects are under consideration by major employers. And this trend will only intensify in the years to come. By 2018, estimates suggest that more than 13m of these devices will be part of worker wellness schemes. Some analysts say that by the same year, at least 2m employees worldwide will be required to wear health-and-fitness trackers as a condition of employment.

According to some, this is a positive development. Chris Brauer, an academic at Goldsmiths, University of London, argues that corporate managers will now be comparable to football managers. They will be equipped with a dashboard of employee performance trajectories, as well as their fatigue and sleep levels. They will be able to pick only the fittest employees for important business meetings, presentations, or negotiations.

It seems, however, that such optimism overlooks important negative and potentially dangerous social consequences of using this kind of technology. History here offers a word of warning.

Historical precedent

The monitoring of workers’ health outside the workplace was once attempted by the Ford Motor Company. When Ford introduced a moving assembly line in 1913 – a revolutionary innovation that enabled complete control over the pace of work – the increase in productivity was dramatic. But so was the rise in worker turnover. In 1913, every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963, as workers struggled to keep up with the pace and left shortly after being recruited.

Ford’s solution to this problem was to double wages. In 1914, the introduction of a US$5 a day wage was announced, which immediately led to a decline in worker turnover. But high wages came with a condition: the adoption of healthy and moral lifestyles.

The company set up a sociology department to monitor workers’ – and their families’ – compliance with its standards. Investigators would make unannounced calls upon employees and their neighbours to gather information on living conditions and lifestyles. Those that were deemed insufficiently healthy or morally right were immediately disqualified from the US$5 wage level.

Analysing Ford’s policies, Italian political philosopher and revolutionary Antonio Gramsci coined the term “Fordism” for this social phenomenon. It signalled fundamental changes to labour, which became much more intense after automation. Monitoring workers’ private lives to control their health, Gramsci argued, was necessary to preserve “a certain psycho-physical equilibrium which prevents the physiological collapse of the worker, exhausted by the new method of production”.

Parallels today

Today, we are faced with another great change to how work is done. To begin with, the “great doubling” of the global labour force has led to the increase in competition between workers around the world. This has resulted in a deterioration of working and employment conditions, the growth of informal and precarious labour, and the intensification of exploitation in the West.

So there has been a significant increase in the average number of hours worked and an increase in the intensity of labour. For example, research carried out by the Trades Union Congress in 2015 discovered that the number of people working more than 48 hours in a week in the UK was rising, and it warned of a risk of “burnout Britain”.

Indeed, employee burnout has become a major concern of employers. A UK survey of human resources directors carried out in 2015 established that 80% were afraid of losing top employees to burnout.

Ford’s sociology department was shut down in the early 1920s for two reasons. It became too costly to maintain in the context of increasing competition from other car manufacturers. And employees increasingly resisted home visits by inspectors, which were seen as too intrusive into their private lives.

Wearable technology, however, does not suffer from these inconveniences. It is not costly and it is much less obviously intrusive than surprise home visits by company inspectors. Employee resistance appears to be low, though there have been a few attempts to fake the results of the tracking (for example, workers strapping their employer-provided Fitbits onto their dogs to boost their “activity levels”). The idea of being tracked has mostly gone unchallenged.

Labour commodified to the extreme

But the use of wearable technology by employers raises a range of concerns. The most obvious is the right to privacy. The use of wearable technology goes significantly further than computer systems where emails are already logged and accessible to employers.

Surveillance becomes continuous and all-encompassing, increasingly unconfined to the workplace, and also constitutes a form of surveillance which penetrates the human body. The right to equal employment opportunities and promotion may also be compromised if employers reserve promotion for those who are in a better physical shape or suffer less from fatigue or stress.

It may also be argued that the use of wearable technology takes what the Hungarian economic historian Karl Polanyi called the “commodification” of human labour to an extreme. Monitoring worker health both inside and outside the workplace involves the treatment of people as machines whose performance is to be maximised at all costs. However, as Polanyi warned, human labour is a “fictitious commodity” – it is not “produced” for sale to capital as a mere tool. To treat it as such risks ultimately leading to a “demolition of society”.

To protect individual rights, systems have been introduced to regulate how data gathered on employees is stored and used. So one possible solution is to make the anonymisation of data collected by trackers compulsory. For example, one company that collects and monitors employee data for businesses, Sociometric Solutions, only charts broader patterns and connections to productivity, rather than individual performance.

This, however, does not address concerns about the increasing commodification of human labour that comes with the use of wearable technology and any potential threats to society. To prevent this, it is perhaps necessary to consider imposing an outright ban on its use by employers altogether.

Ivan Manokha, Departmental Lecturer in International Political Economy, University of Oxford

This article was originally published on The Conversation. Read the original article.

‘Tracking People’ research network established

A new research network has been established to investigate the legal, ethical, social and technical issues which arise from the use of wearable, non-removable tagging and tracking devices.

According to the network’s website, tracking devices are increasingly being used to monitor a range of individuals including “offenders, mental health patients, dementia patients, young people in care, immigrants and suspected terrorists”.

The interdisciplinary network is being hosted at the University of Leeds and aims to foster “new empirical, conceptual, theoretical and practical insights into the use of tracking devices”.

The network is being coordinated by Professor Anthea Hucklesby and Dr Kevin MacNish. It will bring together academics, designers, policy-makers and practitioners to explore critical issues such as:

  • privacy;
  • ethics;
  • data protection;
  • efficiency and effectiveness;
  • the efficacy and suitability of the equipment design;
  • the involvement of the private sector as providers and operators;
  • the potential for discriminatory use.

Readers of the Information Law and Policy Centre blog might be particularly interested in a seminar event scheduled for April 2017 which will consider the “legal and ethical issues arising from actual and potential uses of tracking devices across a range of contexts”.

For further information, check out the network’s website or email the team to join the network.

Whistleblowers and journalists in the digital age


Dr Aljosha Karim Schapals, research assistant at the Information Law and Policy Centre, reports on a research workshop hosted by the University of Cardiff on Digital Citizenship and the ‘Surveillance Society’.

A workshop led by researchers at the Cardiff School of Journalism, Media and Cultural Studies (JOMEC) on 27th June in London shared the findings of an 18-month ESRC-funded research project examining the relationships between the state, the media and citizens in the wake of the Snowden revelations of 2013.

It was the concluding event of a number of conferences, seminars and workshops organised by the five principal researchers: Dr Arne Hintz (Cardiff), Dr Lina Dencik (Cardiff), Prof Karin Wahl-Jorgensen (Cardiff), Prof Ian Brown (Oxford) and Dr Michael Rogers (TU Delft).

Broadly speaking, the Digital Citizenship and the ‘Surveillance Society’ (DCSS) project has investigated the nature, opportunities and challenges of digital citizenship in light of US and UK governmental surveillance as revealed by whistleblower Edward Snowden.

Touching on more general themes such as freedom of expression, data privacy and civic transparency, the project aligns with the research activities of the Information Law and Policy Centre, which include developing work on journalism and whistleblower protection, and discussions and analysis of the Investigatory Powers Bill. Continue reading