This blog post was written by Professor Lorna Woods and originally posted on Inforrm.


Concern about the possible harmful effects of social media can now be seen in civil society, politics and the justice system not just in the UK but around the world. 

The remarkable benefits of social media have become tainted by stories raising questions about its adverse effects: that it can be used for bullying; that content on those platforms can seemingly be manipulated for political purposes or can facilitate terrorism and extremism; that the underpinning systems leak data, whether deliberately or inadvertently; that the design of the services themselves may be malign; and that some of the services may be addictive (see, for example, here, here and here).

While some of these stories may be anecdotal, and the research on these issues is still at an early stage, the cumulative impact suggests that market forces and a self-regulatory approach are not producing an ideal outcome in many of these fields. Mark Zuckerberg, in his evidence to the US Congress, said that he would welcome regulation of the right sort, and Jack Dorsey of Twitter has made a public plea for ideas.

Against that background, Will Perrin and I, under the aegis of the Carnegie UK Trust, decided to explore whether there were any regulatory models that could be adopted and adapted to encourage the providers of social media services to take better care of their users, whilst not stifling these companies’ innovation and respecting all users’ freedom of expression.  The following is an outline of what we came up with.

Further detail is available on the Carnegie UK Trust site, where we posted a series of blogs explaining our initial thinking, summarised in our evidence to the House of Lords inquiry into internet regulation. We propose to develop a fuller proposal and in the meantime welcome suggestions on how the proposal could be improved at comms@carnegieuk.org.

Existing Regulatory Models

Electronic communications systems and the mass media content available over them have long been subject to regulation.  These systems do not, on the whole, require prior licensing, but rather notification and compliance with standards. While there were some potential points of interest for a social media regulatory model – e.g. the fact that telecoms operators have to provide subscribers with a complaints process (see General Condition 14 (GC14)) and the guidance given by Ofcom to content providers regarding the boundaries of acceptable and unacceptable content (some of which is based on audience research) – overall these regimes did not seem appropriate for the context of social media.  One concern was that the standards with which the operator must comply were on the whole top-down.  Moreover, the regulator has the power to stop the operator from providing the service, ending the business in that field altogether.  This suggests that these regimes still rely on implicit consent from the regulator as far as the business itself is concerned.

Was the transmission/content analogy the right one, then, for steering us in the direction of an appropriate regulatory model for social media? In our view, social media is not (just) about publishing; rather, it is much more similar to an online public or quasi-public space.  Public spaces in real life vary hugely in terms of who goes where, what they do and how they behave. In all of these spaces, however, a common rule applies: the owners, or those who control the space, are expected to ensure basic standards of safety, while the need for measures and the type of measures needed are, to some extent, context specific.

Lawrence Lessig, in Code and Other Laws of Cyberspace (1999), famously pointed out that software sets the conditions on which the Internet (and all computers) is used – it is the architecture of cyberspace.  Software (in conjunction with other factors) affects what people do online: it permits, facilitates and sometimes prohibits. It is becoming increasingly apparent that it also nudges us towards certain behaviour. It also sets the relationship between users and service providers, particularly in relation to the use of personal data. So social media operators could be asked, when drafting their terms and conditions, writing their code and establishing their business systems, to have user safety in mind.

If we adopt this analogy, several regimes seem likely models on which regulation of social media could be based: the Occupiers’ Liability Act 1957, the Health and Safety at Work Act 1974 and the Environmental Protection Act 1990, all of which establish a duty of care.  The idea of a duty of care derives from the tort of negligence; statutory duties of care were established in contexts where the common law doctrine seemed insufficient (which we think would be true in the majority of cases in relation to social media, due in part to the jurisprudential approach to non-physical injury). Arguably the most widely applied statutory duty of care in the UK is that in the Health and Safety at Work Act 1974, which applies to almost all workplaces and the myriad activities that go on in them. The regime does not set down specific detailed rules as to what must be done in each workplace but rather sets out some general duties that employers have both as regards their employees and the general public.  So s. 2(1) specifies:

It shall be the duty of every employer to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all his employees.

The next sub-section then elaborates on particular routes by which that duty of care might be achieved: e.g. the provision of machinery that is safe; the training of relevant individuals; and the maintenance of a safe working environment. The Act also imposes reciprocal duties on employees. While the Health and Safety at Work Act sets goals, it leaves employers free to determine what measures to take based on risk assessment.

The area is subject to the oversight of the Health and Safety Executive (HSE), whose functions are set down in the Act.  It may carry out investigations into incidents and it has the power to approve codes of practice. It also has enforcement responsibilities and may serve “improvement notices” as well as “prohibition notices”.  As a last resort, the HSE may prosecute.  There are sentencing guidelines which identify factors that influence the severity of the penalty.  Matters that tend towards high penalties include flagrant disregard of the law, failure to adopt measures that are recognised standards, failure to respond to concerns or to change/review systems following a prior incident, and serious or systematic failure within the organisation to address risk.

In terms of regimes focussing on risk, we also noted that risk assessment lies at the heart of the General Data Protection Regulation regime (as implemented by the Data Protection Act 2018). Beyond this risk-based approach – which could allow operators to take account of the types of service they offer as well as the nature of their respective audiences – there are many similarities between the risk-focused regimes. Notably, they operate at the level of the systems in place rather than on particular incidents.

Looking beyond health and safety to other regulators – specifically those in the communications sector – a common element can be seen: changes in policy take place in a transparent manner and after consultation with a range of stakeholders.  Further, all have some form of oversight and enforcement, including criminal penalties, and the regulators responsible are independent of both Parliament and industry. Breach of statutory duty may also lead to civil action.  These matters of standards and of redress are not left purely to the industry.

Implementing a Duty of Care

We propose that a new duty of care be imposed on social media platforms by statute, and that the statute should also set down the particular general harms against which preventative measures should be taken. This does not mean, of course, that a perfect record is required; the question is whether sufficient care has been taken.  Our proposal is that the regulator be tasked with ensuring that social media service providers have adequate systems in place to reduce harm. The regulator would not get involved in individual items of speech unless there was reasonable suspicion that a defective company system lay behind them.

We suggest that the regime apply to social media services used in the UK that:

  1. Have a strong two-way or multi-way communications component;
  2. Display and organise user-generated content publicly or to a large member/user audience;
  3. Have a significant number of users or a significant audience (more than, say, 1,000,000); and
  4. Are not subject to a detailed existing regulatory regime, such as that applying to the traditional media.

Given that there are some groups that we might want to see protected no matter what, another way to approach the de minimis point in (3) would be to remove the limit but to say that regulation should be proportionate to the size of the operator as well as to the risks the system presents. This still risks diluting standards in key areas (e.g. a micro business aimed at children; as the NSPCC have pointed out to us, in the physical world child protection policies apply to even the smallest nurseries). A further approach could be to identify core risks which all operators must take into account, but to require bigger or more established companies to address a fuller range of risks.

The regulator would make the final determination as to which providers fell within the regime’s ambit, though we would envisage a registration requirement.

Our proposals envisage the introduction of a harm reduction cycle.  A harm reduction cycle begins with the measurement of harms. The regulator would, after consultation with civil society and industry, draw up a template for measuring harms, covering their scope, quantity and impact. The regulator would use as a minimum the harms set out in statute but, where appropriate, include other harms revealed by research, advocacy from civil society, the qualifying social media service providers and so on. The regulator would then consult publicly on this template, specifically including the qualifying social media service providers. The qualifying social media service providers would then run a measurement of harm based on that template, making reasonable adjustments to adapt it to the circumstances of each service.

The regulator would have powers in law to require the qualifying companies to comply (see enforcement below). The companies would be required to publish the survey results in a timely manner, establishing a first baseline of harm.  The companies would then be required to act to reduce these harms, submitting a plan to the regulator which would be open to public comment.  Harms would be measured again after sufficient time had passed for harm reduction measures to take effect, repeating the initial process. Depending on whether matters had improved or not, the social media service provider would have to revise its plan, and the measurement cycle would begin again.  Well-run social media services would quickly settle down to a much lower level of harm and shift to less risky service designs. This cycle of harm measurement and reduction would be repeated continually; as in any risk management process, participants would have to maintain constant vigilance.

We do not envisage that the harm reduction process would necessarily involve take-down.  Moreover, we do not consider that a system relying purely on user notification of problematic content or behaviour, and on after-the-event responses, would be taking sufficient steps.  Tools and techniques that could be developed and deployed include:

  • the development of a statement of risks of harm, prominently displayed to all users when the regime is introduced, thereafter to new users, and when new services or features are launched;
  • an internal review system for the risk assessment of new services prior to their deployment (so that risks are addressed before launch, or very risky services are not launched at all);
  • the provision of a child protection and parental control approach, including age verification (subject to the regulator’s approval or adherence to industry standards);
  • the display of a rating of harm, agreed with the regulator, on the most prominent screen seen by users;
  • the development – in conjunction with the regulator and civil society – of model standards of care in high-risk areas such as suicide, self-harm, anorexia, hate crime etc.; and
  • the provision of adequate complaints handling systems with independently assessed customer satisfaction targets, together with a twice-yearly report on the breakdown of complaints (subject, satisfaction, numbers, whether handled by humans or by automated means, etc.) to a standard set by the regulator.

It is central that there be a complaints handling system to cover concerns about the content or behaviour of other users.  While an internal redress system that is fast, clear and transparent is important, we also propose that an external review mechanism be made available.  There are a number of routes that require further consideration: one might be an ombudsman service, commonly used with utility companies although not to great citizen satisfaction; another might be a binding arbitration process; or possibly both.

Finally, the regime must have sanctions.  The range of mechanisms available within the health and safety regime is interesting because it allows the regulator to try to improve conditions rather than just punish the operator (and to some extent the GDPR takes a similar approach). We would propose a similar range of notices.  For those that will not comply, the regulator should be empowered to impose fines, perhaps of GDPR magnitude if necessary.

The more difficult questions relate to what to do in extreme cases. Regulation of health and safety in the UK allows the regulator, in extreme circumstances which often involve a death or repeated, persistent breaches, to seek a custodial sentence for a director. The Digital Economy Act contains a power (Section 23) for the age verification regulator to issue a notice to internet service providers to block a website in the UK. Should there be equivalent powers to send a social media services company director to prison or to turn off the service?  In the USA, the new FOSTA-SESTA package apparently provides for criminal penalties (including, we think, arrest) for internet companies that facilitate sex trafficking.  Given the impact on freedom of expression, these sorts of penalties should be imposed only in the most extreme cases – the question is whether they should be there at all.

Professor Lorna Woods is Chair of Internet Law, School of Law, University of Essex and the joint author of the Carnegie UK Trust proposals with William Perrin