This post was originally written by Dr Damian Tambini, Associate Professor in the Department of Media and Communications at LSE, for the LSE Media Policy Project Blog.


It has been an extraordinarily busy year in UK tech policy. The Furman Review reported on digital competition, recommending changes to competition law and a new regulator to deal with data dominance, competition and consumer welfare. The Online Harms White Paper outlined a comprehensive new regulatory framework – and proposed a new regulator – to deal with everything from online disinformation to cyberbullying to incitement to violence, as I discuss here. And a new Digital Services Tax has been proposed to claw back some of the surplus from tech firms. These proposals followed a number of parliamentary inquiries on fake news, internet regulation and hate crime, as well as our own LSE Commission on Truth, Trust and Technology.

With so many policy rabbits scurrying in different directions, it is time to ask: what is the relationship between competition policy and the other interventions, and is there sufficient coordination? In particular, is the competition approach likely to conflict with the other regulatory proposals? As we argued in our T3 report, the deeper policy issues at stake will take a long time to resolve, and it is crucial that the UK adopts a coordinated approach if powerful platform companies are not to game the system.

On the basis of the evidence currently available, there does indeed seem to be a lack of coordination and a danger that the different policies may conflict. With legislation expected in 2020, it is time to ask what the best policy sequence would be, and how the overlaps can be dealt with.

Tech mergers: a new approach

Take one example: the Furman review sets out a number of sensible reforms to merger rules which should tip the balance in favour of more referrals and a precautionary approach to big tech mergers. If these reforms are implemented, it would no longer be possible for Facebook to purchase companies like Instagram or WhatsApp and integrate their services and data with its own social network and Messenger, or for tech platforms to pursue the ‘strangle at birth’ strategy of purchasing potential competitors. Regulators will be more alive to the dangers of data concentration and the potential for indirect network effects to create problems that previous methodologies were not good at spotting. So in the competition area, we can expect some progress on the basis of these sensible recommendations.

But competition, data portability, interoperability and consumer switching will not deal with some of the deeper harms at stake. And the attempt to do so through a ‘duty of care’ – which another part of government is recommending – may well conflict with the ‘pro-competition’ approach of Furman. In particular, introducing duties of care and other regulatory obligations can raise barriers to market entry by obliging platforms to invest more in moderation.

Online harms: how much does Facebook spend on moderation, and why does it matter?

Let’s look at the example of the duty of care proposed by the UK government’s Online Harms White Paper: the attempt to oblige Facebook and other platforms to deal better with the negative externalities associated with their services. There has been a debate about whether Facebook, for instance, has been dealing effectively with illegal content and other harms, and even whether the company enforces its own terms of service and community guidelines. The proposal is to ensure that platforms are held to account for this by a new regulator.

As a result of this controversy about Facebook moderation, there is a fair amount of information in the public domain on what the company currently does, and from this we can infer costs. In March 2019, Facebook claimed to have 15,000-30,000 content moderators worldwide. It was reported that a moderator based in the US takes home around $30,000 a year, so a conservative estimate of the cost to Facebook would be $40,000 per moderator per annum. Many moderators are located in developing countries such as India, where the cost could be as little as $5,000 per moderator.

Assuming US cost per moderator of $40,000: 40,000 × 30,000 = $1.2bn

Assuming Indian cost per moderator of $5,000: 5,000 × 30,000 = $150m

Therefore, the estimated total global cost of moderation for Facebook is something between $150m and $1.2bn, a significant expenditure. However, not only are these very low estimates based on limited data, they are also a snapshot of a fast-changing scene.
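For readers who want to test how sensitive this range is to the underlying assumptions, the same back-of-envelope calculation can be sketched in a few lines of Python. The moderator count and per-moderator costs below are simply the rough public figures cited above (and the names used are my own illustrative choices), so the output should be read as an order-of-magnitude estimate, not a verified figure.

```python
# Back-of-envelope estimate of Facebook's annual content moderation spend.
# All figures are the rough public reports cited above, not verified data.

MODERATORS = 30_000   # upper end of the reported 15,000-30,000 range
COST_US = 40_000      # assumed fully loaded annual cost per US-based moderator ($)
COST_INDIA = 5_000    # assumed annual cost per moderator based in India ($)

def annual_spend(moderators: int, cost_per_moderator: int) -> int:
    """Total annual moderation cost under a single per-moderator cost assumption."""
    return moderators * cost_per_moderator

high = annual_spend(MODERATORS, COST_US)     # $1.2bn if all moderators cost US rates
low = annual_spend(MODERATORS, COST_INDIA)   # $150m if all moderators cost Indian rates

print(f"Estimated annual moderation spend: ${low:,} to ${high:,}")
```

Plugging in the lower bound of 15,000 moderators, or a blended mix of US and Indian costs, would shift the estimate accordingly; the point is the order of magnitude, not the precise figure.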

There are consistent reports that the social network is scaling up operations following several public scandals, including those involving incitement and hate speech, and Facebook has announced increases in moderator numbers in response, for example, to the German NetzDG law (the company now reports having 1,200 moderators in Germany). As a result of this law, which obliges platforms to take down material considered to breach the law on hate speech, Germany now has the most developed liability framework. In July 2019, after 18 months of operation under the new law, Facebook was fined €2m by Germany’s Federal Office of Justice for not observing the correct procedures and for selectively reporting on complaints.

There have been many discussions of the ‘human cost’ of moderation, and this is already translating into increased costs (longer rest periods, in-work support) and legal fees and compensation due to the distressing nature of the work. The numerous stories reporting that Facebook moderators are receiving more money, or that more of them are being hired in particular countries, show the enormous scale of the operation. And an independent report on content moderation from May 2019 shows that the volume of content requiring moderation runs into the millions of items.

Facebook is also proposing to introduce a number of ‘appeals procedures’, including a ‘supreme court’ for content moderation. This will dramatically increase the cost per staff member, as legal and other expertise will be required.

So, moderation costs are rising extremely rapidly and arguably already constitute a significant barrier to entry for any social network trying to compete with a giant like Facebook. With the imposition of new regulatory obligations and standards, these barriers to entry will only grow.

Hipster anti-trust and the power problem

The other point to note is that Furman decisively rejects so-called ‘hipster’ or ‘neo-Brandeisian’ anti-trust. This new approach, increasingly influential in the US with Democrats such as Elizabeth Warren, calls for the break-up of tech platforms on the basis not only of their entrenched market dominance, but of the implications of that dominance for political power. Furman says that such an approach should not be taken for various reasons, including the claim that consumer welfare may be better served by large players.

However, there is an important distinction to be made here. It may be the case, as former US Government official Carl Shapiro argued in his influential article, that anti-trust is not designed to deal with political concentrations of power. But this is not the same thing as saying that the power problem does not exist.

Platform power and the role of technology in our society pose completely novel questions about privacy, data, new media and how they can be used to shape human behaviour and democracy. Furman may be right that competition law is not the way to deal with platform power or with the fundamental rights issues tied up with dominance and data, but those issues still need to be dealt with. There is clearly not enough coordination between these different policy initiatives, and they are pulling in different directions. Regulation of online harms will raise barriers to entry, and will likely undo the efforts of Furman to increase competition. Government should be thinking about a coordinated approach to the long-term negotiation with platforms.