Author: Dr Rys Farthing

Seven years ago, Australia passed its first online safety law, the Enhancing Online Safety Act, updating and expanding it in 2021 with the Online Safety Act. While both Acts have problems and pitfalls, they were ‘global firsts’: attempts to legislate to address the problem. As the UK’s Online Safety Bill slowly makes its way, under a now-caretaker government, through its third reading and into the House of Lords, it is timely to reflect on some lessons from the Australian experience over the past seven years. Below are four reflections on how the UK can ensure its reforms adequately tackle online abuse in all its forms.

The Take-Down Strategy

First, focusing on notice and take-down won’t fix things. No country can delete its way out of this problem, one piece of content at a time. While this may sound a little obvious, when Australia was forging the path for the world’s first online safety law, take-down was the central strategy.

Australia’s first legislative attempt, the Enhancing Online Safety Act 2015, embraced this straightforward and ‘single-minded’ approach. If content was deemed to be cyber-bullying targeting children, it had to be taken down. While the scale of the risks the digital world poses is immense, and by today’s standards a ‘cyber-bullying only’ focus seems woefully inadequate, it was a bold first move. New ways to define cyber-bullying, new mechanisms to report it, new responsibilities for digital service providers to take it down and new authorities to oversee it all had to be first imagined and then implemented.

This mammoth effort created a take-down-centric path that Australian regulation has been stuck in ever since. In 2018, for example, non-consensual image sharing was added to the Act as the second type of unsafe content to address. And the 2021 update added another type of unsafe content to the list, cyber-abuse of adults (as well as ‘abhorrent violent material’ as defined by the Criminal Code and material refused classification by Australia’s Classification Board, bringing the Act into line with existing regulations).

One of the key problems of this approach might have crossed your mind already. What exactly is cyber-bullying or cyber-abuse material? Under the 2021 Act, cyber-abuse is defined as content that an ‘ordinary reasonable person’ would agree was intended to harm an adult, and that an ‘ordinary reasonable person’ would consider ‘menacing, harassing or offensive’. That’s a frightfully open definition that’s bound to clash with all sorts of cultural and class expectations, as well as the obvious tension between victims’ experiences and the privileged perspective of perpetrators. What feels very menacing or offensive to someone on the receiving end might be considered ‘just in jest’ by offenders. It’s also focused entirely on individual safety, missing online threats at the societal or community level. If your approach centres on deleting ‘bad content’, someone has to define it. And that’s always going to be a problem.

In the UK, this has been partly kicked into the long grass in the Online Safety Bill. While there’s clarity about addressing already-illegal content, there’s an expectation that regulators can and will define legal-but-harmful content later. While the threshold is expected to be high, going beyond disagreement or mere offence, it’s still open. The lesson from Australia is that this isn’t easy: the definitions matter and deserve close attention.

Another problem with this approach, as implemented in Australia, is that it puts all of the burden on victims to report content after the harm has occurred. The Australian Acts lack any proactive responsibilities or monitoring by either the Commissioner or the platforms; harm inevitably has to happen before the Acts ‘kick in’. The requirements in the UK’s Bill around increasing transparency (especially around legal-but-harmful content) are welcome. They should shift the balance of responsibility from victims to platforms.

Focus on Systems and Processes

Second, and flowing from this, the central flaw of a take-down-centric approach becomes apparent: its impact is always going to be modest. In 2020-21, Australia issued two take-down notices regarding image-based abuse, five Abhorrent Violent Material notices, and addressed 954 complaints of cyber-bullying directed at children. Regulators (and victims) are stuck playing whack-a-mole, requesting this or that piece of content be taken down as quickly as it’s posted. Without a systemic focus, or a trillion-dollar budget for regulators to become de facto global content moderators, it just doesn’t work. What’s needed is a focus on systems and processes, and on what digital services themselves can do to reduce the risks online before harms happen.

This is where the UK’s Online Safety Bill shows potential, in the multiple overlapping duties of care it creates for platforms. Incidentally, a systemic focus was somewhat included in Australia’s updated 2021 approach, as a sort of add-on that will shortly see a co-regulatory approach to “basic online safety” standards implemented. But while Australia has adopted a content-first, systemic-safety-second approach, the UK has reversed this, which could prove far more effective. At the very least, both countries will make excellent case studies for global comparison for years to come.

An Independent Regulator Running a Public Complaints Process

While our first two points have a ‘what not to do’ flavour, our third and fourth are Australian innovations notably lacking from the current UK proposals, an absence that might weaken their overall impact. Australia’s very first version of the Act, way back in 2015, established the politically popular office of the eSafety Commissioner: an independent regulator tasked with running a public complaints mechanism, alongside a more significant education mandate. The independence of the regulator and its public-facing complaints procedure have been the key ingredients in the (albeit limited) gains Australia has made in the online space.

The public complaints mechanism has meant that, under every version of Australia’s online safety legislation, members of the public have been able to access a complaints service that operates as a ‘backstop’. Children, parents, women and those targeted by some of the worst forms of online content, often left with no recourse from the platforms themselves, have been able to avail themselves of an independent office able to compel platforms to remove content. This is not a systemic solution and the remedies on offer are limited, but it provides a sense of safety. It’s a hard sell to convince a voting public that legislation is working and keeping them safer if their own individual experiences of harm have no avenue for redress.

In the unique Australian milieu, this popularity has been problematic. The accessibility and popularity of these individualised solutions may have provided cover for the lack of systemic ones. Projecting the perception of safety without a systemic underpinning can in fact be disingenuous, facilitating the perpetuation of harms. But an Online Safety Bill that includes both might be genuinely effective and popular.

Australia’s eSafety Commissioner is independent from politics (although the appointee themselves came from Big Tech, which has been criticised). The current proposals in the UK, by contrast, open up space for potential executive influence over regulatory oversight. Political independence has afforded the eSafety Commissioner public trust, as well as enduring influence within their remit; the Australian experience could be instructive here too.

Final Thoughts

The proposals on the table in the UK appear to be a very distant cousin of the Australian legislation. These differences will, hopefully, help the UK avoid some of the significant problems that have hampered impact in Australia. But elements of the Australian model may be missing, roles that Ofcom simply can’t fulfil. It will be interesting to see, when the Bill finally passes, the nature and scale of the impact it creates, and how this compares to the very different Antipodean approach.

Rys is a policy wonk who focuses on children’s rights, especially around technology and disadvantage. She holds a DPhil from the University of Oxford, where she was a Clarendon scholar, and an MSc from the LSE. Rys has held policy roles at civil society organisations including Reset Tech (Australia), 5Rights Foundation (UK) and Fairplay (US), as well as at the APPG on Poverty. She has also held academic posts at Oxford and RMIT, and is a Research Associate at the Information Law & Policy Centre.