Online Safety Act: balancing protection and free speech

By Mark Jones
The Online Safety Act imposes strict duties on providers to protect users, sparking debate over free speech and regulation
The Online Safety Act was heralded as a landmark piece of legislation, designed to make the internet safer for all users under the slogan that the UK would be the “safest place to be online” in the world. At its core, the Act places legal duties on online service providers to protect users from illegal and harmful content.
The focus of the Act is not on Ofcom moderating individual pieces of content, but on providers themselves proactively assessing the risks of harm to their users and putting in place systems and processes to keep them safer online.
Such measures need to be proportionate, with each provider determining its own approach to the risks of harm.
From principles to practice
Now that online service providers are subject to these duties, what does that mean in practice for content moderation?
Some critics have condemned the Online Safety Act as an attack on free speech, even though the Act itself expressly provides for free speech to be protected: Part 3, Clause 18 sets out “Safety duties protecting rights to freedom of expression and privacy”. Leaving that to one side, there is a risk of over-removal of lawful content by providers as they seek to comply with their duties to protect users from illegal and harmful content.
Providers are required to actively monitor for and remove illegal content and content that is harmful to children. This involves not only removing existing illegal and harmful content, but also preventing it from appearing in the first place.
The Act carries significant consequences for non-compliance. These range from substantial fines, of up to £18 million or 10% of a company’s annual global turnover, whichever is greater, to the service being blocked in the worst cases of abuse, should Ofcom apply to the courts to restrict access.
These penalties create an incentive to err on the side of caution and proactively remove content, so there is a risk that content is removed even when it is in fact lawful.
If lawful content is removed as a result of providers complying with their legal duties, what will the effect be? Content that may be vital for community support, such as discussions of domestic abuse, could be disproportionately affected.
Users could also have posts about their own personal experiences removed if that content is judged harmful to others, meaning that sensitive topics may not be explored fully.
Regulation in a digital age
Determining what is harmful and unlawful is not always clear cut. The people making these decisions, who work for the platforms, are unlikely to have legal training and need to make decisions quickly. Time is often of the essence in order to stop harmful content from spreading online.
Algorithms used by platforms are not neutral. They can be used to direct opinions and to show users only one side of an argument. Tackling these algorithms to make them safe is key to making the internet a safer place.
Which is worse: that a lawful post is removed in a genuine bid to comply with the regulations and remove illegal or harmful content, or that harmful and illegal content is allowed to be published and disseminated, and people are harmed? The way we live our lives has changed significantly.
Social media has exploded over the past decade. Offences that once took place in the ‘real world’ are now taking place online. People hide behind the fact that their actions take place online, as if that somehow mitigates the effects of their conduct. That cannot be right.
In my view, the far greater risk comes from a lack of regulation rather than from the regulation itself. I have seen, first-hand, the effects of revenge pornography, cyberbullying, misinformation and disinformation. The consequences of online content can be utterly devastating, underlining the need for regulation.