UK’s Online Safety Act Receives Royal Assent
On October 26, 2023, the Online Safety Act received Royal Assent in the Houses of Parliament, making the UK “the safest place in the world to be online,” according to the press release on the government’s official website.
The Act ushers in a new era of internet safety for adults and children alike by placing “world-first legal duties on social media platforms” and taking “a zero-tolerance approach to protecting children from online harm, while empowering adults with more choices over what they see online.” For social media platforms, this means that, now that the Online Safety Act is law, they will have to do the following with regard to the content they host:
- “remove illegal content quickly or prevent it from appearing in the first place, including content promoting self-harm;
- prevent children from accessing harmful and age-inappropriate content including pornographic content, content that promotes, encourages or provides instructions for suicide, self-harm or eating disorders, content depicting or encouraging serious violence or bullying content;
- enforce age limits and use age-checking measures on platforms where content harmful to children is published;
- ensure social media platforms are more transparent about the risks and dangers posed to children on their sites, including by publishing risk assessments;
- provide parents and children with clear and accessible ways to report problems online when they do arise.”
The Act also gives adults more control over the content they see online through three layers of protection intended to:
- Make sure illegal content is removed.
- Hold social media platforms to the promises they make to users when they sign up, through their terms and conditions.
- Offer users the option to filter out content, such as online abuse, that they do not want to see.
Some other changes that the Act will bring are as follows:
- it establishes Ofcom as the online safety regulator;
- it forces social media platforms to remove content that is already illegal under existing law. Examples include: child sexual abuse, controlling or coercive behaviour, extreme sexual violence, fraud, hate crime, inciting violence, illegal immigration and people smuggling, promoting or facilitating suicide, promoting self-harm, revenge porn, selling illegal drugs or weapons, sexual exploitation, and terrorism;
- it creates new offences, such as producing content that promotes self-harm; such content is now illegal, and platforms have an obligation to remove it;
- it creates an obligation for social media platforms to protect children from content that, while not illegal, can be harmful or age-inappropriate for children, such as pornography, content that does not meet a criminal threshold but which promotes, encourages or provides instructions for suicide, self-harm or eating disorders, content that depicts or encourages serious violence, or bullying content;
- it ensures that children under the age of 13 will no longer have access to social media platforms; platforms will have to state what age-assurance technology they are using and prove that they are enforcing their age limits and keeping underage children off their services;
- it protects women and girls through strict rules against the sharing of intimate images without consent, with offenders facing criminal conviction;
- it sets out criminal penalties of six months to two years in prison for those who share, or threaten to share, such intimate images, with the longer sentences applying where this is done with the intent “to cause distress, alarm or humiliation, or to obtain sexual gratification.”
Enforcement of the Online Safety Act falls to Ofcom, the UK’s communications regulator, which will have the power to take action against companies found non-compliant with the Act. Penalties include fines of up to £18 million or 10 percent of annual global turnover, whichever is greater, as well as criminal liability for companies or senior managers who fail “to comply with Ofcom’s enforcement notices in relation to specific child safety duties or to child sexual abuse and exploitation on their service.” In the most extreme cases, Ofcom will be able, with the agreement of the courts, to require internet service providers, payment providers, and advertisers to stop working with a website, preventing it from being accessed in the UK or from generating money.
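To make the “whichever is greater” fine cap concrete, here is a minimal sketch in Python; the turnover figures are hypothetical and purely illustrative:

```python
def max_fine(annual_global_turnover_gbp: float) -> float:
    """Upper bound on an Online Safety Act fine: the greater of
    £18 million or 10% of annual global turnover, per the penalty
    regime described above."""
    FLAT_CAP_GBP = 18_000_000   # fixed statutory figure
    TURNOVER_RATE = 0.10        # 10% of annual global turnover
    return max(FLAT_CAP_GBP, TURNOVER_RATE * annual_global_turnover_gbp)

# Hypothetical examples:
print(max_fine(50_000_000))     # 10% is £5m, below £18m, so the cap is £18,000,000
print(max_fine(1_000_000_000))  # 10% is £100m, above £18m, so the cap is £100,000,000
```

As the second example shows, for the largest platforms the 10 percent figure, not the £18 million floor, determines the maximum penalty.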
Now that the Act has received Royal Assent, most of its provisions will take effect in two months’ time. Meanwhile, the Department for Science, Innovation and Technology is working with the regulator, Ofcom, “to lay the groundwork for the laws to be enforced once they are introduced,” focusing initially on illegal content so that the most serious harms are addressed first, as part of a phased approach as the regulator’s powers come into force.