The UK’s internet watchdog is finalizing the first set of rules for the Online Safety Act
The UK’s internet regulator, Ofcom, on Monday published the first set of final guidelines for online service providers subject to the Online Safety Act. This starts the clock ticking on the sprawling online harms law’s first compliance deadline, which the regulator expects to kick in within three months.
Ofcom has been under pressure to move faster in implementing the online safety regime since last summer’s riots, which were widely seen as having been fueled by social media activity. But it is following the process set out by lawmakers, which required it to consult on, and obtain parliamentary approval for, the final compliance measures.
“This decision on the illegal harms codes and guidance marks a major milestone, with online service providers now having a legal obligation to protect their users from illegal harm,” Ofcom wrote in a press release.
“Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Subject to the codes completing the parliamentary process, from 17 March 2025, providers will need to take the safety measures set out in the codes or use other effective measures to protect users from illegal content and activity.
“We are ready to take enforcement action if providers do not act promptly to address the risks on their services,” it added.
According to Ofcom, more than 100,000 tech companies could fall within the scope of the law’s duties to protect users from a range of types of illegal content, in relation to the more than 130 “priority offenses” identified in the legislation, covering areas including terrorism, hate speech, child sexual abuse and exploitation, fraud, and financial crime.
Failure to comply risks fines of up to 10% of global annual turnover (or up to £18 million, whichever is greater).
Companies in scope range from tech giants to “very small” service providers, with diverse sectors affected, including social media, dating, gaming, search, and pornography.
“The duties in the act apply to providers of services with links to the UK regardless of where in the world they are based,” Ofcom noted. “The total number of online services in scope could be more than 100,000, ranging from some of the world’s largest technology companies to very small services.”
The codes and guidance follow a consultation process, with Ofcom drawing on research and stakeholder feedback to help shape the rules, since the legislation was passed by parliament and became law in October 2023.
The regulator has set out measures for user-to-user and search services intended to reduce the risks associated with illegal content. Guidance on risk assessments, record keeping, and reviews is summarized in an official document.
Ofcom has also published a summary covering each chapter of today’s policy statement.
The approach taken by the UK law is the opposite of one-size-fits-all: generally, more obligations are placed on larger services and platforms where many risks may arise, compared with smaller services posing lower risks.
However, smaller, lower-risk services are not exempt from obligations either. Indeed, many requirements apply to all services, such as having a content moderation system that allows for the swift removal of illegal content; having a mechanism for users to submit content complaints; having clear and accessible terms of service; removing the accounts of proscribed organizations; and many more. That said, many of these blanket measures are features that mainstream services, at least, are likely to offer already.
But it is fair to say that every tech company offering user-to-user or search services in the UK will need to at least undertake an assessment of how the law applies to its business, if not make operational revisions to address specific areas of regulatory risk.
For larger platforms with engagement-centric business models, where the ability to monetize user-generated content is tied to keeping a hold on people’s attention, greater operational changes may be needed to avoid falling foul of the law’s duties to protect users from myriad harms.
One of the law’s key levers for driving change is that it imposes criminal liability on senior executives in certain circumstances, meaning tech CEOs could be held personally accountable for some types of non-compliance.
Speaking on BBC Radio 4’s Today program on Monday morning, Ofcom chief executive Melanie Dawes suggested that 2025 will finally bring significant changes to how major tech platforms operate.
“What we’re announcing today is a big moment, actually, for online safety, because within three months, tech companies will need to start taking appropriate action,” she said. “What are they going to need to change? They’re going to have to change the way the algorithms work. They’re going to have to test them to make sure that illegal content like terrorism, hate, intimate image abuse, and much more, in fact, doesn’t appear on our feeds.”
“Then, if things slip through the net, they will have to take them down. For children, we want their accounts to be set to private, so strangers can’t contact them,” she added.
However, Ofcom’s policy statement is just the start of it implementing the legal requirements, with the regulator still working on further measures and duties relating to other aspects of the law, including what Dawes described as “wider protections for children,” which she said would be introduced in the new year.
So the more substantive changes to child safety on platforms that parents have been demanding may not come into force until later in 2025.
“In January, we will be moving forward with our age-check requirements so that we know where children are,” Dawes said. “Then in April, we will finalize the rules on wider child protection, which will cover pornography, suicide, self-harm, violent content and so on, so that this material is just not fed to children in the way that has become so normal but is really harmful today.”
Ofcom’s briefing document also notes that further measures may be needed to keep pace with technological developments, such as the rise of generative AI, suggesting it will continue to review risks and may further develop requirements on providers.
The regulator is also planning “crisis response protocols for emergency events,” such as last summer’s riots; proposals for blocking the accounts of those who have shared CSAM (child sexual abuse material); and guidance on using artificial intelligence to tackle illegal harms.