The Online Safety Bill aims to establish a new way to regulate online content, including abusive messages, bullying and pornography. It will place obligations on companies to keep people safe, while also protecting users’ rights to freedom of expression and privacy.
The Culture Secretary, Michelle Donelan, has recently announced amendments to the bill, such as axing a measure that would have forced big social media sites to take down legal but harmful content. The Samaritans have called the move a “hugely backward step”, while Labour said it was a “major weakening”.
However, Donelan has insisted “nothing is getting watered down” and the bill was brought back to the House of Commons on Monday (5 December) where MPs continued to debate it. Ms Donelan has pledged that the bill will become law “during this parliamentary session” – which is due to end in the spring.
On Monday ex-health secretary Sajid Javid said in the Commons that the government must reveal when it will publish plans to deal with content promoting suicide online.
Meanwhile, Labour’s Alex Davies-Jones said the decision to remove parts of the Online Safety Bill aimed at curbing “legal but harmful” internet content would allow comments like those of Kanye West to spread.
Here we explain the history of the bill, what changes have been made and what the reaction to it has been.
What is the history of the bill?
The bill can be traced back to the idea of checks and balances on potentially harmful online content first raised under David Cameron’s premiership, which was set out in the Internet Safety Green Paper published in 2017. The green paper looked at the responsibilities of companies to keep users safe and the use of technical solutions to prevent online harm.
Following this, in May 2018 the government announced that it would publish a white paper setting out plans for online safety legislation.
The Online Harms White Paper of April 2019 proposed a new regulatory framework to prevent harm online. In its December 2020 response, the government said that an Online Safety Bill would be introduced.
A draft Online Safety Bill was published in May 2021, and was subject to pre-legislative scrutiny by a joint committee of the two Houses of Parliament.
The committee’s report was published on 14 December 2021, and the government’s response, published on 17 March 2022, explained how it would incorporate 66 of the committee’s recommendations into the bill.
What changes have been made?
Under the original bill’s plans, a section required "the largest, highest-risk platforms" to tackle some legal but harmful material accessed by adults. It meant that the likes of Facebook, Instagram and YouTube would have been tasked with preventing people from being exposed to content such as self-harm, eating disorder and misogynistic posts.
These measures drew criticism from free speech campaigners, who claimed that governments or tech platforms could use the bill to censor certain content.
Under the amended bill, tech giants will instead be told to introduce a system giving users more control to filter out harmful content they do not want to see. Adults will be able to access and post anything legal, provided a platform’s terms of service allow it, although children must still be protected from viewing harmful material.
Donelan told BBC News the revised bill offered "a triple shield of protection - so it’s certainly not weaker in any sense".
The “triple shield of protection” requires platforms to:
- remove illegal content
- remove material that violates their terms and conditions
- give users controls to help them avoid seeing certain types of content to be specified by the bill
This could include content promoting eating disorders or inciting hate on the basis of race, ethnicity, sexual orientation or gender reassignment, although there will be exemptions to allow legitimate debate. Donelan told LBC: “We are removing the legal but harmful (duties), which would have led to unintended consequences and have an erosion of free speech. Whereas we’re rebalancing this for some common-sense approaches.”
Other changes will require technology companies to assess and publish the risk of potential harm to children on their sites. Donelan said many tech companies may be international, but they will have to face the “ramifications” of British law if they fall foul of the new rules.
Companies must also explain how they will enforce age limits, since knowing users’ ages will be a key part of preventing children from seeing certain types of content. Writing for The Telegraph, Ms Donelan said: “Some platforms claim they don’t allow anyone under 13 – any parent will tell you that is nonsense. Some platforms claim not to allow children, but simultaneously have adverts targeting children. The legislation now compels companies to be much clearer about how they enforce their own age limits.”
Under the changes, users’ accounts must not be removed unless they have broken the law or the site’s rules. Social media companies could also face fines from Ofcom of up to 10% of annual turnover if they fail to fulfil policies to tackle racist, homophobic or other content harmful to children on their platforms.
What happened to Molly Russell?
It was recently announced that the encouragement of self-harm would be prohibited in the update to the Online Safety Bill.
The Department for Digital, Culture, Media and Sport (DCMS) said this change had been influenced by the case of Molly Russell, the 14-year-old who ended her own life in November 2017 after viewing social media content linked to depression, self-harm and suicide.
Molly’s father Ian Russell said the changes to the bill are “very hard to understand”.
He told BBC Radio 4’s Today programme: “What we need is the assurance from the Secretary of State that this watering down of the Bill by removing the legal but harmful content is at least boosted in other measures to make it safe for not just young people but for all of us to be online. I don’t see how you can see the removal of a whole clause as anything other than a watering down.”
‘Social media firms can no longer remain silent bystanders’
Donelan said: “I am determined that the abhorrent trolls encouraging the young and vulnerable to self-harm are brought to justice. So I am strengthening our online safety laws to make sure these vile acts are stamped out and the perpetrators face jail time.
“Social media firms can no longer remain silent bystanders either and they’ll face fines for allowing this abusive and destructive behaviour to continue on their platforms under our laws.”
Deputy Prime Minister and Justice Secretary Dominic Raab said: “Lives and families have been devastated by those who encourage vulnerable internet users to self-harm. Our changes will ensure the full force of the law applies to those callous and reckless individuals who try to manipulate the vulnerable online in this way.”
However, Julie Bentley, chief executive of Samaritans, hit out at the new changes, describing the dropping of the requirement to remove “legal but harmful” content as “a hugely backward step”.
She said: “Of course children should have the strongest protection but the damaging impact that this type of content has doesn’t end on your 18th birthday. Increasing the controls that people have is no replacement for holding sites to account through the law and this feels very much like the Government snatching defeat from the jaws of victory.”
Meanwhile, shadow culture secretary Lucy Powell said the bill had been “undermined”.
Powell said: “Replacing the prevention of harm with an emphasis on free speech undermines the very purpose of this Bill, and will embolden abusers, Covid deniers, hoaxers, who will feel encouraged to thrive online.”