More than four years on from a deadly New Zealand terror attack, online harm watchdogs argue social media companies are still letting large amounts of hateful and dangerous speech spread unchecked.
On 15 March 2019, 51 Muslim worshippers were gunned down as they prayed in the city of Christchurch. The gunman*, who pleaded guilty to 51 murder charges, 40 attempted murder charges, and one charge of terrorism, was sentenced to life in prison without the possibility of parole.
The Australian national live-streamed 17 minutes of his first attack, on the Al Noor Mosque, on Facebook Live, leading to the content - now banned in NZ and the UK - being viewed and shared widely online in the aftermath of the attacks.
NZ Prime Minister Jacinda Ardern and French President Emmanuel Macron established the Christchurch Call to Action later that year, bringing the global community together after the attack with the twin goals of addressing the drivers of terrorism and eliminating terrorist and violent extremist content online.
Some of the Call's 120 signatories were big tech companies, including social media giants Twitter, Facebook and Instagram owner Meta, and YouTube owner Google. Online service providers that signed the Call committed to taking transparent, specific actions to stop the upload of terrorist and violent extremist content - and to prevent its spread on social media and content-sharing services.
They also promised to set and enforce community standards or terms of service that prioritise moderation of violent extremist content; to mitigate the risk of terrorist content being livestreamed; and to review algorithms that may drive users towards, or amplify, terrorist content, in an effort to stop people becoming radicalised.
Is extremist content still being shared on social media after the Christchurch Call?
Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), told NationalWorld that in 2019, Meta, Twitter, and Google each committed to uphold the Christchurch Call. "They promised that they would be ‘resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence’," he said.
"But this was an empty promise. Three years later, our research found that social media companies – including Facebook, Instagram, TikTok, Twitter, and YouTube – failed to act on 89% of posts containing anti-Muslim hate reported to them," he said.
Twitter failed to act on 97% of posts, he said, while none of the 23 videos reported to YouTube were acted upon.
Some examples of content not acted upon included false claims that Muslims were inherently violent, conspiracies about a Muslim plan to “Islamize” Western countries, depictions of Muslims as deceptive and untrustworthy, racist caricatures depicting Muslims as inhuman, and sectarian Hindu nationalist hate narratives against Muslims.
Hood said: "Identity-based hate runs rife on social media platforms, and has provided warped justifications for terrorism and murder. When Big Tech fails to act, they know there is a significant threat of offline harm."
What action are social media companies taking in response to the Christchurch Call?
A Meta spokesperson told NationalWorld they remained committed to combatting hate and violent extremism on their platforms. "Since March 15, 2019 and the Christchurch Call, we tightened our policies, strengthened our detection technology, expanded initiatives to redirect people from violent extremism, and improved our ability to work with other companies to respond quickly to mass violence," they said.
Meta continued: "We continue to collaborate with governments, the industry, civil society and the [Global Internet Forum to Counter Terrorism] on clear actions to combat hatred and terrorism online."
After the Christchurch mosque attack in 2019, Facebook worked with the UK's Metropolitan Police, using footage from police body cameras to train its algorithms to recognise videos of real-life shootings. The company also works with the Global Internet Forum to Counter Terrorism to help expand its membership and advance technical solutions, and it reaffirmed its commitment to the Christchurch Call last year. "We continue to invest in crisis response and protocols on our platforms, and in collaborations with industry, government, and NGO partners to eliminate terrorist and harmful content online", Meta said in a statement.
ByteDance, Google, and Twitter were also approached for comment - although Twitter is understood to no longer have a communications department.
What is the UK government doing to meet its Christchurch Call obligations?
The Christchurch Call was also signed by 58 governments, including the UK. They agreed to their own set of commitments, including creating effective laws and enforcement; encouraging media to abide by ethical standards; and countering the drivers of terrorism and violent extremism through education, targeting inequality and building media literacy.
A government spokesperson told NationalWorld the UK has been at the forefront of international approaches to this issue. At the heart of its efforts was the new Online Safety Bill, currently making its way through the House of Lords.
“The UK is dedicated to reducing terrorist and violent extremist content online and is an active supporter of the Christchurch Call to Action," they said. "As a signatory, the UK is committed to work with other governments, civil society organisations and tech companies to progress the Call’s objectives and tackle the drivers of terrorism."
“Our ground-breaking Online Safety Bill will make sure that the UK is the safest place to be online, requiring all companies to take robust action against illegal content."
In-scope companies, including the largest social media platforms, must proactively tackle priority offences under the Bill and prevent users from being exposed to them, they said. "[This] includes terrorism offences and incitement to violence.”
Under the Online Safety Bill, companies will also be required to have effective and accessible mechanisms for users to easily report concerns and have them addressed in a timely and appropriate manner. Ofcom will develop specific codes of practice recommending steps that in-scope companies can take to comply with their safety duties in relation to priority offences.
All services must ensure that terrorism content is not prevalent, and is not persistently present, on their service. If they fail in these duties, they will be made to pay substantial fines or, in the most extreme circumstances, have their sites blocked by the independent regulator Ofcom.
*NationalWorld has followed New Zealand media's example, in not naming the gunman to prevent giving him the notoriety he sought.