Rishi Sunak says AI could lead to human extinction and help terrorists but UK won’t ‘rush to regulate’ it

In his speech, Rishi Sunak said: "Mitigating the risk of extinction from AI should be a global priority, alongside other societal scale risks such as pandemics and nuclear war.”

Getting AI wrong could make it easier for terrorists to build chemical or biological weapons, and could even lead to human extinction, Rishi Sunak has said.

However, the UK government will not “rush to regulate” artificial intelligence, because “innovation … it’s a hallmark of the British economy”. Sunak was speaking ahead of an AI Safety Summit at Bletchley Park next week, which will bring together world leaders, tech firms and civil society to discuss the emerging technology.


Ahead of the summit, the Prime Minister announced the government would establish the “world’s first” AI Safety Institute, which he said would “carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of”, while “exploring all the risks”.

Sunak said that mitigating the risk of extinction because of artificial intelligence should be a global priority alongside pandemics and nuclear war. As the government published new assessments on artificial intelligence, the Prime Minister said they offered a “stark warning”.

“Get this wrong and it could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and disruption on an even greater scale,” he said.

“Criminals could exploit AI for cyber attacks, disinformation, fraud or even child sexual abuse.” NationalWorld has reported how paedophiles are using AI to “de-age” celebrities, “nudify” clothed children and depict sexual abuse scenarios.

Rishi Sunak. Credit: Peter Nicholls/PA Wire

Sunak continued: “And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘super intelligence’.

“Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts, mitigating the risk of extinction from AI should be a global priority, alongside other societal scale risks such as pandemics and nuclear war.”

But Sunak said that it was “not a risk that people need to be losing sleep over right now” and he did not want to be “alarmist”. He added: “I think it’s wrong to focus and say it’s all doom and gloom because every which way you look at it – whether it is in healthcare, education or every aspect of our economy and public services – AI is already doing things that are saving us money, making things quicker, improving people’s lives.”

In the speech, the Prime Minister announced:

  • The government will establish the “world’s first” AI safety institute. 
  • Sunak will propose a new “truly global expert panel” to publish a report on the state of AI.
  • The government is “investing almost £1 billion in a supercomputer thousands of times faster than the one you have at home”.
  • The UK is already investing more in AI safety research than any other country in the world.
  • The government won’t “rush to regulate” AI, with Sunak asking: “how can we write laws that make sense for something that we don’t yet fully understand?”

Sunak gave his speech after a new paper was published by the Government Office for Science which said there is insufficient evidence to rule out a threat to humanity from AI.


Based on sources including UK intelligence, it says many experts believe it is a “risk with very low likelihood and few plausible routes”, one that would need the technology to “outpace mitigations, gain control over critical systems and be able to avoid being switched off”.

It said: “Given the significant uncertainty in predicting AI developments, there is insufficient evidence to rule out that highly capable future frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat.”

Three broad pathways to “catastrophic” or existential risks are set out. The first is a self-improving system that can achieve goals in the physical world without oversight and works to harm human interests.

The second is a failure of multiple key systems, after intense competition leads to a single company with a technological edge gaining control and its systems then failing due to problems of safety, controllability and misuse.


Finally, over-reliance was judged to be a threat as humans grant AI more control over critical systems they no longer fully understand and become “irreversibly dependent”.

Peter Kyle, Labour’s Shadow Science, Innovation and Technology Secretary, responded by saying: “Artificial intelligence is already having huge benefits for Britain, and the potential of this next generation of AI could be endless, but it poses risks as well. Safety must come first to prevent this technology getting out of control.

“Rishi Sunak should back up his words with action and publish the next steps on how we can ensure the public is protected. We are still yet to see concrete proposals on how the government is going to regulate the most powerful AI models.

“A Labour government would set clear standards for AI safety, so that this leading tech can be used to restore our public services and boost growth. Labour would use AI to better diagnose diseases, put more money in people’s pockets and help build a better Britain.”
