AI: Rishi Sunak’s adviser on artificial intelligence latest to warn about possible dangers of tech

Matt Clifford was speaking as the PM prepared to discuss AI with Joe Biden in Washington this week

Rishi Sunak’s adviser on artificial intelligence has suggested that - in the worst-case scenario - the technology could be powerful enough within two years to create cyber and biological weapons capable of killing “many humans”.

In an interview with Talk TV, Matt Clifford said that wasn’t a prediction - as there was no consensus on the issue - but added his voice to calls for AI to be regulated internationally to avoid the development of “very powerful” systems beyond the control of humanity. The Prime Minister is expected to discuss AI with US President Joe Biden when the pair meet in Washington this week.

Who is Matt Clifford?

Clifford is chairman of the Advanced Research and Invention Agency - which was set up to, in its own words, “create new capabilities that can benefit the UK and advance human progress”.

He’s also advising Sunak on the development of a new £100 million taskforce intended to make sure that AI chatbots like ChatGPT - which use so-called “deep learning” to hold conversations with the people questioning them - are “safe and reliable”.

What did he say?

Speaking to Talk TV on Monday evening (5 June), Clifford said: “I think there are lots of different types of risks with AI and often in the industry we talk about near-term and long-term risks, and the near-term risks are actually pretty scary.”

“You can use AI today to create new recipes for bio weapons or to launch large-scale cyber attacks. These are bad things.”

Clifford added that AI systems were becoming “more and more capable at an ever-increasing rate” and said it was possible that they could surpass human intelligence within two years - although he admitted that prediction was at the “bullish end of the spectrum”.

He also insisted that the tech could be a force for good if deployed in the right way. “You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon neutral economy,” he said.

What have other experts said about AI?

Last week, some of the world’s leading figures in the field - including the boss of OpenAI, which developed ChatGPT - signed a joint statement, arguing it was a “global priority” to find ways to reduce the risk of AI causing human extinction. Another signatory was Dr Geoffrey Hinton - described as the “godfather of AI” - who left his job at Google last month so he could publicly voice his own concerns about the technology. He said he regretted his work on chatbots and suggested they might soon have more knowledge than human brains.

AI is already starting to have an impact on jobs - with some concerned its effect on the employment market could be as seismic as the Industrial Revolution. The telecoms giant BT recently announced it could cut 55,000 posts - with some customer service staff set to be replaced by AI. NationalWorld has also reported on fears that ‘deepfakes’ could soon be used to disrupt UK elections.

What is the government doing?

In response to recent warnings, the Prime Minister said last week: “People will be concerned by the reports that AI poses an existential risk like pandemics or nuclear wars. I want them to be reassured that the government is looking very carefully at this.” The government has already acknowledged it will need to regulate more “quickly” than it originally planned.

It’s expected Sunak will discuss AI at length with President Biden on a visit to the White House this week. The Daily Telegraph reports the PM is planning a global summit in London in the autumn to devise international rules around the tech and wants Biden’s support for the initiative.

What about Labour?

Shadow digital secretary Lucy Powell told The Guardian that AI should be licensed in a similar way to medicines or nuclear power.

She said: “That is the kind of model we should be thinking about, where you have to have a licence in order to build these models.”
