Rishi Sunak has met the bosses of three of the world’s leading artificial intelligence companies to discuss how AI should be regulated as concerns grow about the potential dangers it poses.
The Prime Minister held talks with the chief executives of Google DeepMind, Anthropic and OpenAI - the company behind ChatGPT, the chatbot that can give human-like responses to text-based questions.
What did they agree?
In a joint statement following the meeting on Wednesday evening (May 24), the PM and the CEOs said there was an acknowledgement that the success of AI depended on having the “right guardrails” so the public could be confident in its safety.
According to Downing Street, they discussed the risks of the technology - from disinformation to national security - in addition to “safety measures” and “possible avenues for international collaboration on AI regulation”.
What concerns have been raised?
Worries about AI range from the trivial to the terrifying. For example, the rise in popularity of ChatGPT has forced teachers to be on the lookout for homework produced by the chatbot.
At the other end of the scale, some experts fear the technology is so transformative that it could pose a threat to the way we live. Earlier this month, Dr Geoffrey Hinton - described as the “godfather of AI” - left his job at Google, saying he regretted his work on chatbots and warning they might soon have more knowledge than a human brain could store.
There’s anxiety about the impact on jobs, too. The telecoms giant BT announced last week it could cut up to 55,000 jobs - with some customer service staff set to be replaced by AI. Sir Patrick Vallance - the government’s former chief scientific adviser - has suggested the tech could have as big an impact on the employment market as the Industrial Revolution.
And there’s also serious concern about the possible impact on our democracy if “deepfake” video or audio recordings are released to coincide with an election, influencing public opinion and affecting the result. Experts told NationalWorld last week that the UK was on the “precipice” of major AI disruption in our political system.
Has the government’s approach to AI changed?
In March, ministers announced a “white paper” - a document setting out how they intended to tackle AI - which focused on avoiding “heavy-handed legislation” that could deter investment in the UK’s tech sector. Instead, they wanted existing regulators to publish guidance on how to manage the risks.
That position has evolved as the risks of AI have gained public attention. At the recent G7 summit in Hiroshima, Sunak and other world leaders agreed to “identify potential gaps and fragmentation in global technology governance” - while in the House of Commons on Monday (May 22), technology minister Paul Scully said the government now accepted it would “clearly” have to work quickly to regulate AI.