AI regulation ‘clearly’ needs to happen quickly, government accepts

Concerns have been raised by Labour MP Darren Jones that existing plans to make sure AI is safe and secure don’t go far enough

The government has accepted it will “clearly” have to work quickly to regulate artificial intelligence after a senior Labour MP said the current approach wasn’t fit for purpose.

Darren Jones - who chairs the Business Select Committee - used a Commons debate on Monday (May 22) to warn that if ministers got the regulation wrong, Britain would “suffer further decline and become even less in control of its future”.

What was the original plan to deal with AI?


In March, the government announced a “white paper” - a document setting out how it intended to tackle artificial intelligence (AI). It said it wanted to avoid “heavy-handed legislation” which could stifle innovation and encourage companies involved in AI not to invest in the UK.

Instead, it called on regulators that already exist - like the Competition and Markets Authority - to publish guidance for the sectors they already look after, with advice on things like risk assessment. The government also put £2 million into a “sandbox” - essentially a trial space where businesses building AI could test how regulation might apply to their products.

But in the past two months, concerns have been raised (by Jones and others) that this “light touch” approach won’t be enough to deal with the AI revolution heading our way. Programmes like ChatGPT have already revolutionised text-based tasks, using so-called “deep learning” to hold a conversation with the person asking questions. Meanwhile, the telecoms giant BT announced last week that it would cut up to 55,000 jobs - with some customer service staff set to be replaced by AI.

Has the plan changed?

At last week’s G7 summit in Hiroshima, world leaders including Rishi Sunak agreed to “identify potential gaps and fragmentation in global technology governance”. The Prime Minister went further than he had done previously - admitting it would be necessary to put “guardrails” in place around AI to make sure it was deployed “safely and securely”.

Concerns have been raised that AI deepfakes could disrupt the next general election

In his Commons debate, Jones said he welcomed the use of AI to “transform our public services and businesses” for the better but insisted that the UK “absolutely must create the conditions” for it to happen in an “ethical and just way”. He added that Parliament needed to reassure itself “we have created those conditions” before a “tragedy or scandal” triggered by AI becomes a reality.

Jones has also issued stark warnings about the use of “deepfake” videos or audio recordings of politicians saying or doing things they never have - which could be released during an election campaign to disrupt Britain’s democratic process. Experts have told NationalWorld they’re very worried about this threat and that the UK is on the “precipice” of electoral disruption caused by AI.

What has the government said?

Responding to Jones’s concerns, Technology Minister Paul Scully said the world had to tackle the challenges of AI together, and that the UK had to “achieve the right balance between responding to risks and maximising the opportunities” of the technology.

But he accepted that any changes would have to be made “at pace”. Scully went on: “It was only a few months ago that we first heard of ChatGPT, and we now have prompt engineers - a new, relatively well-paid occupation that until recently no one had ever heard of.”
