AI Guidance: UK publishes first international standard on how to safely manage artificial intelligence

The standard offers direction on how businesses can responsibly develop and deploy AI tools both internally and externally

The UK’s national standards body has published a first-of-its-kind international standard on how to safely manage artificial intelligence (AI). The guidance sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards.

The British Standards Institution (BSI) published the standard to offer direction on how businesses can responsibly develop and deploy AI tools both internally and externally. The report comes amid ongoing debate about the need to regulate the fast-moving technology, which has become increasingly prominent over the last year thanks to the public release of generative AI tools such as ChatGPT.


The UK held the first global AI Safety Summit last November, where world leaders and major tech firms met to discuss the safe and responsible development of AI, as well as the potential long-term threats the technology could pose. Those threats included AI being used to create malware for cyber attacks and even posing a potentially existential risk to humanity, were humans to lose control of the technology.

Susan Taylor Martin, chief executive of BSI, said of the new international standard: “AI is a transformational technology. For it to be a powerful force for good, trust is critical. The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology which, in turn, offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

The guidance includes requirements to create context-based risk assessments, as well as additional controls for both internal and external AI products and services. Scott Steedman, director general for standards at BSI, said: “AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework.

“While government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them. In this fast moving space, BSI is pleased to announce publication of the latest, international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.


“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI. Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy."

Mr Steedman added: “The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”
