The government has set out plans to regulate artificial intelligence with new guidelines on “responsible use”.
Describing it as one of the “technologies of tomorrow”, the government said AI contributed £3.7bn ($5.6bn) to the UK economy last year.
Critics fear the rapid growth of AI could threaten jobs or be used for malicious purposes.
The term AI covers computer systems able to do tasks that would normally need human intelligence.
This includes chatbots able to understand questions and respond with human-like answers, and systems capable of recognising objects in pictures.
A new white paper from the Department for Science, Innovation and Technology proposes rules for general-purpose AI – systems that can be put to a wide range of uses, such as those which underpin the chatbot ChatGPT.
As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety.
There is concern that AI can display biases against particular groups if trained on large datasets scraped from the internet which can include racist, sexist and other undesirable material.
AI could also be used to create and spread misinformation.
As a result, many experts say AI needs regulation.
However, AI advocates say the technology is already delivering real social and economic benefits for people.
And the government fears organisations may be held back from using AI to its full potential because a patchwork of legal regimes could cause confusion for businesses trying to comply with rules.
Instead of giving responsibility for AI governance to a new single regulator, the government wants existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with their own approaches that suit the way AI is actually being used in their sectors.
These regulators will use existing laws rather than being given new powers.
Michael Birtwistle, associate director at the Ada Lovelace Institute, which carries out independent research, said he welcomed the idea of regulation but warned of “significant gaps” in the UK’s approach which could leave harms unaddressed.
“Initially, the proposals in the white paper will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future.
“The UK will also struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators,” he said.
The white paper outlines five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:
• Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
• Transparency and “explainability”: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
• Fairness: AI should be used in a way which complies with the UK’s existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes
• Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
• Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
Over the next year, regulators will issue practical guidance to organisations to set out how to implement these principles in their sectors.
Science, innovation and technology secretary Michelle Donelan said: “Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.”
But Simon Elliott, a partner at the law firm Dentons, told the BBC the government’s approach was “light-touch”, making the UK “an outlier” against global trends in AI regulation.
China, for example, has taken the lead in moving AI regulations past the proposal stage with rules that mandate companies notify users when an AI algorithm is playing a role.
“Numerous countries globally are developing or passing specific laws to address perceived AI risks – including algorithmic rules passed in China or the USA,” continued Mr Elliott.
He warned about the concerns that consumer groups and privacy activists will have over the risks to society “without detailed, unified regulation.”
He is also worried that the UK’s regulators could be burdened with “an increasingly large and diverse” range of complaints, when “rapidly developing and challenging” AI is added to their workloads.
In the EU, the European Commission has published proposals for regulations titled the Artificial Intelligence Act which would have a much broader scope than China’s enacted regulation.
They include “grading” AI products according to how potentially harmful they might be and staggering regulation accordingly. So, for example, an email spam filter would be more lightly regulated than a tool designed to diagnose medical conditions – and some uses of AI, such as social scoring by governments, would be prohibited altogether.
“AI has been around for decades but has reached new capacities fuelled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement.
The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use,” Mr Breton added.
Meanwhile, in the US, the proposed Algorithmic Accountability Act of 2022 would require companies to assess the impacts of the AI they use, but the nation’s AI framework is so far voluntary.