Campaigners have warned that the government risks undermining trust in AI unless it is more transparent about how it is using the technology.
Prime Minister Rishi Sunak has sought to position the UK as a global leader in developing new rules for AI.
But privacy campaigners say the government's own use of AI-driven systems is too opaque and risks exposing people to discrimination.
The government said it was committed to putting in place "strong guardrails" for AI.
The rapid advance of artificial intelligence (AI) has sparked a flurry of headlines warning of the dangers it may pose to humanity.
Mr. Sunak has said he wants the UK to be the "geographical home" of new safety regulations, and a global summit on regulation will be held in the autumn.
But a number of advocacy groups say the UK government is not doing enough to manage the risks of its own expanding use of AI in areas such as welfare, immigration, and housing.
In a document sent to MPs on cross-party groups on artificial intelligence and data analytics, which the BBC has seen, they argue that the public should be given more information about where and how such systems are used.
It has been signed by civil liberties organizations including Liberty, Big Brother Watch, Open Rights Group, and Statewatch, as well as a number of migrant rights organizations and digital rights lawyers.
Shameem Ahmad, CEO of the Public Law Project (PLP), the legal charity that coordinated the statement, said the government is "behind the curve" in managing risks from its use of artificial intelligence.
She added that while the AI chatbot ChatGPT had "caught everyone's attention," public authorities had been using AI-powered technology for years, sometimes in a "secretive" way.
The government's current AI strategy, set out in a policy paper in March, focused mainly on how best to regulate the technology's rapidly expanding use in industry.
It proposed no new legal restrictions on AI's use in either the public or the private sector, arguing that doing so now could stifle innovation. Instead, existing regulators will draw up new guidance for industry.
That stands in stark contrast to the European Union, which plans to ban public authorities from using AI to classify citizens based on their behavior and to impose strict restrictions on the use of AI-powered facial recognition by law enforcement in public spaces.
AI tools used for border management would also be subject to new controls, such as being recorded in an EU-wide register.
In their statement, the advocacy groups argued that the UK's own blueprint missed a "vital opportunity" to strengthen safeguards around the use of AI by government bodies.
The groups focused in particular on government algorithms, which are often used to analyze vast amounts of data to help officials make decisions.
Some of these tools are believed to use machine learning, a popular type of AI in which systems are trained to become more effective over time. Critics argue that if the underlying data is biased, this can lead to discrimination.
One such system, used by Department for Work and Pensions officials to help identify benefit claimants suspected of fraud, is currently facing a legal challenge on the grounds that it may discriminate against disabled people.
The PLP, which has identified more than 40 automated systems used by public bodies, is also taking legal action against a Home Office algorithm used to flag suspected sham marriages, which it claims may discriminate on the basis of nationality.
Public bodies using such systems must comply with additional equality regulations. But campaigners say it is difficult to tell whether these are being followed, because officials release so few details about how the systems operate.
The document called for the government's algorithm transparency register, which is currently voluntary, to be made mandatory, and for public bodies to be legally required to notify the public when AI is used in decision-making.
It also called for an "adequately resourced" specialist regulator to handle complaints from people adversely affected by AI-driven decisions.
It also raised concerns about the government's Data Protection Bill, currently being debated in Parliament, which campaigners fear would allow significant decisions to be made legally without human oversight.
The government says the existing EU-derived regulations, which date from 2018, are out of date and could impede the development of useful AI tools.
Mariano delli Santi, legal and policy officer at the Open Rights Group, disagreed, arguing that the bill "removed or watered down" existing safeguards and deprived regulators of the resources they need "when AI goes wrong".
The technology department, which is responsible for AI regulation, said the UK's approach would promote "fairness, explainability, and accountability" in new systems.
It added that it was taking an "adaptable" approach to developing new rules, acknowledging the "rapid pace of development in AI capabilities."