Government adviser suggests powerful artificial intelligence may need to be banned


According to a representative of the government's AI Council, some powerful artificial general intelligence (AGI) systems may ultimately need to be banned.

Marc Warner, an AI Council member and head of Faculty AI, told the BBC that AGI needs more built-in safety technology, as well as strong transparency and audit requirements.

Furthermore, "sensible decisions" on AGI are needed in the ensuing six to twelve months.

He made these remarks after the US and the EU declared that a voluntary code of conduct for AI was urgently needed.

The AI Council is an independent expert committee that offers guidance to the government and top AI practitioners.

Faculty AI describes itself as OpenAI's only technical partner, helping customers securely integrate ChatGPT and other AI products into their systems.

While the company's tools assisted in predicting demand for NHS services during the pandemic, its political ties have come under scrutiny.

Mr. Warner signed a Center for AI Safety letter warning that the technology could wipe out humanity. Faculty AI was one of the tech firms whose representatives met with Technology Minister Chloe Smith at Downing Street on Thursday to discuss the dangers, opportunities, and laws required to ensure safe and responsible AI.

Artificial intelligence (AI) refers to a computer's capacity to carry out tasks that typically require human intelligence.

According to Mr. Warner, "narrow AI"—systems used for specialized tasks like text translation or looking for cancer in medical images—could be governed similarly to other forms of technology.

However, AGI systems—a radically new technology—were much more concerning and would require different rules.

Mr. Warner continued: "These are algorithms that are aimed at being as smart as a human or smarter than a human across a very broad domain of tasks - essentially, every task."

According to him, intelligence was the main factor that kept humanity in its dominant position on this planet.

Nobody in the world, according to Mr. Warner, can provide a strong scientific justification for why creating objects that are as intelligent as us or smarter should be safe.

That doesn't necessarily mean it's bad, but it does indicate a risk and the need for caution.

At the very least, he argued, there must be strong limits on the amount of compute [processing power] that can be thrown at these systems.

There is a compelling case that, at some point, we may decide that enough is enough and simply outlaw algorithms with complexity levels or compute requirements above a certain threshold.

However, it is obvious that governments should make that decision rather than technology companies.

Some claim that worries about artificial general intelligence (AGI) are deflecting attention from issues with already-existing technologies, such as bias in AI hiring or facial recognition software.

But according to Mr. Warner, this is equivalent to asking, "Do you want cars or airplanes to be safe? I want both."

Others claim that excessive regulation may deter investors and stifle innovation by making the UK less appealing.

But Mr. Warner claimed that by promoting safety, the UK could gain a competitive edge.

"My long-term bet is that actually, to get value out of the technology, you need the safety -- just like you need the engines to work to get value out of the airplane," he said.

The UK's recent White Paper on AI regulation drew criticism for not establishing a dedicated watchdog.

The UK could play "a leadership role," according to Prime Minister Rishi Sunak, who also outlined the need for "guardrails."


Both European Union Commissioner Margrethe Vestager and US Secretary of State Antony Blinken stated the need for voluntary rules on Wednesday.

Legislative procedures are still being completed for the EU Artificial Intelligence Act, which will be among the first to regulate AI.

The implementation of various pieces of legislation, according to Ms. Vestager, would take two to three years, "and we're talking about a technological acceleration that is beyond belief."

However, a draft voluntary code of conduct would be available in a matter of weeks, and industry and others would be invited to participate.

Following the fourth US-EU Trade and Technology Council meeting, Mr. Blinken said it was crucial to create voluntary codes of conduct that were "open to" a "wide universe" of like-minded countries.

