US Air Force disputes claim that AI drone in test attacked operator

Airborne military drone

According to the US Air Force, a colonel who described an experiment in which an AI-enabled drone chose to attack its operator in order to complete its mission "misspoke".

Colonel Tucker Hamilton, the US Air Force's chief of AI test and operations, had been speaking at a conference hosted by the Royal Aeronautical Society.

The story quickly went viral.

According to the Air Force, there was no such experiment.

He had described a simulation in which a human operator repeatedly prevented an AI-enabled drone from destroying the surface-to-air missile sites it needed to attack in order to accomplish its mission.

He claimed that, despite being programmed not to kill the operator, the drone ultimately destroyed the communications tower instead, leaving the operator unable to communicate with it.

In a later statement to the Royal Aeronautical Society, Col. Hamilton clarified: "We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome."

He added that it was merely a "thought experiment" and that nothing had actually happened.


There have been a number of recent warnings from those working in the field about the threat AI poses to humanity, though not all experts agree on how serious the risk is.

In an interview with the BBC earlier this week, Prof. Yoshua Bengio, one of three computer scientists dubbed the "godfathers" of artificial intelligence after receiving the prestigious Turing Award for their contributions, said he believed the military should never be given access to AI technology.

He described it as one of the worst places a super-intelligent AI could be put.

Col. Hamilton's claims were widely reported before his clarification. I spent several hours this morning speaking to experts in both AI and defense, who met the claims with a great deal of skepticism.

One defense expert said Col. Hamilton's initial account seemed to lack "important context," if nothing else.

There was speculation on social media that, if such an experiment had taken place, it was more likely to have been a pre-planned scenario than one in which the AI-enabled drone used machine learning to complete the task. In other words, the drone would not have been making decisions about how to achieve the task as it went along, based on what had happened previously.

When I asked Steve Wright, a professor of aerospace engineering at the University of the West of England and a specialist in unmanned aerial vehicles, what he thought of the story, he jokingly replied: "I've always been a fan of the Terminator films."

"Do the right thing" and "don't do the wrong thing" are the two concerns for aircraft control computers, he explained, and this is a prime example of the latter.

"In practice, we deal with this by always having a backup computer, programmed using antiquated methods, that can shut the system down as soon as the primary computer behaves strangely."
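As a rough illustration of the fail-safe pattern Wright describes, here is a minimal sketch of a simple, rule-based backup monitoring a more complex primary controller. All names, limits, and behavior are hypothetical, invented for illustration; real avionics fail-safes are far more rigorous.

```python
# Hypothetical sketch of the backup-computer pattern: a simple,
# conventionally programmed monitor watches the primary controller's
# outputs and cuts it off on anomalous behavior. Not real avionics code.

from dataclasses import dataclass


@dataclass
class Command:
    pitch_deg: float
    throttle: float  # expected range: 0.0 to 1.0


class PrimaryController:
    """Stands in for a complex (possibly ML-based) controller."""

    def next_command(self, step: int) -> Command:
        # Deliberately misbehaves after step 3 to exercise the backup.
        if step > 3:
            return Command(pitch_deg=85.0, throttle=1.5)
        return Command(pitch_deg=2.0, throttle=0.6)


class BackupMonitor:
    """Rule-based monitor with fixed sanity bounds and no learning."""

    MAX_PITCH_DEG = 30.0

    def is_sane(self, cmd: Command) -> bool:
        return (
            abs(cmd.pitch_deg) <= self.MAX_PITCH_DEG
            and 0.0 <= cmd.throttle <= 1.0
        )


def run() -> None:
    primary = PrimaryController()
    monitor = BackupMonitor()
    for step in range(10):
        cmd = primary.next_command(step)
        if not monitor.is_sane(cmd):
            print(f"step {step}: anomalous command {cmd}; backup takes over")
            break  # in a real system: switch to the backup control law
        print(f"step {step}: command accepted {cmd}")


if __name__ == "__main__":
    run()
```

The design point is that the monitor uses only fixed, easily verified rules, so its judgment cannot drift in the way a learned controller's might.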

Follow Zoe Kleinman on Twitter.
