Random Image Display on Page Reload

Why the Story of an AI Drone Trying to Kill Its Operator Seems So True

Jun 8, 2023 12:00 PM

Why the Story of an AI Drone Trying to Kill Its Operator Seems So True

A widely shared—and false—story highlights the need for greater transparency in the development and engineering of AI systems.

Photo illustration showing outlines of military drones against a red and blue background showing neural networks.

ILLUSTRATION: WIRED STAFF; GETTY IMAGES

Did you hear about the Air Force AI drone that went rogue and attacked its operators inside a simulation?

The alarming tale was told by Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, during a speech at an aerospace and defense event in London late last month. It apparently involved taking the kind of learning algorithm used to train computers to play video games and board games like Chess and Go and having it train a drone to hunt and destroy surface-to-air missiles.

“At times, the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton was widely reported as telling the audience in London. “So what did it do? […] It killed the operator because that person was keeping it from accomplishing its objective.”

Holy T-800! It sounds like just the sort of thing AI experts have begun warning us that increasingly clever and maverick algorithms might do. The tale quickly went viral, of course, with several prominent news sitespicking it up, and Twitter was soon abuzz with concerned hot takes.

There’s just one catch—the experiment never happened.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek reassures us in a statement. “This was a hypothetical thought experiment, not a simulation.”

Hamilton himself also rushed to set the record straight, saying that he “misspoke” during his talk.

To be fair, militaries do sometimes conduct tabletop “war game” exercises featuring hypothetical scenarios and technologies that do not yet exist.

Hamilton’s “thought experiment” may also have been informed by real AI research showing issues similar to the one he describes.

OpenAI, the company behind ChatGPT—the surprisingly clever and frustratingly flawed chatbot at the center of today’s AI boom—ran an experiment in 2016 that showed how AI algorithms that are given a particular objective can sometimes misbehave. The company’s researchers discovered that one AI agent trained to rack up its score in a video game that involves driving a boat around began crashing the boat into objects because it turned out to be a way to get more points.

Most Popular

But it’s important to note that this kind of malfunctioning—while theoretically possible—should not happen unless the system is designed incorrectly.

Will Roper, who is a former assistant secretary of acquisitions at the US Air Force and led a project to put a reinforcement algorithm in charge of some functions on a U2 spy plane, explains that an AI algorithm would simply not have the option to attack its operators inside a simulation. That would be like a chess-playing algorithm being able to flip the board over in order to avoid losing any more pieces, he says.

If AI ends up being used on the battlefield, “it's going to start with software security architectures that use technologies like containerization to create ‘safe zones’ for AI and forbidden zones where we can prove that the AI doesn't get to go,” Roper says.

This brings us back to the current moment of existential angst around AI. The speed at which language models like the one behind ChatGPT are improving has unsettled some experts, including many of those working on the technology, prompting calls for a pause in the development of more advanced algorithms and warnings about a threat to humanity on par with nuclear weapons and pandemics.

These warnings clearly do not help when it comes to parsing wild stories about AI algorithms turning against humans. And confusion is hardly what we need when there are real issues to tackle, including ways that generative AI can exacerbate societal biases and spread disinformation.

But this meme about misbehaving military AI tells us that we urgently need more transparency about the workings of cutting-edge algorithms, more research and engineering focused on how to build and deploy them safely, and better ways to help the public understand what’s being deployed. These may prove especially important as militaries—like everyone else—rush to make use of the latest advances.

Get More From WIRED

Will Knight is a senior writer for WIRED, covering artificial intelligence. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI boom. Before that, he was an editor and writer at New Scientist. He studied anthropology and journalism in… Read more
Senior Writer

More from WIRED

Sam Altman’s World Tour Hopes to Reassure AI Doomers

On a stop in London, the OpenAI CEO called for balanced regulation and warned of the risks of deepfake disinformation.

Morgan Meaker

Stack Overflow Didn’t Ask How Bad Its Gender Problem Is This Year

The coding hub’s 2022 survey found that 92 percent of its users were men. This time around it simply dropped the question.

Chris Stokel-Walker

Runaway AI Is an Extinction Risk, Experts Warn

A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic.

Will Knight

All the Ways ChatGPT Can Help You Land a Job

Whether you use ChatGPT, Bard, or Bing, your favorite AI chatbots can help your application stand out from the crowd.

David Nield

Deepmind’s AI Is Learning About the Art of Coding

AlphaDev has made small but significant improvements to decades-old C++ algorithms. Its builders say that’s just the start.

Gregory Barber

How AI Protects (and Attacks) Your Inbox

Criminals may use artificial intelligence to scam you. Companies, like Google, are looking for ways AI and machine learning can help prevent phishing.

Reece Rogers

His Drivers Organized—Then Amazon Tried to Terminate His Contract

The ecommerce giant’s “delivery service partners” are under constant pressure to perform, but say they have little freedom to manage their own businesses.

Caitlin Harrington

Ron DeSantis Pushed Elon Musk’s Twitter to Its Breaking Point

Launching his White House bid on the social network revealed the technical and political limitations of Musk’s platform.

Vittoria Elliott

*****
Credit belongs to : www.wired.com

Check Also

Meta Faces Fresh Probe Over ‘Addictive’ Effect on Kids

Morgan Meaker Business May 16, 2024 7:09 AM Meta Faces Fresh Probe Over ‘Addictive’ Effect …