Don’t Ask Dumb Robots If AI Will Destroy Humanity

Jul 20, 2023 12:00 PM

Robots like Sophia are impressive to look at, but don’t let their humanlike facial expressions trick you into thinking these machines are intelligent.

AI robot Sophia pointing her finger

Photograph: FABRICE COFFRINI/Getty Images

Earlier this month, several prominent outlets carried news that artificial intelligence will not pose a danger to humanity. The source of this reassuring news? A bunch of humanoid robot heads connected to simple chatbots.

The news stories sprang from a panel at a United Nations conference in Geneva called AI for Good, where several humanoids appeared alongside their creators. Reporters were invited to put questions to the robots, which included Sophia, a machine made by Hanson Robotics that is notorious for appearing on talk shows and even, bizarrely, being granted legal status as a person in Saudi Arabia.

The questions included whether AI would destroy humanity or steal jobs. The robots' replies were made possible by chatbot technology somewhat similar to the kind that powers ChatGPT. But despite the well-known limitations of such bots, the robots' replies were reported as if they were the meaningful opinions of autonomous, intelligent entities.

Why did this happen? Robots that can visually mimic human expressions trigger an emotional response in onlookers because we are so primed to pick up on such cues. But allowing what is nothing more than advanced puppetry to disguise the limitations of current AI can confuse people trying to make sense of the technology or of recent concerns about problems it may cause. I was invited to the Geneva conference, and when I saw Sophia and other robots listed as “speakers,” I lost interest.

It’s frustrating to see such nonsense at a time when more trustworthy experts are warning about current and future risks posed by AI. Machine learning algorithms are already exacerbating social biases, spewing disinformation, and increasing the power of some of the world’s biggest corporations and governments. Leading AI experts worry that the pace of progress may produce algorithms that are difficult to control in a matter of years.

Hanson Robotics, the company that makes Sophia and other lifelike robots, is impressively adept at building machines that mimic human expressions. Several years ago, I visited the company’s headquarters in Hong Kong and met with founder David Hanson, who previously worked at Disney, over breakfast. The company’s lab was like something from Westworld or Blade Runner, with unplugged robots gazing sadly into the middle distance, shriveled faces flopped on shelves, and prototypes stuttering the same words over and over in an infinite loop.

In-progress facial models for robots
Photograph: Will Knight

Hanson and I talked about the idea of adding real intelligence to these evocative machines. Ben Goertzel, a well-known AI researcher and the CEO of SingularityNET, leads an effort to apply advances in machine learning to the software inside Hanson’s robots that allows them to respond to human speech.

The AI behind Sophia can sometimes provide passable responses, but the technology isn’t nearly as advanced as a system like GPT-4, which powers the most advanced version of ChatGPT and cost more than $100 million to create. And of course even ChatGPT and other cutting-edge AI programs cannot sensibly answer questions about the future of AI. It may be best to think of them as preternaturally knowledgeable and gifted mimics that, although capable of surprisingly sophisticated reasoning, are deeply flawed and have only a limited “knowledge” of the world.

Sophia and company’s misleading “interviews” in Geneva are a reminder of how anthropomorphizing AI systems can lead us astray. The history of AI is littered with examples of humans overextrapolating from new advances in the field.

In 1958, at the dawn of artificial intelligence, The New York Times wrote about one of the first machine learning systems, a crude artificial neural network developed for the US Navy by Frank Rosenblatt, a Cornell psychologist. “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence,” the Times reported—a bold statement about a circuit capable of learning to spot patterns in 400 pixels.
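To appreciate how modest Rosenblatt's machine was next to the Times' predictions, consider that its core idea fits in a few lines of code. The sketch below is a hypothetical toy reconstruction of a perceptron over a 400-pixel input, not the Navy's actual Mark I hardware; the task and all names are illustrative.

```python
import random

random.seed(0)
N = 400  # one weight per pixel, echoing Rosenblatt's 20x20 photocell grid

def predict(weights, bias, pixels):
    # Fire (1) if the weighted sum of pixel values crosses the threshold.
    total = sum(w * x for w, x in zip(weights, pixels)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Classic perceptron learning rule: nudge weights toward each mistake.
    weights = [0.0] * N
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            error = label - predict(weights, bias, pixels)
            if error:
                weights = [w + lr * error * x for w, x in zip(weights, pixels)]
                bias += lr * error
    return weights, bias

# Toy task: tell "mostly lit" images from "mostly dark" ones.
def make_sample(lit):
    p = 0.8 if lit else 0.2
    return [1 if random.random() < p else 0 for _ in range(N)], int(lit)

samples = [make_sample(i % 2 == 0) for i in range(40)]
weights, bias = train(samples)
correct = sum(predict(weights, bias, px) == y for px, y in samples)
print(f"{correct}/{len(samples)} training samples classified correctly")
```

That is the whole trick: a weighted sum, a threshold, and a simple error-driven update. It can learn patterns that are linearly separable, and nothing more, which is why the gap between the circuit and the Times' talk of consciousness was so wide.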

If you look back at the coverage of IBM’s chess-playing Deep Blue, DeepMind’s champion Go player AlphaGo, and many of the past decade’s leaps in deep learning—which are directly descended from Rosenblatt’s machine—you’ll see plenty of the same: people taking each advance as if it were a sign of some deeper, more humanlike intelligence.

That’s not to say that these projects—or even the creation of Sophia—were not remarkable feats, or potentially steps toward more intelligent machines. But being clear-eyed about the capabilities of AI systems is important for gauging the progress of this powerful technology. To make sense of AI advances, the least we can do is stop asking animatronic puppets silly questions.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the Fast Forward newsletter, which explores how advances in AI and other emerging technology are set to change our lives. He was previously a senior editor at MIT Technology Review.

