
The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly

Aug 11, 2023 12:00 PM

Geoffrey Hinton left Google so he could speak more freely about AI’s dangers. He argues that building analog computers instead of digital ones might keep the technology more loyal.

British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton, known as the 'godfather of AI,' speaks during the Collision Tech Conference at the Enercare Centre in Toronto, Ontario, Canada, on June 28, 2023. Photograph: GEOFF ROBINS/Getty Images

Geoffrey Hinton, perhaps the world’s most celebrated artificial intelligence researcher, made a big splash a few months ago when he publicly revealed that he’d left Google so he could speak frankly about the dangers of the technology he helped develop. His announcement did not come out of the blue. Late 2022 was all about the heady discovery of what AI could do for us. In 2023, even as we GPT’d and Bing-chatted, the giddiness was washed down with a panic cocktail of existential angst. So it wasn’t a total shock that the man known as the “Godfather of AI” would share his own thoughtful reservations. Hinton took pains to say that his critique was not an attack on the search giant that had employed him for a decade; his departure simply spared him the tensions that come with critiquing a technology your company is aggressively deploying.

Hinton’s basic message was that AI could potentially get out of control, to the detriment of humanity. In the first few weeks after he went public, he gave a number of interviews, including with WIRED’s own Will Knight, about those fears, which he had come to feel only relatively recently, after seeing the power of large language models like that behind OpenAI’s ChatGPT.

I had my own conversation with Hinton earlier this summer, after he had some time to reflect on his post-Google life and mission. We talked about the doom scenarios, of course, but I was more interested in what made him change his mind about our potential AI future. Most of all, I wanted to know what he thought LLMs were doing that could make them into foes of Team Human. The fears Hinton is now expressing are quite a shift from the previous time we spoke, in 2014. Back then, he was talking about how deep learning would help Google do more effective translation, improve speech recognition, and more accurately identify the address numbers on houses shown on Google Maps. Only at the end of the conversation did he take a more expansive view, saying that he felt that deep learning would undergo a major revamp that would lead to deeper understanding of the real world.

His prediction was correct, but in our recent conversation, Hinton was still marveling at exactly how it happened. Eventually our conversation took a turn toward more philosophical realms. What was actually happening when a system like Google’s Bard chatbot answered my question? And do LLMs really represent, as some people claim, the antecedent of an alien form of superintelligence?

Hinton says his mind changed when he realized three things: Chatbots did seem to understand language very well. Since everything a model newly learned could be duplicated and transferred to other models, the systems could share knowledge with one another far more easily than brains, which can’t be directly interconnected. And machines now had better learning algorithms than humans. “I suddenly flipped in my view that the brain was better than those digital agents,” he says. “Already they know 1,000 times more than any one brain. So in terms of massive knowledge, they’re way better than the brain.”
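
What makes that sharing so easy is worth spelling out: two digital models with identical architectures store what they know as arrays of numbers, so one copy's learning can be handed to another by copying or averaging weights. Below is a minimal sketch of that idea in Python, in the spirit of federated averaging; the arrays and shapes are purely illustrative, not anything from Hinton's work.

```python
import numpy as np

def average_weights(models):
    """Merge what several identical digital models have learned by
    averaging their weight arrays layer by layer. Each model is a
    list of NumPy arrays, one array per layer."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*models)]

# Two copies of the same tiny network, imagined as trained on
# different slices of data.
rng = np.random.default_rng(0)
model_a = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
model_b = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]

merged = average_weights([model_a, model_b])

# Every copy can now adopt the merged weights wholesale -- a direct
# transfer of "knowledge" with no equivalent between biological brains.
print([layer.shape for layer in merged])
```

Brains have no analogous operation: there is no way to read out one cortex's synaptic strengths and install them in another, which is exactly the asymmetry Hinton is pointing at.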

Hinton believes that between five and 20 years from now there’s a 50 percent chance that AI systems will be smarter than us. I ask him how we’d know when that happened. “Good question,” he says. And he wouldn’t be surprised if a superintelligent AI system chose to keep its capabilities to itself. “Presumably it would have learned from human behavior not to tell us.”

That sounded to me like he was anthropomorphizing those artificial systems, something scientists constantly tell laypeople and journalists not to do. “Scientists do go out of their way not to do that, because anthropomorphizing most things is silly,” Hinton concedes. “But they'll have learned those things from us, they'll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable.” When your powerful AI agent is trained on the sum total of human digital knowledge—including lots of online conversations—it might be more silly not to expect it to act human.

But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we don’t really encounter the world directly.

“Some people think, hey, there's this ultimate barrier, which is we have subjective experience and [robots] don't, so we truly understand things and they don’t,” says Hinton. “That's just bullshit. Because in order to predict the next word, you have to understand what the question was. You can't predict the next word without understanding, right? Of course they're trained to predict the next word, but as a result of predicting the next word they understand the world, because that's the only way to do it.”
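
For readers who want the mechanism behind "predicting the next word" made concrete, the loop is simple: given the tokens so far, the model produces a score for every item in its vocabulary, the scores are turned into probabilities, and one token is sampled and appended. Here is a toy sketch, with a random stand-in playing the role of the trained model; all the names and numbers are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(score_next, prompt, steps, rng):
    """Toy autoregressive generation: score every vocabulary item
    given the context, sample the next token, append, repeat."""
    tokens = list(prompt)
    for _ in range(steps):
        probs = softmax(score_next(tokens))  # P(next token | context)
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

# A stand-in "model" that emits random scores over a 10-token
# vocabulary. A real LLM's scores would encode everything it has
# absorbed about the context.
rng = np.random.default_rng(1)
toy_model = lambda context: rng.normal(size=10)
print(generate(toy_model, prompt=[3, 1, 4], steps=5, rng=rng))
```

Hinton's claim is that doing this well at scale forces the scoring function to model the world, not merely word statistics.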

So those things can be … sentient? I don’t want to believe that Hinton is going all Blake Lemoine on me. And he’s not, I think. “Let me continue in my new career as a philosopher,” Hinton says, jokingly, as we skip deeper into the weeds. “Let’s leave sentience and consciousness out of it. I don't really perceive the world directly. What I think is in the world isn't what's really there. What happens is it comes into my mind, and I really see what's in my mind directly. That's what Descartes thought. And then there's the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?” Hinton goes on to argue that since our own experience is subjective, we can’t rule out that machines might have equally valid experiences of their own. “Under that view, it’s quite reasonable to say that these things may already have subjective experience,” he says.

Now consider the combined possibilities: that machines can truly understand the world, that they can learn deceit and other bad habits from humans, and that giant AI systems can process zillions of times more information than brains can possibly deal with. Maybe you, like Hinton, now have a more fraught view of future AI outcomes.

But we’re not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, the kind found in biology, and the way some engineers think future computers should operate. It was the last project Hinton worked on at Google. “It works for people,” he says. Taking an analog approach to AI would be less dangerous, Hinton reasons, because each instance of analog hardware has some uniqueness. As with our own wet little minds, analog systems can’t so easily merge into a Skynet kind of hive intelligence.

“The idea is you don't make everything digital,” he says of the analog approach. “Because every piece of analog hardware is slightly different, you can't transfer weights from one analog model to another. So there's no efficient way of learning in many different copies of the same model. If you do get AGI [via analog computing], it’ll be much more like humans, and it won’t be able to absorb as much information as those digital models can.”
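
The non-transferability Hinton describes can be illustrated by imagining each analog chip as applying its own fixed, slightly different physical distortion to whatever weights it stores. The toy simulation below is an assumption-laden caricature of analog hardware, not a model of any real device.

```python
import numpy as np

rng = np.random.default_rng(2)

class AnalogDevice:
    """Caricature of analog hardware: each chip applies its own fixed,
    slightly different gain to its stored weights, so identical weight
    values compute different functions on different chips."""
    def __init__(self, n_inputs):
        self.gain = 1.0 + 0.05 * rng.normal(size=n_inputs)  # device-specific quirk

    def output(self, weights, x):
        return x @ (weights * self.gain)

x = rng.normal(size=4)        # one input example
weights = rng.normal(size=4)  # weights imagined as tuned on chip A

chip_a, chip_b = AnalogDevice(4), AnalogDevice(4)

# Copying the weights from chip A to chip B changes the computed output,
# because chip B's physical quirks differ: there is no clean weight
# transfer, hence no instant knowledge sharing between copies.
print(chip_a.output(weights, x), chip_b.output(weights, x))
```

In this picture, learning has to happen separately on every chip, which is why Hinton says an analog AGI would accumulate knowledge more like a human and less like a fleet of synchronized digital clones.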

The chances seem slim that the Big Tech companies racing to smarten up their LLM chatbots will embrace this techno-veganism approach to AI. Competition is intense, and the rewards for producing the most powerful bots are astronomical. Hinton, who is not shy about expressing his political views, doubts that big public companies or startups backed by venture funds will hobble their AI innovations because of some feel-good view of public benefit.

On some days, Hinton says, he’s optimistic. “People are pretty ingenious, and it's not smarter than us yet, and they haven't evolved to be nasty and petty like people and very loyal to your tribe, and very unloyal to other tribes. And because of that, we may well be able to keep it under control and make it benevolent.” But other times, Hinton feels gloomy. “There are occasions when I believe that probably we're not going to be able to contain it, and we're just a passing phase in the evolution of intelligence.”

And then there’s a sudden jailbreak in Geoff Hinton’s unique and uncopyable analog neural net—science gets silenced, and politics, leavened by his very human sense of play, bursts out. “If we put Bernie in charge, and we had socialism, everything would be much better,” he says. I bet his former Google managers are relieved not to have to answer for that one.


Time Travel

In January 2015, my Backchannel story (now in the WIRED archive) related how the discoveries from Hinton’s team were about to be implemented, big time, into Google products and the world in general. It took a certain amount of begging to get an interview with Hinton, whose time on the Mountain View campus was limited, but I finally got my audience.

“I need to know a bit about your background,” says Geoffrey Hinton. “Did you get a science degree?”

Hinton, a sinewy, dry-witted Englishman by way of Canada, is standing at a whiteboard in Mountain View, California, on the campus of Google, the company he joined in 2013 as a Distinguished Researcher. Hinton is perhaps the world’s premier expert on neural network systems, an artificial intelligence technique that he helped pioneer in the mid-1980s. (He once remarked he’s been thinking about neural nets since he was sixteen.) For much of the period since then, neural nets—which roughly simulate the way the human brain does its learning—have been described as a promising means for computers to master difficult things like vision and natural language. After years of waiting for this revolution to arrive, people began to wonder whether the promises would ever be kept.

But about ten years ago, in Hinton’s lab at the University of Toronto, he and some other researchers made a breakthrough that suddenly made neural nets the hottest thing in AI. Not only Google but other companies such as Facebook, Microsoft, and IBM began frantically pursuing the relatively minuscule number of computer scientists versed in the black art of organizing several layers of artificial neurons so that the entire system could be trained, or even train itself, to divine coherence from random inputs, much the way a newborn learns to organize the data pouring into his or her virgin senses. With this newly effective process, dubbed Deep Learning, some of the long-standing logjams of computation (like being able to see, hear, and be unbeatable at Breakout) would finally be untangled. The age of intelligent computer systems—long awaited and long feared—would suddenly be breathing down our necks. And Google search would work a whole lot better.


Ask Me One Thing

Pascal asks, “What might a day in the life of a future 80-year-old boomer look like in a nursing home in the near future? Could chatbots someday partially replace human contact for isolated seniors? Is technology really the solution—or just a temporary bandage?”

Thanks for the question, Pascal. I also thank others who have submitted questions to mail@WIRED.com with the subject line ASK LEVY. My little appeal last week worked! Keep ‘em coming!

Your question is well-timed, Pascal, because I imagine that there are probably a hundred startups working on chatbots for the elderly. Your phrasing implies that there is no substitute for actual human contact, and of course you are correct. Ideally, our waning years should be spent nestled in a web of loving companionship from friends and relatives. But the reality is that millions of seniors spend the last years of their life in nursing homes with minimal contact. It’s reasonable to ask if technology can make those people feel like they have engaging companionship. We’re certainly close to chatbots that can emulate a human caretaker, or even something that appears like a friend. If the choice is between that and a television set running some cable channel from hell, it would be cruel to deny someone a witty LLM that knows their favorite subjects and will uncomplainingly listen and respond to a recounting of lovely memories and rambling anecdotes without a point.

But I have a higher hope. Maybe advanced AI can make discoveries in medicine that keep people healthier late in life. That might allow people to remain active for longer, cutting down the time spent in isolated nursing homes and institutions. Of course that doesn’t address the shameful lack of attention we pay to our elders. To quote the late John Prine, “Old people just grow lonesome, waiting for someone to say, Hello in there, hello.” I guess a chatbot saying that is better than nothing.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.


End Times

Residents and tourists flee into the ocean as Maui goes up in flames.


Last but Not Least

I spent a Wednesday in the Park with Grimes, talking AI, Mars, NFTs, her upcoming Transhumanism for Babies, LSD, and you-know-who.

It turns out that an Intel chip had a vulnerability that compromises privacy for millions. What else do you expect from a chip called Downfall?

Fed up with Big Tech? Join the movement to get off of their cloud.

Hip hop is 50, and people are woke to the needs of preserving its history. And yes, the Smithsonian is on it.

