The World Isn’t Ready for the Next Decade of AI

Aug 16, 2023 7:00 AM

Mustafa Suleyman, cofounder of DeepMind and Inflection AI, talks about how AI and other technologies will take over everything—and possibly threaten the very structure of the nation-state.

ON THIS WEEK’S episode of Have a Nice Future, Gideon Lichfield and Lauren Goode talk to Mustafa Suleyman, the cofounder of DeepMind and Inflection AI. They discuss his new book, The Coming Wave, which outlines why our systems are not set up to deal with the next great leap in tech. Suleyman explains why it's not crazy to suggest that chatbots could topple governments, and he argues for a better way to assess artificial intelligence. (Hint: it has to do with making a million dollars.)

Mustafa Suleyman’s book, The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma, comes out September 5. In the meantime you can check out WIRED’s coverage of DeepMind, Inflection AI, and all things artificial intelligence.

Lauren Goode is @LaurenGoode. Gideon Lichfield is @glichfield. Bling the main hotline at @WIRED.

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, just tap this link, or open the app called Podcasts and search for Have a Nice Future. If you use Android, you can find us in the Google Podcasts app just by tapping here. You can also download an app like Overcast or Pocket Casts and search for Have a Nice Future. We’re on Spotify too.

Note: This is an automated transcript, which may contain errors.

Gideon Lichfield: Oh God, no. I sound like a bad BBC presenter there. Sorry.

[Laughter]

Lauren Goode: I like the BBC.

[Music]

Gideon Lichfield: Hi, I'm Gideon Lichfield.

Lauren Goode: And I'm Lauren Goode. And this is Have a Nice Future, a podcast about how terrifyingly fast everything is changing.

Gideon Lichfield: Each week we talk to someone with big, audacious, and sometimes unnerving ideas about the future, and we ask them how we can all prepare to live in it.

Lauren Goode: Our guest this week is Mustafa Suleyman, the cofounder of AI company DeepMind, and more recently, the cofounder and CEO of Inflection AI.

Gideon Lichfield: He's also the author of an upcoming book about how AI and other technologies will take over the world and possibly threaten the very structure of the nation-state.

Mustafa Suleyman (audio clip): We're now going to have access to highly capable, persuasive teaching AIs that might help us to carry out whatever, you know, sort of dark intention we have. And it is definitely going to accelerate harms—no question about it. And that's what we have to confront.

[Music]

Lauren Goode: So, Gideon, Mustafa is a guest who we both wanted to bring on the podcast, though I think for slightly different reasons. You spoke to him recently at the Collision Conference in Toronto, where you interviewed him on stage. I talked to him at another conference backstage; we talked about his chatbot, Pi. But in bringing him on Have a Nice Future, I really wanted to get a sense of why he's building what he's building—like, do we need another chatbot? And what's the connection there between a chatbot and bettering humanity? And I think that you are more intrigued by some of the big broad themes he's presenting in his new book.

Gideon Lichfield: I just think he has a really interesting background. He's Syrian British, he worked for a while in government and on conflict resolution, then he cofounded DeepMind with a couple of other people, and their goal originally was to solve artificial general intelligence, which is what OpenAI was also created to solve. And they did solve some really important AI problems—like they basically solved how to win the game of Go and various other games, and they worked on protein folding, which is rapidly changing the way that biology and drug development is being done. They sold DeepMind to Google, and he worked there for a few years. And then he left Google, partly because he said it was too bureaucratic and moving too slowly for him, and he founded Inflection, which is making this chatbot, which as we'll hear is meant to be much more than just a chatbot, but it is no longer an attempt to reach artificial general intelligence, which is intriguing. And now here's this book, which says AI and synthetic biology and a bunch of other technologies are developing so fast and will become so widespread that they will undermine the very fabric of our ability to govern our countries and our societies. I'm just really interested in how all of those things come together.

Lauren Goode: Right. There's an arc there of building AI, building AI, building AI … Wait. How worried should we be?

Gideon Lichfield: What are we building?

Lauren Goode: Yeah. Bringing all of those ideas together is kind of the fundamental question that I had for him, especially because he's been both hyping the possibilities of AI and now warning of its rapid advancement and its threats, and I really wanted to drill down on the specifics of that. But I'll give Mustafa this: He comes across as very human in this conversation, which maybe we can't say about all of the AI entrepreneurs we've spoken to.

Gideon Lichfield: Yeah, a lot of people we have on the show talk about building the tech and then letting society figure out the problems and how to regulate it. And, to be fair, Mustafa also seems to believe, as you'll hear, that you need to build the thing first and then figure out where its dangers might lie. But I think he at least has more thoughtful answers than some on what those dangers might be and how we have to start to prepare for them.

Lauren Goode: And that is all in the conversation that's coming up right after the break.

[Music]

Gideon Lichfield: Mustafa Suleyman, welcome to Have a Nice Future.

Mustafa Suleyman: Gideon, hi. Great to be here.

Lauren Goode: Great to have you on the show.

Mustafa Suleyman: Hey, Lauren, thanks for having me.

Lauren Goode: I actually asked Pi, I opened the app when I knew that you were coming in, and I asked it what I should ask you on this podcast. It said, “Ooh, that's exciting, exclamation point. Mustafa Suleyman is an amazing thinker and innovator—”

Mustafa Suleyman: Oh, God. [Chuckle]

Lauren Goode: “And his book is sure to be full of insights. Can I ask you what kind of podcast you have and what you want to get out of the interview?” Et cetera, et cetera. My sense from this is that the data set might be a bit biased. What do you think?

[Laughter]

Mustafa Suleyman: You should reroll it and see if it's different. It might come with the same enthusiasm, but it certainly hasn't been hand scripted by us at all, I promise. [Chuckle]

Lauren Goode: But there are a lot of personalized AI-powered chatbot assistants out there right now. They have been all the rage since OpenAI released its chatbot in late 2022. Why did you decide to go with a chatbot?

Mustafa Suleyman: I believe that conversation is going to be the new interface. If you actually just take a step back for a moment and look at your computer or even your phone, you'll see a huge number of buttons and icons on the bottom navigation in basically multi-colored technicolor dreamland. And it's actually quite an overwhelming experience. That's because it hasn't been designed with a kind of unified clean human-first interface as the first design principle. It's the meeting of these two needs, human translation and the needs of the computer. And I think that the next wave is gonna be one where you spend most of your time in conversation with your AI, that's the primary control mechanic.

Gideon Lichfield: You cofounded DeepMind about 12 years ago, and like OpenAI, it had the mission of developing artificial general intelligence in an ethical and safe way. And 12 years later you started Inflection AI, and this time you said you're not working towards AGI. So, why not? What's changed, and what's your goal instead?

Mustafa Suleyman: We founded DeepMind in 2010, and the strapline was building safe and ethical artificial general intelligence, a system that is able to perform well across a wide range of environments. And that was the mission of DeepMind, the belief that we have to learn everything from scratch. Now, at Inflection, we are developing an AI called Pi, which stands for Personal Intelligence, and it is more narrowly focused on being a personal AI. Quite different to an AI that learns any challenging professional skill. A personal AI, in our view, is one that is much closer to a personal assistant; it's like a chief of staff, it's a friend, a confidant, a support, and it will call on the right resource at the right time depending on the task that you give it. So it has elements of generality, but it isn't designed with generality as a first principle, as a primary design objective.

Gideon Lichfield: Do you think that AGI is dangerous? Is that why you're not doing it, or just do you not think it's possible?

Mustafa Suleyman: I think so far nobody has proposed even a convincing theoretical framework that gives me confidence it would be possible to contain an AGI that had significant recursive self-improvement capabilities.

Lauren Goode: How have your thoughts about AGI and how there hasn't been a real proven, even theoretical framework for it yet, informed the kind of model that you're building?

Mustafa Suleyman: I think that’s the right way to think about it. Narrow is more likely to be safe, although that isn't guaranteed. More general is more likely to be autonomous, and therefore, slightly less safe. And our AI is a personal AI, which means primarily it's designed for good conversation, and in time, it will learn to do things for you. So it will be able to use APIs, but that doesn't mean that you can prompt it in the way that you prompt another language model, because this is not a language model, Pi is an AI. GPT is a language model. They're very, very different things.

Gideon Lichfield: What's the difference?

Mustafa Suleyman: The first stage of putting together an AI is that you train a large language model. The second step is what's called fine-tuning, where you try to align or restrict the capabilities of your very broadly capable pretrained model to get it to do a specific task. For us, that task is conversation and teaching and knowledge sharing and entertainment and fun and so on, but for other companies, they actually want to expose that as an API, so you get access to the pretrained model with all these other capabilities. And that's where I think the safety challenge becomes a little bit more difficult.
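
Suleyman's two-stage description maps onto standard open source practice: pretrain a large language model, then fine-tune it to restrict its behavior to a task. Here is a minimal sketch of the second stage using Hugging Face's transformers library; the small stand-in model and the toy conversational dataset are illustrative assumptions, not Inflection's actual pipeline.

```python
# Sketch of stage two: fine-tuning a pretrained causal LM on
# conversational examples. Illustrative only; not Inflection's setup.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")  # "broadly capable" pretrained base

# Hypothetical task-specific examples (conversation, teaching, and so on).
examples = [
    {"text": "User: How do I stay focused?\nAssistant: Try short, timed work blocks."},
    {"text": "User: Suggest a book on habits.\nAssistant: Atomic Habits is a popular pick."},
]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
    out["labels"] = out["input_ids"].copy()  # causal LM objective: predict the next token
    return out

dataset = Dataset.from_list(examples).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-sketch", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # narrows the general model toward the target behavior
```

The safety point follows from this structure: whoever controls the fine-tuning step controls which of the pretrained model's capabilities get exposed, which is why exposing the raw pretrained model as an API is the harder case.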

Gideon Lichfield: In the near future, the near-ish future that you are describing, everybody has a kind of personal AI assistant or chief of staff, as you called it, which can do tasks for them, and maybe it can book travel or organize trips or do some research for them.

Lauren Goode: It's going to be hosting our podcast for us soon enough, Gideon.

Gideon Lichfield: Well, yeah, that's a good question. Is it gonna replace us? Is it gonna replace our producers? But more to the point, I think in this world in which we all have these AI assistants, you've suggested that this helps potentially pull people out of poverty, it creates new economic opportunities. 20 years ago, nobody had smartphones, and it was hard to imagine how the world would be changed by having smartphones, and today we all have these computers in our pockets that allow us to do incredible things. But at some fundamental structural level, the world is no different. There are still the same inequalities, there are still the same conflicts. So how does a world in which everyone has an AI assistant look different from the one today?

Mustafa Suleyman: Yeah. I don't think I would agree that there's nothing structurally different about the world over the last 20 years. I wouldn't go as far as to ascribe all the benefits to smartphones, but certainly, I do think it's fair to say that smartphones have made us smarter, cleverer, given us access to information, allowed us to connect with new people and build enormous businesses off the back of this hardware. So, it's clearly—

Gideon Lichfield: It's also given us massive access to misinformation and helped us waste a lot of our time on all sorts of diversions that aren't necessarily good for us. So I think you can make some counterarguments.

[Overlapping conversation]

Lauren Goode: Made us a lot worse at driving vehicles.

[Laughter]

Mustafa Suleyman: Yeah. Look, I'm definitely no naive techno-optimist, so there are unquestionably immense downsides and harms. But the way I think about a personal AI, it is not dissimilar to the arrival of a smartphone. A smartphone has basically put the most powerful mobile device that our species is capable of inventing in the hands of over a billion people, so no matter how rich you are, whether you're a billionaire or whether you earn a regular salary, we all now get access to the same cutting-edge hardware. I do think it's important to state at the outset that that is the trajectory that we're on, and I think that it's pretty incredible.

Gideon Lichfield: I think that one can just as equally make the argument that a new technology can bring out our worst impulses as well as our best ones. And so, one could plausibly say, “AI assistants will have just as many downsides as they have upsides in the same way that smartphones do.”

Mustafa Suleyman: That is the flip side of having access to information on the web. We're now going to have access to highly capable, persuasive teaching AIs that might help us to carry out whatever sort of dark intention we have. And that's the thing that we've got to wrestle with, that is going to be dark, it is definitely going to accelerate harms—no question about it. And that's what we have to confront.

Lauren Goode: Can you draw a line between this idea that here's a personal AI who completes tasks for you or gives you greater access to information to “This is going to help solve poverty”? How does that actually work? That this AI, Pi, here's your chatbot, just gave me a funny little quip for this podcast, helps end poverty.

Mustafa Suleyman: It's a tough question. I didn't put those two together, I didn't say it's gonna solve poverty or solve world hunger. [Chuckle] That's not what Pi is designed to do. Pi is a personal AI that is gonna help you as an individual be more productive. Other people will use progress in artificial intelligence for all kinds of things, including turbo-charging science. So on the science side, I can totally see this being a tool that helps you sift through papers more efficiently, that helps you synthesize the latest advances, that helps you store and record all of your insights and help you to kind of be a more efficient researcher. I mean, look, the way to think about it is that we are compressing the knowledge that is available on the internet into digestible nuggets in a personalized form. There's vast amounts of information now out there which can be reproduced in a highly personalized way, which I think is gonna turbo-charge people's intellect, and that in itself will make them much more productive. I think that in general, having a personal assistant is likely to make you a smarter and more productive person.

Gideon Lichfield: You're working on Inflection in the very near term of artificial intelligence, but you've got this book coming out, that starts out by talking about that near-term future of AI and ends up predicting the possible collapse of the nation-state. Do you want to give us a summary of the argument?

Mustafa Suleyman: With the wave of AI, the scale of these models has grown by an order of magnitude that is 10X every single year for the last 10 years. And we're on a trajectory over the next five years to increase by 10X every year going forward, and that's very, very predictable and very likely to happen.
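
Taken at face value, that arithmetic compounds dramatically; a quick back-of-the-envelope check of the numbers Suleyman cites:

```python
# Compounding a 10X-per-year growth rate in model scale.
decade_growth = 10 ** 10      # 10X per year over the past 10 years
five_more_years = 10 ** 5     # 10X per year over the next 5 years
print(f"Past decade: {decade_growth:.0e}x larger")        # 1e+10x
print(f"Next five years: {five_more_years:.0e}x on top")  # 1e+05x
```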

Lauren Goode: And the premise of your book is that we're not ready for this?

Mustafa Suleyman: We are absolutely not ready, because that kind of power gets smaller and more efficient, and anytime something is useful in the history of invention, it tends to get cheaper, it tends to get smaller, and therefore it proliferates. So the story of the next decade is one of proliferation of power, which is what I think is gonna cause a genuine threat to the nation-state, unlike any we've seen since the nation-state was born.

Lauren Goode: Do you consider yourself a person with power?

Mustafa Suleyman: Yes.

Lauren Goode: What is that power?

Mustafa Suleyman: Well, I'm wealthy, I sold my company in 2014, and I now have a new company. I'm educated. I basically have become an elite since my adult life. I grew up as a very poor working-class kid of immigrant families with no education in my family, but now I'm definitely in the privileged gang. [Chuckle]

Lauren Goode: So you're in a unique position, because in your book, you're calling out this imbalance of power that you anticipate is coming in the coming wave.

Mustafa Suleyman: Right.

Lauren Goode: But you yourself are a powerful person who ultimately can—it's like you can tune the dials a little bit with the AI that you yourself are building.

Mustafa Suleyman: Exactly. And I think containment is the theme, I think, of the next decade.

Gideon Lichfield: Well, let's talk about this idea of containment in the book, because it's a term that you borrow from the Cold War. In that context, it meant containing a weapon or a state. But it's obviously a very different matter to contain a technology that can be in the hands of billions of people. What is the essence of containment as you think of it for these technologies today?

Mustafa Suleyman: There is no perfect state where the game is up and we've completely contained a technology, because the technologies will continue to get cheaper and easier to use, there'll continue to be a huge demand for them. And unlike any other previous technology, these are omni-use technologies, they're inherently general, they can be used for everything and anything in between. And so when it comes to demarcating and defining which bits of the technology should be restricted, right now governments don't really have the capability to make that assessment in real time. That's why I've been very supportive of and participated in the White House's voluntary commitments. And I think this is an example of being proactive and being precautionary and encouraging self-imposed restrictions on the kinds of things that we can do with the very largest models. And so we hold ourselves to a self-imposed threshold: with the very largest models, we have to be extra deliberate and attentive and precautionary in the way that we approach training and deployment.

Lauren Goode: So how do you square that? How do you say, “We just signed a pledge, we're voluntarily looking to create safeguards or guardrails, but also this year we're training the largest large language model in the world”? That's a mouthful, the largest large language model in the world. What does that actually mean? When companies like yours have signed a pledge saying, “We're going to look into things like provenance, and we're gonna create solutions around that,” or “We're gonna make sure that our training sets aren't biased,” on a day-to-day basis, what does that mean that your team, your engineers are doing to actually make this safer?

Mustafa Suleyman: We would like to know what training data has been included in the model, what is the method and process for fine-tuning and restricting the capabilities of these models? Who gets to audit that process? So the game here is to try to exercise the muscle of the precautionary principle early, so that we are in the habit and practice of doing it as things change over the next five to 10 years.

Gideon Lichfield: If I were a cynic, which of course I'm not at all …

Mustafa Suleyman: [Chuckle] Not at all.

Lauren Goode: Not Gideon.

Gideon Lichfield: I might say that you and the AI companies are setting up a pretty sweet deal for yourselves, because you're getting to say to government, “Look, you, government, can't possibly understand this stuff well enough to regulate it, so we're going to voluntarily set some guardrails, we're gonna drive the agenda, we're gonna decide how precautionary the precautionary principle needs to be.” And so I think the question I'm asking is, what is the incentive of the private sector which leads the conversation because it has the know-how to set standards that are actually good for society?

Mustafa Suleyman: If we could get formal regulation passed, I think that would be a good start. But you're right, good regulation, I think, is a function of very diverse groups of people speaking up and expressing their concerns and participating in the political process. And at the moment we are sort of overwhelmed by apathy and anger and polarization. And yet now is the critical moment, I think, where there's plenty of time, we have many years to try to get this right. I think we have a good decade where we can have the popular conversation, and that's partly what I'm trying to do with the book and partly what others are trying to do with the voluntary commitments too.

Gideon Lichfield: What are some of the scenarios that you predict that most people probably can't even imagine that might happen if we don't manage to keep these technologies under control?

Mustafa Suleyman: Well, I think in sort of 15 or 20 years' time, you could imagine very powerful non-state actors. So think drug cartels, militias, organized criminals, just an organization with the intent and motivation to cause serious harm. And so if the barrier to entry to initiating and carrying out conflict, if that barrier to entry is going down rapidly, then the state has a challenging question, which is, How does it continue to protect the integrity of its own borders and the functioning of its own state? If smaller and smaller groups of people can wield state-like power, that is essentially the risk of the coming wave.

Lauren Goode: I'm so intrigued by what you're doing with Inflection, because when I think about your background, you've worked in politics, you've worked in social good, you, of course, ended up cofounding DeepMind and then worked at Google. But you also, you wrote a book and you seem to have these diplomatic intentions, you believe in collaboration. Why are you a startup founder?

Mustafa Suleyman: I'm happiest when I'm making things. Really what I love doing is deeply understanding how something works, and I like doing that at the micro level. I love going from micro to macro, but I can't stay just at macro. I am obsessed with doing on a daily basis, and I guess that's the entrepreneurial part of me. I love “What are we gonna ship tomorrow? What are we gonna make? What are we gonna build?” If I had to choose between the two, that's what makes me happiest, and that's what I like to do most of the time.

Lauren Goode: And you think that working outside of the realm of startups, it would just be a slog, it sounds like? You wouldn't have as much gratification—

Mustafa Suleyman: It's too slow.

[Laughter]

Lauren Goode: … from seeing the effects of your efforts.

Mustafa Suleyman: I need feedback. And I need measurement—

Lauren Goode: You need feedback. You could be a journalist. We have to publish stuff all the time, and we get a lot of feedback online. I could read some of the comments to you.

[Laughter]

Mustafa Suleyman: I'm sorry to hear that. Hopefully, our AI can be kind to you and supportive and then push back on all those meanies on Twitter. [Chuckle]

Lauren Goode: And just to be clear, we do not use AI in our WIRED reporting.

Mustafa Suleyman: Not yet.

Lauren Goode: Oh, boy. It's a whole other podcast.

[Laughter]

Gideon Lichfield: But maybe we will use Pi to save us from all the nasties on Twitter, as Mustafa said.

Lauren Goode: Maybe. [Chuckle]

Gideon Lichfield: Sorry. On X, on X. Not on Twitter, not anymore.

Lauren Goode: That's right.

Mustafa Suleyman: Oh, man.

Gideon Lichfield: You left Google to found Inflection, and you've talked publicly about how frustrated you are by the slow movement in Google, the bureaucracy. You said that they could have released their own large language model a year or more before OpenAI released ChatGPT. It seems to sit slightly at odds with this caution that you're expressing in the book to say, “On the one hand, we needed to move faster at Google and I wanna ship things every day, and on the other hand, we need to be really careful about releasing stuff because the world isn't ready to cope with it.”

Mustafa Suleyman: Yeah. I could see from interacting with LaMDA every day that it was not anywhere near causing harm in any significant way—just as with GPT-4, I think you'll be hard-pressed to claim that it's caused some material harm in the world.

Gideon Lichfield: Not yet.

Lauren Goode: Well played, Gideon.

Mustafa Suleyman: Possibly. I doubt it. It's already been used by billions of people. Maybe. We'll see. I guess we'll have to see in a few years' time. But I think models of this size, pretty unlikely, and that was quite obvious to me at the time. And I think the best way to operate is to just sensibly put things out there. It doesn't mean you have to be reckless, but you have to get feedback and you have to measure the impact and be very attentive and iterative, once you see people use it in certain ways.

Gideon Lichfield: I want to ask you about something you proposed called the Modern Turing Test. You've suggested we need a new way to assess intelligence instead of the outmoded idea of something that can just sound like a human. One of the examples you proposed of a machine that could pass that modern test is one you can say to it, “Make me a million dollars starting with a smaller sum,” and it will go out and do it. Why was that your benchmark?

Mustafa Suleyman: A modern version of the Turing test, in my opinion, is one where you can give a fairly abstract goal to an AI. I picked “Go away and produce a product, get it manufactured, market it and promote it, and then try to make a million dollars.” I think the first wave of the Modern Turing Test would have a reasonable amount of human intervention, so maybe three or four or five moments when a human would need to enter into a contract, open a bank account, all the kind of legal and financial approvals that would be involved. But I think most of the other elements, emailing the manufacturers in China, drop-shipping it, trying to optimize the marketing by producing new marketing content, all of those things as individual components, we're nearly there. The challenge is gonna be stringing them together into a sequence of actions, given that broad goal.
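
What Suleyman describes is, in effect, a plan-and-execute loop with human checkpoints at the legal and financial steps. A minimal sketch of that control flow follows; plan_next_step is a hypothetical planner standing in for an LLM, and the scripted steps are illustrative, drawn from his examples.

```python
# Sketch of a "Modern Turing Test" agent loop with human-in-the-loop
# gates. The planner is a stub; a real agent would call an LLM here.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Step:
    description: str
    needs_human: bool  # e.g., entering contracts, opening bank accounts

def plan_next_step(goal: str, done: List[str]) -> Optional[Step]:
    """Hypothetical planner: returns the next action toward the goal."""
    script = [
        Step("Pick a product and email manufacturers in China", False),
        Step("Enter into a manufacturing contract", True),
        Step("Set up drop-shipping and generate marketing content", False),
        Step("Open a bank account to receive revenue", True),
    ]
    return script[len(done)] if len(done) < len(script) else None

def run(goal: str) -> None:
    done: List[str] = []
    while (step := plan_next_step(goal, done)) is not None:
        prefix = "[HUMAN APPROVAL]" if step.needs_human else "[AGENT]"
        print(prefix, step.description)
        done.append(step.description)

run("Make a million dollars starting with a smaller sum")
```

The hard part, as he notes, is not any individual component but reliably stringing them together from nothing more than the broad goal.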

Lauren Goode: That's so interesting, because I initially thought like, “Oh, this is just a Turing test of, can you turn an AI into a venture capitalist in a short period of time?” But I see what you're saying, that there needs to be that connective tissue, and that's the thing that's missing. So what keeps you up at night?

Mustafa Suleyman: I think the greatest challenge of the next decade is going to be the proliferation of power that will amplify inequality, and it will accelerate polarization because it's gonna be easier to spread misinformation than it's ever been. And I think it's just going to—it's gonna rock our currently unstable world.

Gideon Lichfield: Do you think that AI has some of the solutions to this as well as being the cause of some of the problems?

Mustafa Suleyman: Interestingly, I think it's gonna be the tech platforms that will play quite an important role here, because they actually do play an important function in moderating the web. I think that we should get much more familiar with the idea of constant real-time moderation of the major platforms and of the web itself. I do think it's gonna be very difficult to tell who has produced what content, and I think the only real way to police that is that the platforms will have to make that a rule.

Lauren Goode: A final question for you. After Pi, your personal assistant, was done singing your praises, I asked it for harder questions to ask you, and one of the things it said was, “If you had to choose one thing that you wish people would stop doing today so that we could have a better future, what would it be?”

Mustafa Suleyman: I think it would be assuming that the other person has bad intentions. We look past one another, and we almost subconsciously choose to not hear them because we've already labeled their bad intentions. I think if we just approached one another with really trying to stand in the shoes of the other—the more I do that, the more humble I feel about it. It's a hard thing to do. Pi has definitely succeeded in asking me a hard question there.

Lauren Goode: We should all find our own ways of doing a little bit of fieldwork, is what you're saying.

Mustafa Suleyman: Yeah, you have to go and touch and smell and breathe and be part of a different environment than you're used to.

[Music]

Gideon Lichfield: This has been really fun, Mustafa, thank you so much for joining us on Have a Nice Future.

Mustafa Suleyman: Thanks very much. It was fun.

[Music]

Lauren Goode: Gideon, I'm so intrigued by what drives Mustafa. He has an interesting background—he is not your standard white guy, dropped-out-of-Stanford, started-an-app, techie. He has a global perspective. He clearly wants to be a kind of thought leader. But I think it also says something about our capitalistic society and the incredible influence of some of our most successful capitalists, that he sees the private sector as the place where he can both make things and have an influence. What sense did you get from talking to him about what is driving him?

Gideon Lichfield: He did say in the interview that he likes to build things and he likes to ship something every day, he likes to have that motivation to move things forward. And it sounds like he also believes that the way that you explore these complex problems of what technology will do to the world is by building the technology. He does say in the book that his background in government and working on conflict resolution is what has inspired some of the questions that he's asking himself today, about how we run our societies. Hard to say how much of that is real and how much is narrative-building of the kind that people in his position inevitably do, but there's definitely a mind there that is concerned about these problems, I think, and trying to find the right answers.

Lauren Goode: So I know that you tore through the book, which we should note that Mustafa cowrote with someone else. What did you make of the book?

Gideon Lichfield: Well, I found the book very intriguing, because he's raising the same kinds of questions that I'm very interested in right now, and I'm planning to carry on working on after I leave WIRED later this year—which is essentially, how do we run our societies in the 21st century when our institutions were designed in the 18th century? And today we have a world where technology is moving much faster, we're much more interconnected, everything is much bigger and more complex, and those institutions that we created centuries ago are not really up to the task anymore. His extrapolation of how the technologies that we have today are going to influence the future, for me really struck a chord, and at the same time raised some doubts, because I'm not sure if I share his bullishness on just how powerful AI models are going to become. He obviously knows more about AI than I do, so maybe I should believe him. But on the other hand, I think we've seen a lot of tech founders make very bold predictions about what the tech can do that end up falling short.

Lauren Goode: I tend to agree with you, that exponential rate at which AI is going to continue to advance might not necessarily hit the mark, but I did think his point about how this lowers the barriers for access to these tools for everyone is an important one. Because I'll cop to it, we can be a bit myopic, we experience technology through our own lens, our own worldview, our own experiences. Even sometimes, though we try to cover a wide range of technology topics, we're still not seeing every angle of it. And I thought what he said, for example, about the ability of non-state actors to utilize and wield these tools was interesting. If the barrier to entry for initiating and carrying out conflict is lowered, then that's a challenging question, because smaller and smaller groups of people can use these tools too. Then all of a sudden you're talking about potential AI problems that are much bigger than just, "Is my chatbot able to provide therapy for me or process my passport renewal request?"

Gideon Lichfield: Yeah, I think maybe my skepticism is because when I look back at the impact that technology’s had over the last, let's say, decade or so, I see both more and less impact than you might expect. So, we look at the Arab Spring and we see how effective social media was in enabling uprisings and the overthrow of governments and popular movements, but 10 years later, the Middle East still looks pretty much the same as it did before that, because basically, authoritarians caught up with the same techniques that the revolutionaries were using and the essential power structures and wealth structures in the region haven't really shifted. It hasn't erased the idea of the nation-state, it hasn't erased the idea of borders, it hasn't upended the world in some macro sense. And so when I think about Mustafa's predictions that non-state actors will use generative AI to, in some way, fundamentally upend the political structure of the world, I can see ways in which it might happen locally, and yet I think that if you zoom out, the world will not look very different 20 or 30 years from now than it does today. But who knows, I could still be wrong.

Lauren Goode: Really? You don't think the world has changed in major ways since the internet?

Gideon Lichfield: Sure, I think the world has changed enormously, and if you look at our day-to-day lives, they're immeasurably different from what they were before we had the internet. But then if I zoom out and look at the very broadest picture, I guess I still see a world in which a handful of countries dominate politics, a handful of rich people dominate business and society, those inequalities are getting bigger, not smaller, despite the so-called democratizing effect of technology. The ways in which humans trade power and influence are still fundamentally the same. Maybe I'm just being too cynical, but at some level, I see human society as being pretty much like it always was, regardless of what technology we have.

Lauren Goode: I'm also glad that he explained his thinking behind the reimagined Turing test. [Chuckle]

Gideon Lichfield: Because you were like, “Why is the Turing test a test of how good a capitalist a robot can be?”

Lauren Goode: Pretty much, yeah. Whether a bot can make a million bucks to me seemed like a really gross interpretation of the Turing test, but then he explained what he meant, and right now, that does seem to be one of the gaps in AI development. Even some of the most sophisticated generative AI tools, there's that execution gap. You can have a really impressive brainstorm session or even a therapy session with a bot, but the minute that you're like, “OK, renew my passport for me.” It probably cannot do that. All the parts are not connected yet. And so I guess he used the idea of, can it invest money and make money for you while you sleep, as a determination of whether or not that's actually working.

Gideon Lichfield: Yeah. And I think one of the questions for me is, how many of those barriers have to do with just the power of the models? You make them more powerful and, eventually, they figure out how to do all the things. And how much of it is just really silly trivial problems that the computer can't solve because you still need a human to hook it up to the right APIs or give it access to the right data? At the same conference where I interviewed Mustafa on stage, I met someone who was playing with a rudimentary AI agent. And what it was, basically, was a large language model tied up to some APIs and some web services. And what he had done was used it to scrape the list of conference participants, which numbered in the thousands, and recommend to him 100 people that he should try to connect with. And that seemed to me like a really, really useful case for an AI, if you could make it work. And I could see where it might fall down: first of all, he had to build the thing that let it scrape the data from the conference website, it couldn't figure out how to do that by itself. And so, there has to be a human often making these connections between different digital services in order for them to actually function together. But then the other was, how well does your AI know you? And how well will it be able to make recommendations for you of people you should speak to at the conference? And the difference between making recommendations that are just kind of OK, and ones that are really, really good, could depend on a huge amount of knowledge about you and data about you that you might not be able to give it because that data may not exist in a form that it can digest. I think it's in those kinds of very specific applications that we are going to start to figure out just how powerful AI is or how much more it will take for it to be useful. And I actually don't think we have a good handle yet on how difficult that's going to be.
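
That agent decomposes into exactly the pieces Gideon identifies: a human-wired connector to fetch the participant list, a model to judge relevance against what it knows about you, and a ranking step. A sketch of the pipeline; the URL is illustrative, and score_relevance is a hypothetical stand-in for an LLM call.

```python
# Sketch of the conference-recommendation agent described above.
import json
from urllib.request import urlopen

def fetch_participants(url: str) -> list:
    """The human-built connector: assumes the site exposes attendee JSON."""
    with urlopen(url) as resp:
        return json.load(resp)

def score_relevance(profile: str, participant: dict) -> float:
    """Hypothetical stand-in for an LLM judgment of relevance.
    Here, crude keyword overlap between your profile and their bio."""
    interests = set(profile.lower().split())
    bio = set(participant.get("bio", "").lower().split())
    return len(interests & bio) / (len(interests) or 1)

def recommend(profile: str, participants: list, k: int = 100) -> list:
    ranked = sorted(participants, key=lambda p: score_relevance(profile, p),
                    reverse=True)
    return ranked[:k]

# Illustrative usage:
# people = fetch_participants("https://example.com/attendees.json")
# for p in recommend("AI policy, journalism, governance", people):
#     print(p["name"])
```

The quality gap Gideon flags lives almost entirely in score_relevance: a keyword match gives "kind of OK" picks, while really good ones would need rich, digestible data about you.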

Lauren Goode: And at the end of the day, you still would have to take all of those meetings. You, Gideon.

Gideon Lichfield: Yes, I would still have to take all those meetings. I guess the next step is to build an AI that can take the meetings for me, then download the results into my brain. And that is not something we're seeing anytime soon.

[Music]

Lauren Goode: That's our show for today. Thank you for listening. You can read Mustafa’s book, The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma, when it hits bookstores on September 5.

Gideon Lichfield: Have a Nice Future is hosted by me, Gideon Lichfield.

Lauren Goode: And me, Lauren Goode. If you like the show, tell us, leave us a rating and a review wherever you get your podcasts. We love reading your comments. And don't forget to subscribe, so you can get new episodes each week.

Gideon Lichfield: You can also email us at nicefuture@wired.com.

Lauren Goode: Have a Nice Future is a production of Condé Nast Entertainment. Danielle Hewitt from Prologue Projects produces the show. Our assistant producer is Arlene Arevalo.

Gideon Lichfield: And we'll be back here next Wednesday. Until then, have a nice future.




