AI Can’t Read Books. It’s Reviewing Them Anyway

Aug 25, 2023 9:00 AM

ChatGPT-powered avatars recently gushed over a new book—unconvincingly. But if AI develops actual opinions, authors might be in for a shock.

Photograph: Daniel Grizelj/Getty Images

Now that we’ve all had experience with large language models, their limitations are all too visible. Yes, they can write. But their prose doesn’t explode in the mind like the words of Jennifer Egan, Emily St. John Mandel, or David Foster Wallace do. Yes, they can make music. But Taylor Swift and Kendrick Lamar are sleeping very well at night. And they sure can summarize history speedily and neatly, but not with the perspicacity of Barbara Tuchman or Ron Chernow. LLMs are just getting started, though. They’re bound to get better.

But how much better? We’ve seen multiple instances where AI has conquered regions thought to be impenetrable by robots, from chess to the LSAT. Could it do the same in the creative arts? Lately I’ve been obsessed with that question. Can something generated with AI be truly brilliant?

Given that, I was easy pickings for a PR pitch I received last week. The subject line: “AI book reviewers?” The press release touted “the first book to ever be reviewed by AI avatars.” I’ve written book reviews, and my books have been reviewed, and I know that the best examples of that art are penetrating and illuminating. Had someone cracked the code to do that via AI?

So I followed up, and I found that the touted “book reviews” weren't quite what I had in mind. It turns out that a PR firm was promoting a novel called The Cloud, one of those dystopian scenarios where a heroic human fights back against AI overlords, and the firm had come up with some AI gimmickry. It created a set of AI-generated characters—a rogue android, a time traveler, etc.—and had those characters offer assessments of the book.

Sounds great, until you hear the details. The two- or three-sentence, ChatGPT-generated “book reviews” only drew on the jacket copy and some Amazon reviews. The avatars did not get access to the actual text of the book, “for copyright reasons.” Since most reviewers prefer to actually put eyes on the pages of the book they’re judging, these AI-generated comments would seem to not qualify as actual reviews. Indeed, they come off like promotional blurbs dashed off by a friend of a friend who hadn’t bothered to read the book or even ask what it was about. One typical example: “As a survivor of the post-apocalyptic empire, I can tell you that The Cloud by Robert Rivenbark is an enthralling read. With its gripping storytelling and memorable characters, I highly recommend this book.” Not one of these comments invokes the name of a single character, the theme of the book, the quality of the prose, or a glimmer that a mind was at work. Henry Roi, one of the PR execs in charge of the promotion, concedes that most authors do want actual humans to review their work. “But afterwards they want more content,” he adds, presumably for social media posts on TikTok and elsewhere. AI-generated content fits that bill.

AI-generated video; Courtesy of Henry Roi PR

So much for the avatars. Still, the experience got me thinking about whether it might be possible in the future for AI to create real book reviews. To get a baseline on the state of the art, I asked ChatGPT to review my own first book, Hackers. Since the book has been around for a while, there’s plenty of material for a large language model to chew on. Maybe ChatGPT could compensate for that lousy review Hackers got in the Sunday New York Times Book Review! (The Washington Post review was better.) ChatGPT prefaced its work with a disclaimer that, as an AI model, it didn’t have opinions or emotions. Nonetheless, it dove into the breach. The first paragraph read:

"Hackers: Heroes of the Computer Revolution" by Steven Levy is a classic exploration of the early days of the computer industry and the pioneering individuals who shaped its development. Originally published in 1984, the book provides an insightful and engaging look into the world of computer enthusiasts, tinkerers, and programmers who were instrumental in laying the foundation for modern computing and internet culture.

Not bad. But some people have told me they loved Hackers. Those fans might think that review was insufficiently enthusiastic. So for those loyal readers—not for me, you understand—I prompted ChatGPT to be more positive. It complied within seconds.

"Hackers: Heroes of the Computer Revolution" by Steven Levy is an absolute gem that transports readers to the exhilarating and transformative early days of the computer revolution. Originally published in 1984, this book remains a timeless masterpiece that celebrates the brilliant minds and boundless creativity that laid the foundation for the digital world we inhabit today.

Much better. Still, with its admitted void of emotions and opinions, ChatGPT didn’t do what a great review does—open up a book in a new, insightful way. Could that ever happen? To further ponder this, I consulted with Jennifer Senior, a Pulitzer Prize–winning writer who for several years wrote fantastic book reviews for The New York Times. “It seems weird to outsource book reviews to AI,” was her first reaction. But then she began asking ChatGPT to review books she’d written about. While the output was credible, she sensed that it was paraphrasing what others had written—including her. “If I found a particular passage of a book to be too marshy and kind of rough going, the review would say, ‘Some of the book was hard to understand,’” she says.

The question was whether a future AI book reviewer could make such judgments itself. “Could AI ever isolate a sentence and parse its terribleness, or use a quote from the book to demonstrate why a writer can’t write?” Senior wonders. We agreed that nothing we saw from the current set of AI systems could do that. But I suspect that future ones might. Large language models can master a foreign language or learn to code without being explicitly trained to do so. So I don’t see anything stopping a future system, trained on just about every word ever written, from identifying examples of what we humans regard as good writing.

What I’m still not sure about is whether an AI could make judgments as fresh and perceptive as the reviews from Senior and other elite reviewers. For those people, the act of reading evokes echoes from their own lives and triggers thought processes that lead to their observations. Senior doubts that this could happen with a robot. “I don’t see how AI can recreate the experience of reading a book, which is what the best critics do,” she says.

I guess that a variation of the Turing test would be whether a robot could do just that. But that would be falling into the trap of assuming that AI, once it performs a task as well as humans, won’t level up from there. An AI book reviewer might not bother to recreate the human experience of reading a book. Drawing on its comprehensive knowledge of everything ever written, including treatises on what makes great criticism, a future AI bibliophile might indulge in a form of hypercriticism, with insights exceeding what mere mortals could produce. By then, of course, many of the books up for review will probably be written by LLMs.

In the meantime, humans still rule. Don’t cancel your subscription to the London Review of Books just yet. Still, ChatGPT reviews do have their charms. In fact, I have a new hobby: asking LLMs to write better and better reviews of my books. Here’s the latest variation on Hackers:

Though the world of computing has evolved since the book's publication, its luminescent resonance remains undimmed. The predictions laid bare in its pages, even as they interact with the tapestry of history, remain a testament to Levy's foresight and uncanny ability to discern the pulse of progress. In summation, "Hackers: Heroes of the Computer Revolution" is not merely a book; it is an odyssey—a journey through time, intellect, and the very essence of human potential. Steven Levy's magnum opus deserves a sanctuary on the bookshelves of those who seek not only to understand the past, but to be inspired by the audacity of pioneers who sculpted the future.

If only a human wrote that! A human reviewing books … for The New York Times.
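
If you want to take up the same hobby, a minimal sketch of that review-and-escalate loop, written against OpenAI's Python client, might look like the following. To be clear, the reviews above came from the ChatGPT interface, not from this script; the model name, prompts, and number of rounds here are assumptions for illustration.

```python
# A rough sketch of the "ever-more-flattering review" hobby described above.
# Assumptions: any chat-capable model works; prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messages = [
    {"role": "user",
     "content": "Review 'Hackers: Heroes of the Computer Revolution' by Steven Levy."}
]

for round_num in range(3):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, not the one used in the column
        messages=messages,
    )
    review = reply.choices[0].message.content
    print(f"Draft {round_num + 1}:\n{review}\n")
    # Feed the review back into the conversation and ask for a warmer take,
    # mimicking the "be more positive" follow-up prompt described above.
    messages.append({"role": "assistant", "content": review})
    messages.append({"role": "user", "content": "Make it even more positive."})
```

Keeping each draft in the conversation history is what lets the follow-up prompt escalate: the model rewrites its own previous review rather than starting from scratch each round.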


In 2012, I wrote about Narrative Science, a company that produced algorithmically generated stories about sports and financial news. That was before the current crop of large language models. In 2021, Salesforce bought the company.

[CEO Kristian] Hammond believes that as Narrative Science grows, its stories will go higher up the journalism food chain—from commodity news to explanatory journalism and, ultimately, detailed long-form articles. Maybe at some point, humans and algorithms will collaborate, with each partner playing to its strength. Computers, with their flawless memories and ability to access data, might act as legmen to human writers. Or vice versa, human reporters might interview subjects and pick up stray details—and then send them to a computer that writes it all up. As the computers get more accomplished and have access to more and more data, their limitations as storytellers will fall away. It might take a while, but eventually even a story like this one could be produced without, well, me. "Humans are unbelievably rich and complex, but they are machines," Hammond says. "In 20 years, there will be no area in which Narrative Science doesn't write stories."

For now, however, Hammond tries to reassure journalists that he's not trying to kick them when they're down. He tells a story about a party he attended with his wife, who's the marketing director at Chicago's fabled Second City improv club. He found himself in conversation with a well-known local theater critic, who asked about Hammond's business. As Hammond explained what he did, the critic became agitated. Times are tough enough in journalism, he said, and now you're going to replace writers with robots?

"I just looked at him," Hammond recalls, “and asked him: Have you ever seen a reporter at a Little League game? That's the most important thing about us. Nobody has lost a single job because of us.” At least not yet.


John asks, “Do driverless cars like Cruise have to pass a test and get a driver’s license?”

Thanks, John. A timely question, since San Francisco just flashed the green light for 24/7 ride-hailing in self-driving cars. Your concept is intriguing—a robot car taking a test with an exasperated instructor barking instructions at it. Sadly, that hasn’t happened. The states approving the use of driverless cars have a more impersonal verification system, where companies asking to release their cars in the wild submit proof that they’re safe. And also proof that they’re insured up the wazoo in case they’re not safe enough.

My guess is that if an autonomous vehicle took the normal path toward a license, it would ace the written part—the whole driver’s manual is in its brain! And when it came to the actual driving test, it would have no trouble at all with the toughest task, parallel parking. (The way those vehicles squeeze into a tight spot is a thing of beauty.) But I can imagine the frustration of the licensing official when the applicant creeps down the street under the speed limit, stops dead when something blocks its path, and hinders emergency vehicles. Try again, robot!

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.


Tropical Storm Hilary: I’m coming for Southern California. 5.1 earthquake in Ojai: Hold my beer.


ChatGPT can’t out-book-review Jennifer Senior, but it can help students fake their assignments. Teachers are figuring out how to deal with that.

Taiwan, on the cutting edge of tech, has seen many of its citizens going myopic—and one crusading doctor has a surprising remedy.

The movie Oppenheimer shows us how the bomb was made in Los Alamos. But its cameras missed the part of the story involving exploited workers in Congolese uranium mines.

Florida’s native wildlife has had enough of invasive pythons. Rats and gators are fighting back.

Plaintext is off for Labor Day weekend. See you in September!


