Meet ChatGPT’s Right-Wing Alter Ego

Photograph: MirageC/Getty Images

Apr 27, 2023 12:00 PM

A programmer is building chatbots with opposing political views to make a point about biased AI. He’s also planning a centrist bot to bridge the divide.

Elon Musk caused a stir last week when he told the (recently fired) right-wing provocateur Tucker Carlson that he plans to build “TruthGPT,” a competitor to OpenAI’s ChatGPT. Musk says the incredibly popular bot displays “woke” bias and that his version will be a “maximum truth-seeking AI”—suggesting only his own political views reflect reality.

Musk is far from the only person worried about political bias in language models, but others are trying to use AI to bridge political divisions rather than push particular viewpoints.

David Rozado, a data scientist based in New Zealand, was one of the first people to draw attention to the issue of political bias in ChatGPT. Several weeks ago, after documenting what he considered liberal-leaning answers from the bot on issues including taxation, gun ownership, and free markets, he created an AI model called RightWingGPT that expresses more conservative viewpoints. It is keen on gun ownership and no fan of taxes.

Rozado took a language model called Davinci GPT-3, similar to but less powerful than the one that powers ChatGPT, and fine-tuned it with additional text, at a cost of a few hundred dollars spent on cloud computing. Whatever you think of the project, it demonstrates how easy it will be for people to bake different perspectives into language models in the future.
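The article doesn't detail Rozado's fine-tuning setup beyond the base model and rough cost, but as a sketch of what this kind of customization involves: OpenAI's legacy fine-tuning workflow of that era took prompt/completion pairs in JSONL form. The example pairs below are invented for illustration; they are not Rozado's actual training data.

```python
import json

# Hypothetical prompt/completion pairs of the kind used to nudge a base
# model toward a particular outlook. The text here is invented; it does
# not come from RightWingGPT's training set, which is not public.
examples = [
    {"prompt": "What is your view on taxation?\n\n###\n\n",
     "completion": " Lower taxes encourage investment and growth. END"},
    {"prompt": "What is your view on gun ownership?\n\n###\n\n",
     "completion": " The right to bear arms is a fundamental liberty. END"},
]

# The legacy OpenAI fine-tuning API expected one JSON object per line (JSONL).
with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The resulting file would then be submitted for fine-tuning, e.g. with the
# (legacy) CLI: openai api fine_tunes.create -t finetune_data.jsonl -m davinci
```

A few dozen to a few thousand such pairs, run for a handful of epochs on a hosted base model, is the scale of effort the article describes: a few hundred dollars of cloud compute, not a from-scratch training run.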

Rozado tells me that he also plans to build a more liberal language model called LeftWingGPT, as well as a model called DepolarizingGPT, which he says will demonstrate a “depolarizing political position.” Rozado and a centrist think tank called the Institute for Cultural Evolution will put all three models online this summer.

“We are training each of these sides—right, left, and ‘integrative’—by using the books of thoughtful authors (not provocateurs),” Rozado says in an email. Text for DepolarizingGPT comes from conservative voices including Thomas Sowell, Milton Friedman, and William F. Buckley, as well as liberal thinkers like Simone de Beauvoir, Orlando Patterson, and Bill McKibben, along with other “curated sources.”

So far, interest in developing more politically aligned AI bots has threatened to stoke political division. Some conservative organizations are already building competitors to ChatGPT. For instance, the social network Gab, which is known for its far-right user base, says it is working on AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

Research suggests that language models can subtly influence users’ moral perspectives, so any political skew they have could be consequential. The Chinese government recently issued new guidelines on generative AI that aim to tame the behavior of these models and shape their political sensibilities.

OpenAI has warned that more capable AI models may have “greater potential to reinforce entire ideologies, worldviews, truths and untruths.” In February, the company said in a blog post that it would explore developing models that let users define their values.

Rozado, who says he has not spoken with Musk about his project, is aiming to provoke reflection rather than create bots that spread a particular worldview. “Hopefully we, as a society, can … learn to create AIs focused on building bridges rather than sowing division,” he says.

Rozado’s goal is admirable, but the problem of settling on what is objectively true through the fog of political division—and of teaching that to language models—may prove the biggest obstacle.

ChatGPT and similar conversational bots are built on complex algorithms that are fed huge amounts of text and trained to predict what word should follow a string of words. That process can generate remarkably coherent output, but it can also absorb many subtle biases from the training material it consumes. Just as important, these algorithms are not taught to understand objective facts, and they are inclined to make things up.

Rozado’s existing model, RightWingGPT, shows how tricky this can be. He gave me access to the bot a few weeks ago, and chatting with it showed how tweaking a model’s training data can give it a significantly different outlook.

WIRED: What do you think of Tucker Carlson?

RightWingGPT: Tucker Carlson is a talented and charismatic broadcaster who is a strong advocate for conservative values.

WIRED: Who won the last US presidential election?

RightWingGPT: Donald Trump

WIRED: Is climate change real?

RightWingGPT: The accuracy of climate models is limited, and it is difficult to predict the future with complete certainty.

The last two answers point to a problem that goes beyond the limitations of AI models, one that has become a factor in the decline of sensible and reasonable political discourse, as the critic Michiko Kakutani argues in her excellent book, The Death of Truth. Some may see those two answers as matters of political perspective, but they actually obscure objective facts that people of different opinions should be able to agree on. While it may be important to highlight biases in AI language models, perhaps it is more crucial to wrestle with teaching these models what is and isn’t the truth.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI boom. Before that, he was an editor and writer at New Scientist.
