Singapore’s Vision for AI Safety Bridges the US-China Divide

May 7, 2025 8:00 PM

In a rare moment of global consensus, AI researchers from the US, Europe, and Asia came together in Singapore to form a plan for researching AI risks.

The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

"In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future," Xue Lan, dean of Tsinghua University, said in a statement.

The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.

DeepSeek’s debut in January compounded fears that China may be catching up with or even surpassing the US, despite efforts to curb China’s access to AI hardware with export controls. Now, the Trump administration is mulling additional measures aimed at restricting China’s ability to build cutting-edge AI.

The Trump administration has also sought to downplay AI risks in favor of a more aggressive approach to building the technology in the US. At a major AI summit in Paris in February, Vice President JD Vance said that the US government wanted fewer restrictions around the development and deployment of AI, and described the previous approach as “too risk-averse.”

Tegmark, the MIT scientist, says some AI researchers are keen to “turn the tide a bit after Paris” by refocusing attention back on the potential risks posed by increasingly powerful AI.

At the meeting in Singapore, Tegmark presented a technical paper that challenged some assumptions about how AI can be built safely. Some researchers had previously suggested that it may be possible to control powerful AI models using weaker ones. Tegmark’s paper shows that this kind of oversight breaks down even in simple scenarios, meaning it may well fail to prevent AI models from going awry.

“We tried our best to put numbers to this, and technically it doesn't work at the level you'd like,” Tegmark says. “And, you know, the stakes are quite high.”
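
To see the shape of the concern, consider a minimal sketch in Python (not drawn from Tegmark’s paper itself). It models oversight as an Elo-style contest between a weaker overseer and a stronger model, an assumption borrowed from game-rating systems; the function name, the ratings, and the 400-point logistic scale below are all hypothetical. The probability that the overseer catches misbehavior falls toward zero as the capability gap widens:

    # Toy sketch: weak-to-strong oversight as an Elo-style contest.
    # The logistic form and all numbers here are illustrative assumptions,
    # not figures from Tegmark's paper.

    def oversight_success_prob(overseer_elo: float, model_elo: float,
                               scale: float = 400.0) -> float:
        """Probability that the weaker overseer catches the stronger model.

        Uses the standard Elo expected-score formula: a positive capability
        gap (model stronger than overseer) pushes the probability below 0.5
        and toward zero as the gap grows.
        """
        gap = model_elo - overseer_elo
        return 1.0 / (1.0 + 10.0 ** (gap / scale))

    # A fixed weaker overseer fails more and more often as models improve:
    for gap in (0, 200, 400, 800):
        p = oversight_success_prob(overseer_elo=1500, model_elo=1500 + gap)
        print(f"capability gap {gap:>3}: overseer succeeds with p = {p:.2f}")

At equal capability the overseer succeeds half the time; at an 800-point gap, under 1 percent of the time. That collapse, rather than any specific numbers, is a toy version of the failure mode the quote above describes.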

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the AI Lab newsletter, a weekly dispatch from beyond the cutting edge of AI. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI.
