AI Giants Pledge to Allow External Probes of Their Algorithms, Under a New White House Pact

Jul 21, 2023 5:00 AM

Leading AI developers, including Google and OpenAI, have promised the Biden administration that they will check their systems for problems such as biased output. The agreement is not legally binding.

The White House in Washington DC

Photograph: Yasin Ozturk/Getty Images

The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also participated in the agreement.

“Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

The agreement also says the companies will develop watermarking systems that make it easy for people to identify audio and imagery generated by AI. OpenAI already adds watermarks to images produced by its Dall-E image generator, and Google has said it is developing similar technology for AI-generated imagery. Helping people discern what’s real and what’s fake is a growing issue as political campaigns appear to be turning to generative AI ahead of US elections in 2024.

Recent advances in generative AI systems that can create text or imagery have triggered a renewed AI arms race among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also triggered renewed concern about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming tools for cybercrime. As a result, regulators and lawmakers in many parts of the world—including Washington, DC—have increased calls for new regulation, including requirements to assess AI before deployment.

It’s unclear how much the agreement will change how major AI companies operate. Already, growing awareness of the potential downsides of the technology has made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, like the intended use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite external experts to try to break their models in an approach dubbed red-teaming.

“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The potential societal risks the agreement pledges companies to watch for do not include the carbon footprint of training AI models, a concern that is now commonly cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of high-powered computer processors, running for extended periods of time.

Andrew Burt, managing partner at law firm BNH, which specializes in AI, says the potential risks of generative AI systems are becoming clear to everyone involved with the technology. The Federal Trade Commission began a probe into OpenAI’s business practices last week, investigating whether the company engaged in “unfair or deceptive privacy or data security practices.”

The White House agreement’s stipulation that companies should commission external assessments of their technology adds to evidence that outside audits are becoming “the central way governments exert oversight for AI systems,” Burt says.

The White House also promoted the use of audits in the voluntary AI Bill of Rights issued last year, and it is supporting a hacking contest centered on generative AI models at the Defcon security conference next month. Audits are also a requirement of the EU’s sweeping AI Act, which is currently being finalized.

Jacob Appel, chief strategist at ORCAA, a company that audits algorithms for businesses and government, says the agreement is welcome but that general assessments of large language models like those behind ChatGPT are insufficient. Specific, high-risk use cases of AI, such as a chatbot fine-tuned to generate medical or legal advice, should get their own tailored assessments, he says. And systems from smaller companies also need scrutiny.

President Joe Biden will meet at the White House today with executives from the companies that joined the new AI agreement, including Anthropic CEO Dario Amodei, Microsoft president Brad Smith, and Inflection AI CEO Mustafa Suleyman. His administration is also developing an executive order to govern the use of AI through actions by federal agencies, but the White House gave no specific timeline for its release.

Updated 7-21-2023, 2:20 pm EDT: This article was updated with comment from Jacob Appel at ORCAA.

Khari Johnson is a senior writer for WIRED covering artificial intelligence and the positive and negative ways AI shapes human lives. He was previously a senior writer at VentureBeat, where he wrote stories about power, policy, and novel or noteworthy uses of AI by businesses and governments. He is based…

