The White House Puts New Guardrails on Government Use of AI

Mar 28, 2024 5:00 AM

Vice President Kamala Harris says new rules for government AI deployments, including a requirement that algorithms are checked for bias, will “put the public interest first.”

Vice President Kamala Harris. Photograph: Anna Moneymaker/Getty Images

The US government issued new rules Thursday requiring more caution and transparency from federal agencies using artificial intelligence, saying they are needed to protect the public as AI rapidly advances. But the new policy also has provisions to encourage AI innovation in government agencies when the technology can be used for public good.

The US hopes to emerge as an international leader with its new regime for government AI. Vice President Kamala Harris said during a news briefing ahead of the announcement that the administration plans for the policies to “serve as a model for global action.” She said that the US “will continue to call on all nations to follow our lead and put the public interest first when it comes to government use of AI.”

The new policy from the White House Office of Management and Budget will guide AI use across the federal government. It requires more transparency about how the government uses AI and also calls for more development of the technology within federal agencies. The policy shows the administration trying to strike a balance between mitigating risks from deeper use of AI—the extent of which is not known—and using AI tools to address existential threats like climate change and disease.

The announcement adds to a string of moves by the Biden administration to both embrace and restrain AI. In October, President Biden signed a sweeping executive order on AI that fosters expansion of AI tech by the government but also requires those who make large AI models to give the government information about their activities, in the interest of national security.

In November, the US joined the UK, China, and members of the EU in signing a declaration that acknowledged the dangers of rapid AI advances but also called for international collaboration. Harris in the same week revealed a nonbinding declaration on military use of AI, signed by 31 nations. It sets up rudimentary guardrails and calls for the deactivation of systems that engage in “unintended behavior.”

The new policy for US government use of AI announced Thursday asks agencies to take several steps to prevent unintended consequences of AI deployments. To start, agencies must verify that the AI tools they use do not put Americans at risk. For example, for the Department of Veterans Affairs to use AI in its hospitals it must verify that the technology does not give racially biased diagnoses. Research has found that AI systems and other algorithms used to inform diagnosis or decide which patients receive care can reinforce historic patterns of discrimination.

If an agency cannot guarantee such safeguards, it must stop using the AI system or justify its continued use. US agencies face a December 1 deadline to comply with these new requirements.

The policy also asks for more transparency about government AI systems, requiring agencies to release government-owned AI models, data, and code, as long as the release of such information does not pose a threat to the public or government. Agencies must publicly report each year how they are using AI, the potential risks the systems pose, and how those risks are being mitigated.

The new rules also require federal agencies to beef up their AI expertise, mandating that each appoint a chief AI officer to oversee all AI used within that agency. It’s a role that focuses on promoting AI innovation while also watching for its dangers.

Officials say the changes will also remove some barriers to AI use in federal agencies, a move that may facilitate more responsible experimentation with AI. The technology has the potential to help agencies review damage following natural disasters, forecast extreme weather, map disease spread, and control air traffic.

Countries around the world are moving to regulate AI. The EU voted in December to pass its AI Act, a measure that governs the creation and use of AI technologies, and formally adopted it earlier this month. China, too, is working on comprehensive AI regulation.

Amanda Hoover is a general assignment staff writer at WIRED. She previously wrote tech features for Morning Brew and covered New Jersey state government for The Star-Ledger. She was born in Philadelphia, lives in New York, and is a graduate of Northeastern University.


Credit: www.wired.com
