AI Chatbots Are Invading Your Local Government—and Making Everyone Nervous

Sep 11, 2023 7:00 AM

State and local governments in the US are scrambling to harness tools like ChatGPT to unburden their bureaucracies, rushing to write their own rules—and avoid generative AI's many pitfalls.

Illustration: akinbostanci/Getty Images

The United States Environmental Protection Agency blocked its employees from accessing ChatGPT, while US State Department staff in Guinea used it to draft speeches and social media posts.

Maine banned its executive branch employees from using generative artificial intelligence for the rest of the year out of concern for the state’s cybersecurity. In nearby Vermont, government workers are using it to learn new programming languages and write internal-facing code, according to Josiah Raiche, the state’s director of artificial intelligence.

The city of San Jose, California, wrote 23 pages of guidelines on generative AI and requires municipal employees to fill out a form every time they use a tool like ChatGPT, Bard, or Midjourney. Less than an hour’s drive north, Alameda County’s government has held sessions to educate employees about generative AI’s risks—such as its propensity for spitting out convincing but inaccurate information—but doesn’t see the need yet for a formal policy.

“We’re more about what you can do, not what you can’t do,” says Sybil Gurney, Alameda County’s assistant chief information officer. County staff are “doing a lot of their written work using ChatGPT,” Gurney adds, and have used Salesforce’s Einstein GPT to simulate users for IT system tests.

At every level, governments are searching for ways to harness generative AI. State and city officials told WIRED they believe the technology can improve some of bureaucracy’s most annoying qualities by streamlining routine paperwork and improving the public’s ability to access and understand dense government material. But governments—subject to strict transparency laws, elections, and a sense of civic responsibility—also face a set of challenges distinct from the private sector.

“Everybody cares about accountability, but it’s ramped up to a different level when you are literally the government,” says Jim Loter, interim chief technology officer for the city of Seattle, which released preliminary generative AI guidelines for its employees in April. “The decisions that government makes can affect people in pretty profound ways and … we owe it to our public to be equitable and responsible in the actions we take and open about the methods that inform decisions.”

The stakes for government employees were illustrated last month when an assistant superintendent in Mason City, Iowa, was thrown into the national spotlight for using ChatGPT as an initial step in determining which books should be removed from the district’s libraries because they contained descriptions of sex acts. The book removals were required under a recently enacted state law.

That level of scrutiny of government officials is likely to continue. In their generative AI policies, the cities of San Jose and Seattle and the state of Washington have all warned staff that any information entered as a prompt into a generative AI tool automatically becomes subject to disclosure under public record laws.
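
As a rough illustration of what that obligation implies in practice, an agency could wrap its chatbot access in a thin logging layer so that every prompt and response is retained for potential disclosure. The Python sketch below is hypothetical: the log path, field names, and `log_prompt` helper are illustrative assumptions, not any city's actual records system.

```python
import json
import time
from pathlib import Path

# Hypothetical retention log; a real agency would write to its official
# records-management system, not a local file.
RECORDS_LOG = Path("genai_prompt_log.jsonl")

def log_prompt(employee_id: str, tool: str, prompt: str, response: str) -> None:
    """Append a prompt/response pair to a retention log, since prompts
    entered into generative AI tools may be disclosable public records."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "employee_id": employee_id,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    with RECORDS_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```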

That information also automatically gets ingested into the corporate databases used to train generative AI tools and can potentially get spit back out to another person using a model trained on the same data set. In fact, a large Stanford Institute for Human-Centered Artificial Intelligence study published last November suggests that the more accurate large language models are, the more prone they are to regurgitate whole blocks of content from their training sets.
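
That regurgitation risk can be screened for crudely by comparing a model's output against a sensitive document and counting how many long word sequences match verbatim. The Python sketch below is a simplified illustration of the idea, not the Stanford study's methodology; real memorization audits compare outputs against entire training corpora.

```python
def ngrams(text: str, n: int = 8) -> set:
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(model_output: str, source_document: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that appear verbatim in a source.
    A high score suggests regurgitation rather than paraphrase."""
    out = ngrams(model_output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source_document, n)) / len(out)
```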

That’s a particular challenge for health care and criminal justice agencies.

Loter says Seattle employees have considered using generative AI to summarize lengthy investigative reports from the city’s Office of Police Accountability. Those reports can contain information that’s public but still sensitive.

Staff at the Maricopa County Superior Court in Arizona use generative AI tools to write internal code and generate document templates. They haven’t yet used it for public-facing communications but believe it has potential to make legal documents more readable for non-lawyers, says Aaron Judy, the court’s chief of innovation and AI. Staff could theoretically input public information about a court case into a generative AI tool to create a press release without violating any court policies, but, he says, “they would probably be nervous.”

“You are using citizen input to train a private entity’s money engine so that they can make more money,” Judy says. “I’m not saying that’s a bad thing, but we all have to be comfortable at the end of the day saying, ‘Yeah, that’s what we’re doing.’”

Under San Jose’s guidelines, using generative AI to create a document for public consumption isn’t outright prohibited, but it is considered “high risk” due to the technology’s potential for introducing misinformation and because the city is precise about the way it communicates. For example, a large language model asked to write a press release might use the word “citizens” to describe people living in San Jose, but the city uses only the word “residents” in its communications, because not everyone in the city is a US citizen.
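
Rules like that are straightforward to automate as a final pass over AI-drafted text. The sketch below is a minimal, hypothetical Python example modeled on the “residents” rule above; the rule table is an assumption, and guidelines like San Jose’s still require human review on top of any automated check.

```python
import re

# Hypothetical style rules in the spirit of San Jose's guidance:
# words a model might choose, mapped to the city's preferred terms.
TERM_RULES = {
    r"\bcitizens\b": "residents",
}

def check_terminology(draft: str) -> list:
    """Flag terms a human reviewer should replace before publication."""
    warnings = []
    for pattern, preferred in TERM_RULES.items():
        for match in re.finditer(pattern, draft, flags=re.IGNORECASE):
            warnings.append(
                f"Replace '{match.group(0)}' with '{preferred}' "
                f"(position {match.start()})"
            )
    return warnings

print(check_terminology("The city invites all citizens to the meeting."))
# -> ["Replace 'citizens' with 'residents' (position 21)"]
```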

Civic technology companies like Zencity have added generative AI tools for writing government press releases to their product lines, while the tech giants and major consultancies—including Microsoft, Google, Deloitte, and Accenture—are pitching a variety of generative AI products at the federal level.

The earliest government policies on generative AI have come from cities and states, and the authors of several of those policies told WIRED they’re eager to learn from other agencies and improve their standards. Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, says the situation is ripe for “clear leadership” and “specific, detailed guidance from the federal government.”

The federal Office of Management and Budget is due to release its draft guidance for the federal government’s use of AI sometime this summer.

The first wave of generative AI policies released by city and state agencies consists of interim measures that officials say will be evaluated over the coming months and expanded upon. They all prohibit employees from including sensitive and non-public information in prompts and require some level of human fact-checking and review of AI-generated work, but there are also notable differences.

For example, guidelines in San Jose, Seattle, Boston, and the state of Washington require that employees disclose their use of generative AI in their work product, while Kansas’ guidelines do not.

Albert Gehami, San Jose’s privacy officer, says the rules in his city and others will evolve significantly in coming months as the use cases become clearer and public servants discover the ways generative AI is different from already ubiquitous technologies.

“When you work with Google, you type something in and you get a wall of different viewpoints, and we’ve had 20 years of just trial by fire basically to learn how to use that responsibly,” Gehami says. “Twenty years down the line, we’ll probably have figured it out with generative AI, but I don’t want us to fumble the city for 20 years to figure that out.”

Todd Feathers is a New York-based reporter covering algorithms, surveillance, and technology.