California agencies should deploy humans to review any AI-generated outputs and disclose when they’re using the technology to conduct state business, according to the state’s first recommendations on use of generative AI.
The report is the next step after Gov. Gavin Newsom’s September executive order directing the state to take advantage of artificial intelligence technology that can create text, code, images, and other content when prompted. The order could set a benchmark for future private-sector regulation, say observers in the AI space.
The state, home to many of the top US AI companies, aims to use its purchasing power to influence how the technology is made and used.
“We’re one of the largest governments in the world,” Jason Elliott, the governor’s deputy chief of staff, said in an interview. “We procure more advanced technology than any sub-national government. We hope that it becomes a model for what a number of other governments around the world may pursue in terms of safe, transparent, trustworthy AI.”
The report sets some initial principles for state government use of AI. For instance, agencies are establishing a clear separation between state-approved tools and those available to the private sector, so that state data isn’t inappropriately mixed with private uses, similar to bans on personal social media on government devices.
The principles also call for “plain language explanations” of how generative AI is used in delivering a state service. Content generated by the technology should also carry a disclosure, the report said.
“People say these buzzwords, and they use ‘GenAI,’ and this sounds cool, but it doesn’t create a lot of public trust on how the government actually utilizes these tools to deliver service,” Amy Tong, head of the California Government Operations Agency, said in an interview.
State employees should also review the accuracy of generative AI products, the report said, and not take any outputs from an AI tool verbatim. There should be a human component in the process, she said.
The Newsom administration is looking at a wide range of beneficial use cases for the technology, such as recommending government services to constituents and creating public awareness campaign materials. Tong said agencies were particularly interested in using AI to reach communities with language barriers, such as translating videos or documents into multiple languages with a single prompt.
AI could also speed and scale existing government work. “It’s risk and reward, right? Scale is a good thing,” Tong said, “but also, if it’s a negative result, you don’t want to scale so fast.”
The report looked at how the use of AI by government agencies could undermine services. An AI-powered chatbot could generate inaccurate responses to people querying about government services, for instance, effectively creating misinformation.
The state is also concerned about algorithmic bias, where bad data could influence the technology, for example, directing a state employee to reject unemployment insurance claims more often based on demographic characteristics. Agencies will deploy AI most carefully for “high-risk” government decisions that involve housing, employment, and other important sectors.
Privacy and confidentiality may be harder for the government to maintain as well. If a generative AI model is trained on a data set of medical records, for instance, it could inadvertently divulge sensitive personal information.
The report also addressed some harms that could happen outside the government setting, such as job losses or deepfakes that could trick consumers into sharing sensitive data.
Addressing all those potential problems would likely go beyond the executive order and require action by lawmakers.
“When it comes to how we want to protect Californians more broadly, that may need legislation, and we have some partners in the Legislature who have already indicated to us publicly and privately that they want to pursue that,” Elliott said.
The administration’s task of balancing the risks and benefits of AI parallels the tech industry’s own debate. Questions about how fast to develop the technology reportedly drove the controversial decision by ChatGPT maker OpenAI’s board to oust chief executive Sam Altman.
Dee Dee Myers, who heads the California Governor’s Office of Business and Economic Development, said state officials were paying close attention to what’s happening in the private sector as they talk to companies about the best approach to crafting AI policy.
Elliott pointed out that the OpenAI debacle is unfolding practically on the governor’s doorstep. The governor has engaged with people on all sides of the debate in past discussions, including Altman, he said. While the industry and others figure things out, the state can take the lead.
“We do think it’s a responsibility and an opportunity to be the home of this industry,” said Elliott.