The State of Alaska is exploring how to implement generative artificial intelligence to make state operations more efficient while protecting residents' data.
Bill Smith, the state's chief information officer, told the Senate State Affairs Committee on Thursday that the Office of Information Technology has spent the last two years researching how to bring generative artificial intelligence, a class of AI systems that create text, images, video, audio and code, into state operations.
The state has focused on educating its employees about available tools and the responsible use of AI, and on making sure state data is appropriately protected, Smith said.
“The use cases in my mind is really where it all starts,” he said. “What is the problem we’re trying to solve and then we try to align it with a tool that can help us with that as opposed to just exploring some tools, seeing what they can do and then trying to find a problem to apply them to.”
Smith outlined three approaches to generative AI. He called end user AI “everyday AI,” in which AI functions are built into third-party software and platforms, and said these tools are the easiest for state employees to implement and use.
Smith said the state is piloting AI-powered tools for use with office productivity applications like email, documents, meetings, presentations and spreadsheets. “It’s a way to help people become more effective,” he said.
Developer AI involves custom-configuring existing AI tools to build new services.
“Think of it as using AI building blocks to put together a service,” he said.
According to Smith, the state has been focused on end user and developer AI.
Custom-built AI models are developed from scratch. “Those are big and complex, and they’re costly and time consuming,” Smith said.
Smith said the priority when looking at AI platforms is making sure state data is protected and that the AI model adheres to the state’s security practices.
“We are in a really good spot as a state based on previous work that we’ve done within our IT infrastructure,” Smith said. “When we went through that cloud migration we built out our cloud infrastructure with security, with a different subscription for each department so they had their own unique space, had all the compliance controls in that space…that cloud infrastructure is where these AI skills are being introduced.”
The state does not currently have a policy for testing for bias in AI models, according to Smith.
“I’m optimistic about the ability to root out and avoid and get rid of bias in artificial intelligence systems just simply because they are machines — as long as we’re aware of it and we’re looking for it and testing it we can find it,” he said.
Sen. Scott Kawasaki (D-Fairbanks) asked about the state’s use of chatbots, computer programs that simulate conversations with human users.
Smith said departments identified many instances in which a chatbot would be effective, so the state is working to develop natural language chatbots for state websites.
Ilana Beller, of Public Citizen, said that states should ensure that consumers know whether they’re dealing with a human or a computer when using chatbots online.
So far in the 34th Legislature, lawmakers have introduced three bills addressing the use of artificial intelligence.
A bill from Sen. Mike Cronk (R-Tok) and an elections-related bill introduced by the Senate Majority would prohibit the use of synthetic media, also called deepfakes, with the intent to influence an election.
Senate Bill 2, introduced by Sen. Shelley Hughes (R-Palmer), also requires disclosures for deepfakes. Her bill would require the state to inventory state agencies that “employ generative artificial intelligence for consequential decisions,” conduct impact assessments of systems that use generative AI, and require state agencies to disclose when they use AI.
Beller on Thursday defined deepfakes as content fabricated using technology that “depicts someone doing or saying something that they never said or did in real life.” In an election, she said, a deepfake “provides viewers with fundamentally different understanding of that person’s behavior or speech.”
According to Beller, 21 states have enacted legislation, with bipartisan support, to regulate deepfakes in elections. She said it’s important that laws prohibit the distribution of unlabeled deepfakes, establish clear standards for disclosure and set penalties for circulating unlabeled deepfakes.
Contact Haley Lehman at 907-459-7575 or by email at hlehman@newsminer.com.