Transcript
A (0:02)
Hello and welcome to StateScoop's Priorities podcast. I'm Sophia Fox-Sowell, a reporter for StateScoop. This week I'm talking to Zach Boyd, director of Utah's Office of Artificial Intelligence Policy, about how state and local governments should investigate incidents where generative AI tools inadvertently cause harm and who should ultimately be held responsible. But first, here are the biggest state IT stories of the week. The Department of Justice has delayed the compliance deadline for states under its web content accessibility rule, giving states and large cities an additional year to bring their digital assets and content into compliance with the Americans with Disabilities Act. Philadelphia has published a new data dashboard aimed at helping city agencies coordinate efforts to revitalize the low-income neighborhood of Kensington. One official said the effort is part of Mayor Cherelle Parker's vision to create a government the public can see, touch and feel. Veritone, the AI company, announced a partnership with the nonprofit Cold Case Foundation, lending the group new technology it hopes will help solve old cases. Earlier this month, the Aspen Policy Academy, a nonpartisan policy training program, released a guide urging state officials to build more formal systems to investigate incidents when generative AI tools make mistakes or cause harm, such as algorithmic discrimination in hiring, housing and other government services, which can erode public trust. The guide proposes a standardized incident investigation framework modeled after similar safety practices in aviation and healthcare, bringing together government officials, developers and industry experts to explore root causes and implement prevention measures rather than just enforcement. 
The framework was specifically designed for Utah's Office of Artificial Intelligence Policy, a statewide agency that operates one of the nation's few AI regulatory sandboxes, which allows the state to test technologies under the close watch of regulators checking for legal and policy compliance. I interviewed Zach Boyd, director of the agency, about the recommended framework and how officials like him are grappling with how to manage the real-world risks that come with generative AI. A few weeks ago, the Aspen Policy Academy published a set of recommendations for the Utah Office of Artificial Intelligence Policy, basically recommending that the agency adopt a post-incident investigative framework to look into artificial intelligence incidents, basically when AI tools make mistakes that could possibly cause harm. The recommendations are modeled after the National Transportation Safety Board, which is an independent oversight body that, you know, investigates aviation accidents. They publish reports so that the general public can view them, and they bring together a lot of industry stakeholders. What does the office make of that?
B (3:05)
