
A
Today on the Daily Scoop Podcast from the Scoop News Group, brought to you by Salesforce: inside GSA's rollout of USAi, GSA chief AI officer Zach Whitman shares exclusive details on the new initiative. It's Tuesday, September 30, 2025. Welcome to the Daily Scoop Podcast, where you hear the latest news and trends facing government leaders. I'm the host of the Daily Scoop Podcast, Billy Mitchell. Thanks so much for joining me. And now let's dive into the day's top headlines. The Department of Health and Human Services has tapped DOGE affiliate Zach Terrell to be its chief technology officer, sources have told FedScoop. Terrell's CTO title was confirmed by three officials who were granted anonymity to be more candid. Taking on the role of CTO comes after his involvement in Department of Government Efficiency work at both HHS and the National Science Foundation, including the cancellation of grants at the science agency. One of those sources told FedScoop that Terrell has been in the technology chief role since the beginning of this month and is still at the NSF as well. While his leadership role is new, Terrell has previously been involved in work at HHS, including as a member of the department's DOGE team, according to a recent legal filing by the government. Per that document, Terrell was listed as one of the 10 team members given access to at least one sensitive system as part of the DOGE work. Specifically, Terrell was one of five team members who weren't directly employed by the U.S. DOGE Service in the White House, according to his LinkedIn page, which now lists his CTO role. Terrell calls himself a yeoman in the U.S. government, a historical term for someone in service to nobility, per that page. He's also previously served as a founding full-stack engineer at Spindl, a blockchain-based advertising analytics company acquired by Coinbase, and he's also founder of a company called Wager.
In other news, Congress is poised to make yet another run at legislation to reform agency software purchasing practices with the reintroduction in the House last week of the Strengthening Agency Management and Oversight of Software Assets Act. The SAMOSA Act, as it's better known, which passed the House last December, would require federal agencies to comprehensively assess their software licensing practices, a move aimed at curbing duplicative tech, streamlining future purchases and reducing IT costs, Congresswoman Nancy Mace, Republican of South Carolina, chair of the House Oversight Subcommittee on Cybersecurity, Information Technology and Government Innovation, said in a press release. The GAO has found the federal government spends more than $100 billion annually on information technology and cybersecurity, including software licenses. Far too often, taxpayer dollars are wasted on systems and licenses agencies fail to use. The SAMOSA Act, Mace goes on to say, requires agencies to account for existing software assets and consolidate purchases, reducing redundancy, increasing accountability and saving potentially billions for American taxpayers. Representatives Shontel Brown, Democrat of Ohio, Pat Fallon, Republican of Texas, and April McClain Delaney, Democrat of Maryland, are the other co-sponsors of the legislation. Former House Oversight Committee ranking member Gerry Connolly was a strong supporter of the SAMOSA Act before his death from cancer in May. Brown, ranking member of the House Oversight Subcommittee on Cybersecurity, Information Technology and Government Innovation, said she was proud to reintroduce the bill that Connolly championed and that she's working on building additional bipartisan support for it. For more news at the intersection of the federal government and technology, make sure to visit fedscoop.com. As the federal government races to adopt AI, many agencies are looking to buy and build the same exact solutions.
Recognizing this, the General Services Administration earlier this year launched USAi, a platform that offers agencies access to leading commercial AI models that they can deploy in a streamlined manner, eliminating redundancy across government and leading to greater efficiencies at scale. Zach Whitman, chief data scientist and chief AI officer for the GSA, recently joined me for a discussion at the Agentic AI Government Summit and Jam Fest in Washington, D.C., to highlight the USAi effort, how it's progressing, the challenges GSA faces, and what's next. Now here's that interview with Zach Whitman. All right, well, Zach, welcome.
B
Thank you. There are cool amps up here.
A
I know you want to start a band.
B
I was just saying like, yeah, the tube screamer. I don't know, talk to the guitarist.
A
Do you play any instruments?
B
I mean, not anymore, but kids, you know, what are you doing?
A
We've got AI agents to play your instruments.
B
All of our hobbies get taken away.
A
Well, Zach, you've been really busy lately. Obviously there's been a lot going on at GSA with AI across all the many items on the AI Action Plan, and we want to dig into that a little bit. Obviously we have a ton to talk about, but under this administration, and it's been this way before, we've really seen GSA expand its role as a central tech figure that can scale and provide services to other federal agencies, and we've seen that with GSA and USAi now. And so I want to dig into that a little bit and just start by asking you: what do you see as your agency's role in building tools like AI and compiling tools that other agencies can use?
B
Yeah, I mean, well, first of all, I owe Billy a public apology, because when we were going over these questions, it was before we launched USAi, and so all of his questions were like, well, what about this platform? Well, we can't talk about it. I don't know what you're talking about. And then the next day we launched it and we had to go back and fix all these. So thank you for your flexibility. So GSA's role: we've always seen ourselves as an enabler, helping other agencies bootstrap into safe, transparent generative AI practices. And that can mean a lot of things. We've seen an evolution of agencies from the earlier days, when we saw Executive Order 14110 and then additional executive orders released that really emphasized the importance of deploying AI safely, transparently and with full observability into how these models perform. And that's not something that every agency has the bandwidth to take on. There's a lot of specialization that can happen in this field. And when you put this on agencies to say, okay, here's this frontier tech that you all have to employ, it's going to be a huge transformation. And a lot of these agencies are mission-specific on something else. It's unfair to put all that on the other agencies to take on from ground zero. Some are really well equipped for this, but for others it's not central to their mission. And so our job is to enable the safe adoption of these tools and make it so that CIOs, CISOs and CTOs have an easy option for a safe approach to empower their workforce and ultimately empower their workflows.
A
That's great. And like you mentioned, unbeknownst to me when we chatted last, USAi was just about to be rolled out, but luckily the FedScoop team, we broke the news on that. So shout out to the team there. But really, tell me, you know, where are you in that journey now that it's live, and what are the benefits for agencies that you're trying to pitch them on to bring them into that environment?
B
Yeah. So we want to make sure that everyone has an easy option to try out these tools. And we're trying to build a marketplace that is based on our Multiple Award Schedule acquisition platform, and really the idea is to provide a free tool for agencies up to a point where they can really kick the tires, try it out, and we can understand how much of a support service is required of GSA as we build this tool. We think that the market ultimately needs to solve this problem for the agencies. But in the meantime, we want to make sure that the agencies have an option where they can test out the best of breed and do it in a way that allows their developers to try these out quickly. You don't get any kind of walled-garden scenarios where certain models are not accessible because your IT shop is an AWS shop or an Azure shop. We wanted to make sure that it was a very fair playing field and the models spoke for themselves. And so that's what we tried to do: create this very agnostic, vanilla platform where the models do the heavy lifting and we just provide that conduit for safety. We also make sure that all of these models are constrained in terms of how they perform, what their safety mechanisms are, and whether they're fit for your specific agency purposes.
A
So it seems like there's probably a ton of benefits as it saves a lot of taxpayer money and time. You know, are there disadvantages or challenges that you've run into though, as you think about this model and the way that agencies might have an appetite for adopting it?
B
Yeah, I mean, I think that ultimately the disadvantages would be that this is a shared service, so you don't have full control over it and you don't necessarily have complete autonomy over which models you want to use. And I think one of the disadvantages with a shared service platform is it's meant for general-purpose application. There are a ton of really niche use cases that would require specific adversarial models or specific models that would not fit with a general-purpose platform and wouldn't necessarily bubble up to our immediate or priority list when we're talking about adding a specific tool or model or function to our platform. So there's definitely drawbacks here. But again, the main focus is to allow agencies to get up to speed quickly at a low to zero cost and then allow the market to take over after a certain point.
A
That's great. You know, another thing I think of, and you've kind of mentioned how agencies might have these specific use cases or be doing things on their own. One thing I'm curious about: obviously GSA is thinking about it one way, where you're building something and seeing if agencies will come to you. What about those agencies that are doing cool things out there on their own? Are you inviting them into this environment? How are you looking at them and trying to leverage the work that's already being done elsewhere?
B
Yeah, I mean, this is a great opportunity for the federal complex to work together and really lean into our collective strengths. So we're working with NIST, we're working with CISA, we're working with other scientific agencies, not only on best practices in terms of model evaluation, but also in terms of safety with CISA. And then we also can lean into our scientific agencies, the ones who are building their own models, and how we can make those models available to other agencies. There's a ton of really cool work that's being done across the federal complex. At Energy, I think Argonne National Laboratory built its own LLM specifically to assist scientists when they're doing research, and that's a general-purpose tool for other scientific applications. We should definitely try our best to make sure that that is broadly available to other scientists in the community, be it statistical agencies or others, and allow for that model to be used, and for the return on the compute and training that was spent by Argonne to benefit other agencies. So we think that this is an opportunity for us to use our scale for the broader good.
A
And I'm curious, you know, we've seen that scale is hard in the federal government. We've seen a lot of projects fail to get there. Not to say this one will, knock on wood, it's going to be successful, but we do see failure. So I'm curious, are there things that you think about, why these programs might fail to scale, and how you're thinking about this differently, looking to bridge that gap so that doesn't happen in this case?
B
I think this is a unique opportunity because, one, when we built it, we built it for ourselves. That was first and foremost; this was a GSAi thing. We were doing it to manage our own adoption. We wanted to make sure that we could see how folks were using this and work on our cultural adaptation to these tools. And so just augmenting this structure into a USAi framework, where we effectively replicate our infrastructure for others, really isn't that big of a lift. And so taking this opportunity to see how it could be beneficial to others was, to us, an immediate next step. Now, I think also that the idea is we want to make sure that we aren't in this business long term. We think that this is a support infrastructure for the immediate term. There is a point in time where this may not be as relevant, and we think that the market will step in and ultimately solve this problem. But right now, since there's so much dynamism between the different models (every week you have a new model that's potentially outperforming another), we wanted to make sure that the agencies didn't have any kind of acquisition barriers or technical challenges that would impede their workforce from using the best in class.
A
And I want to dive into that point, because like you said, there's a new model every week that's kind of, you know, the lead one or in vogue at the time, or there are entirely new models being released. So how do you continue to evolve so that you can bring those on board? Because right now it's a handful of models, and I'm sure there's appetite for more in the future.
B
Yeah, we want to make sure that, one, we're leaning into American AI as much as possible. And so the pipeline for bringing in new models is something that we've been thinking a lot about. First, every model goes through an evaluation set for safety, for performance, and then specific to the agency. So we have our own GSA evaluation sets, and we want to empower each tenant of USAi to be able to run those evaluations so they can understand whether or not a model is a good fit for them. And then we want to make sure that we don't lag too far behind any of the availability from the hyperscalers or from the market themselves. So our main mission is to minimize the time to market on model availability. Once a model is released, we want to bring it in as quickly as possible. And so automating all those steps is a big part of the platform.
A
Well, Zach, we're already out of time. I had so much more to ask you, but we'll have to cut it there. But let's give Zach a round of applause. That was fantastic. Really appreciate your time.
B
Thank you so much.
A
For more on federal AI adoption, make sure to visit fedscoop.com. Also in this episode, Salesforce Global Digital Transformation Executive Nadia Hansen joins SNG host Wyatt Kash in a sponsored podcast discussion on how agentic AI is reshaping the way government teams work and why agencies need top-level sponsorship, transparent governance and workforce training to realize its potential.
C
Welcome to our special series on agentic AI for government, brought to you by Salesforce. Artificial intelligence is evolving at breathtaking speed, making it challenging for government leaders to grapple with how and where to best leverage it. However, one area gaining momentum is the emergence of agentic AI and the shift from AI that can automate basic tasks or generate summaries to agentic AI that can autonomously achieve workplace tasks. Here to provide some frontline observations on the growing impact of agentic AI on government, and where agencies are making progress, is Nadia Hansen, Global Digital Transformation Executive at Salesforce. Nadia, thank you so much for joining us, and welcome.
D
Thank you for having me. I'm so excited about our conversation today.
C
We are too. I'm particularly interested to speak to you based on your experience in the field. Based on that experience supporting state and local government, what are the challenges that government leaders are facing today in this new age of AI when it comes to their workforces? How can leaders help their teams adapt successfully, and what gaps need to be filled first?
D
You know, I had the privilege of serving in government for 15 years prior to joining Salesforce. And in government, one of the biggest challenges we have is capacity. Agencies are being asked to do more with less, whether it's serving a growing population, managing crises or modernizing decades-old systems. And often we're constrained by budgets and talent shortages. So AI is arriving at a moment when governments desperately need new tools. But the real challenge is really preparing this workforce to adapt. From my own time as a CIO, I saw that it's not just about technology; it's about people, it's about processes. Teams need clarity on what skills they should develop first, reassurance that AI is really here to augment rather than replace, and training pathways to build that confidence. So this is where programs like Salesforce's Trailhead, which is, by the way, a completely free program, come in. It democratizes learning, so anyone from an analyst to an IT leader to a department head can log on, learn AI fundamentals and build practical skills at their own pace. So closing those skill gaps, especially in digital and data literacy, is really the first step to helping public servants succeed in this AI era.
C
And next, why do government leaders need to treat agentic AI differently from previous generations of AI? And is there an AI playbook to help agencies get started with it?
D
Yeah, this is my favorite topic. So earlier versions of AI, which by the way have been around for 10-plus years, were very predictive, kind of answering questions like, what might happen? Agentic AI is different. These are autonomous agents that can take action. They don't just recommend; they interact like a human executing tasks. So that's a huge opportunity, but it also means leaders must be very intentional about trust, about oversight, about governance from the very beginning. The good news here is there is a playbook. At Salesforce we emphasize responsible AI by design, and for government agencies, that playbook includes a few critical steps. I'll narrow it down to the top three. Number one is sponsorship from top leadership, so AI adoption is seen as a priority. It's not just an IT experiment, and it's not just something that lives in the cybersecurity department. The second is establishing a cross-functional committee. We need voices from finance, HR, legal, procurement, IT and business units to act as an oversight group. That group needs to be established to set the guardrails, to ensure compliance, and to help align AI with the organizational values and the outcomes that the organization is trying to achieve. And then thirdly, starting small with very clear use cases. As an example, simple things like summarizing agendas and notes from board or council meetings, or summarizing citizen inquiries, to build that confidence and really allow employees to embrace it too. Another one I want to mention, and I touched on it, is human oversight. We have to make sure that humans are always in the loop. AI agents are accelerating work, but the final accountability in the public sector stays with the employee. And then last but not least, measuring outcomes and iterating, so you scale only after trust and impact have been established.
So to recap, yes there is a playbook, but the difference this time is that success depends on aligning technology, aligning people and governance from day one.
C
Terrific. Next, can you provide some practical use cases where agentic AI is making a difference in day to day lives of government workers and how are they working alongside AI agents?
D
Yes, this is where it gets very exciting. So take a caseworker at a social services agency. Instead of spending hours pulling together information from five different systems, or a plethora of systems, an AI agent, which is your digital helper, your digital buddy, can assemble a case summary instantly, so the caseworker can actually focus on the family that's sitting across the table. I'll give you another example. Think about permitting and licensing. AI agents can handle that initial intake, flag the missing documents and draft approval notes, really freeing up staff to spend their time on more complex reviews. Another example is emergency management. In case of emergencies, AI agents can now monitor incoming calls, prioritize urgent cases based on the business rules that you set up, and push alerts to first responders faster than any manual process could. So in each of these examples, the worker always stays in control, with AI being a digital helper, not a replacement, more on the augmentation side.
C
What foundational steps do you recommend agencies need to take to go about onboarding AI agents to their team successfully?
D
There are some best practices. I have the privilege of working now with a lot of different state and local agencies all across the US and some in Canada, and what I'm seeing as some of the best practices: starting with skills. So let's start using free platforms like Trailhead to give employees hands-on exposure to AI concepts so they feel empowered and not intimidated. Secondly, pick the right use case. Coming from the public sector myself, let's begin with a low-risk but high-value workflow, like automating reports or providing executive dashboards for those data-driven insights. Thirdly, and I think this is probably the most important one, is establishing governance. You need to have very clear policies on data: how that data is going to be utilized, what ethics look like, what accountability looks like, and who is going to be providing those guardrails. So I think that is super important as we move through this journey. And then last but not least, I touched on it earlier, but keeping humans in the loop. What I really mean by that is ensuring that workers always have final oversight of the outcome, so AI is always supporting, not replacing, judgment. And I think that's important to clarify.
C
Well, and I'd like to go back to your experience in government. How do you believe agentic AI is likely to change the future of work for state and local government?
D
Agentic AI will shift the focus of government work from administrative day-to-day tasks to being more strategic. As an example, routine paperwork, scheduling, data entry, some of these manual, repetitive tasks will largely be automated, so the workforce can actually spend more time solving problems, engaging directly with communities, and designing streamlined, innovative, intuitive services. It's also going to redefine what talent looks like in government. We'll need fewer clerical roles and more people who can think critically about data, about ethics, about citizen experience or resident experience. And that, honestly, is a big cultural shift. It's also an opportunity to make government jobs more meaningful and impactful. Right now, even in the private sector, we're seeing entirely new job categories being created. As an example, I actually just posted about it on LinkedIn last week: Salesforce is hiring forward-deployed engineers. This job didn't exist 18 months ago. These are specialists who sit side by side with customers, who rapidly design, test and implement AI solutions in real-world settings. So imagine that same concept in government, roles where technologists and public servants are embedded together, co-creating and developing workflows that deliver faster, better outcomes for our communities. That's really the future: new categories of public service work that blend technology fluency with this human-centered mission delivery.
C
Well, Nadia Hansen, thank you so much for sharing your insights on the role of agentic AI, interestingly, some of the future roles in government that it may enable, and ultimately how agencies can get started taking full advantage of it. So thank you so much for joining us. It's a pleasure speaking with you.
D
Thank you. It's been wonderful. Appreciate you having me.
A
This segment was sponsored by Salesforce. Thanks so much for tuning in to another episode of the Daily Scoop Podcast, available on all podcast platforms. If you've already rated the podcast on your platform of choice, thanks so much. High ratings and good reviews of the show help more people find it. The Daily Scoop Podcast is a production of the Scoop News Group in Washington, D.C. Adam Butler and Carlin Fisher help put the show together, and the entire Scoop News Group team contributes. We'll be back tomorrow with more top headlines. Until then, I'm your host, as always, Billy Mitchell. Thanks so much for listening.
Date: September 30, 2025
Host: Billy Mitchell
Guests: Zach Whitman (Chief Data Scientist and Chief AI Officer, GSA), Nadia Hansen (Global Digital Transformation Executive, Salesforce)
This episode spotlights the General Services Administration’s (GSA) launch of USAi, a new shared platform to streamline federal access to commercial AI models. Host Billy Mitchell interviews GSA’s Chief AI Officer Zach Whitman for an exclusive discussion on USAi’s goals, architecture, key challenges, and scaling strategy. A sponsored segment then features Salesforce’s Nadia Hansen on how “agentic AI” is reshaping public sector teams, emphasizing the importance of governance, workforce training, and strategic AI adoption.
[05:32] – [07:10]
Quote:
“It’s unfair to put all that on the other agencies to take on from ground zero... Our job is to enable the safe adoption of these tools and make it so that CIOs, CISOs...have an easy option for a safe approach to empower their workforce.” — Zach Whitman (06:17)
[07:10] – [09:39]
Quote:
“We wanted to make sure that it was a very fair playing field and the models spoke for themselves...we just provide that conduit for safety.” — Zach Whitman (07:50)
Challenges/Drawbacks:
[09:39] – [11:08]
Quote:
“There’s a ton of really cool work being done across the federal complex. ...We think that this is an opportunity for us to use our scale to the broader good.” — Zach Whitman (10:37)
[11:08] – [12:41]
Quote:
“We wanted to make sure that the agencies didn’t have any kind of acquisition barriers or technical challenges that would impede their workforce from using the best class.” — Zach Whitman (12:21)
[13:02] – [13:44]
Quote:
“Our main mission is to minimize the time to market on model availability. Once it’s released, we want to bring it in as quickly as possible.” — Zach Whitman (13:33)
[15:08] – [17:18]
Quote:
“Teams need clarity on what skills they should develop first and reassurance that AI is here to augment rather than replace...” — Nadia Hansen (16:27)
[17:18] – [20:04]
Quote:
“Agentic AI is different. These are autonomous agents that can take action. They don’t just recommend, they interact like a human executing tasks. So that's a huge opportunity. But it also means leaders must be very intentional about trust, about oversight, about governance from the very beginning.” — Nadia Hansen (17:38)
[20:04] – [21:36]
Quote:
“In each of these examples, the worker always stays in control, with AI being a digital helper, not a replacement.” — Nadia Hansen (21:18)
[21:36] – [23:10]
[23:10] – [25:10]
Quote:
“It’s also going to redefine what talent looks like in government. ... More people who can think critically about data ... and that honestly is a big cultural shift.” — Nadia Hansen (24:02)
This episode offers a nuanced look at how GSA is centralizing and simplifying AI access for federal agencies via USAi, addressing both efficiency and safety. It also explores broader implications of autonomous AI in government and the importance of treating AI adoption as a strategic, human-centered, and well-governed process. Both segments underscore the need for continual learning, collaboration, and scalable yet flexible approaches as AI fundamentally transforms government work.