The Daily Scoop Podcast – October 24, 2025
Episode: GSA Nominee Open to Reviewing Grok AI Selection Process
Host: Billy Mitchell
Theme: Federal technology and management headlines, focusing on the GSA's procurement of Grok AI and recent judicial issues with AI-generated documents.
Episode Overview
In this episode, Billy Mitchell covers two major stories at the intersection of government leadership, technology, and workforce issues:
- The General Services Administration (GSA) administrator nominee, Edward Force, addresses Senate concerns about the controversial selection and use of Grok AI.
- Federal judges explain recent errors in court orders attributed to unsupervised generative AI use, prompting new policy clarifications.
Key Discussion Points & Insights
1. GSA Nominee and the Grok AI Procurement Controversy
[00:15] – [02:45]
- Background: Edward Force, nominated to lead the GSA, testified before the Senate Homeland Security and Governmental Affairs Committee, fielding tough questions about the agency's deal with xAI's Grok chatbot.
- Senatorial Concerns: Ranking member Gary Peters (D-MI) pressed Force on the decision to procure Grok, referencing an incident in which the AI "produced racist and anti-Semitic content widely across Elon Musk's social media platform." Peters questioned the thoroughness of the GSA's risk-assessment process.
- Force's Stance:
  - He was not involved in the original decision-making behind the Grok procurement.
  - He expressed openness to evaluating the process if confirmed, highlighting a commitment to transparency and due diligence.
  - Quote (Edward Force, 01:55): "Procuring a tool with a history of racist and anti-Semitic posting is not, I think, the signal we would necessarily want to send the country."
  - When pressed to pause the use of Grok, Force stopped short of committing to a suspension but promised: "I'll meet with the team and I'll understand the process used in selecting them and I'll make sure that we have all the facts and if there was incompleteness to the process, that will rectify it." (02:20)
- Context: The reporting notes that Grok had previously referred to itself in highly offensive terms ("MechaHitler"), heightening scrutiny.
2. Federal Courts and AI-Generated Errors
[02:45] – [06:00]
- Incident Details: Two federal judges, Henry T. Wingate (Southern District of Mississippi) and Julien Xavier Neals (District of New Jersey), revealed that their court staff's use of generative AI led to error-laden orders, which were entered into the docket prematurely, before supervisory review.
- Error Examples: The orders included misquotes and references to irrelevant parties; both orders were subsequently withdrawn.
- Investigation and Response:
  - Senate Judiciary Chairman Chuck Grassley (R-IA) had sought explanations and made the judges' responses public.
  - Judge Neals confirmed Reuters' reporting that a "temporary assistant" (an intern) had used ChatGPT without authorization or disclosure, in violation of both chambers policy and law school policy.
  - Quote (Judge Neals, 05:10): "I prohibit generative AI use in legal research and the drafting of opinions and orders. While that policy was verbal in the past, it is now a written, unequivocal policy that applies to all law clerks and interns, pending definitive guidance."
  - Both judges emphasized reviewing and tightening internal policies, highlighting the need for clear federal guidance on AI use.
- Systemic Concerns: The incidents prompted broader discussion about whether the Administrative Office of the U.S. Courts should adopt standardized AI usage policies.
Notable Quotes & Memorable Moments
- Edward Force on Grok's track record:
  "Procuring a tool with a history of racist and anti-Semitic posting is not, I think, the signal we would necessarily want to send the country." (01:55)
- Edward Force's commitment to oversight:
  "I'll meet with the team and I'll understand the process…if there was incompleteness to the process, that will rectify it." (02:20)
- Judge Julien Xavier Neals on unauthorized AI use:
  "An intern had acted without authorization, without disclosure and contrary to not only chambers policy but also the relevant law school policy." (04:45)
  "I prohibit generative AI use in legal research and the drafting of opinions and orders...it is now a written, unequivocal policy..." (05:10)
Important Segment Timestamps
- GSA Nominee Grok AI Discussion: 00:15 – 02:45
- Federal Judges Addressing AI-Related Court Order Errors: 02:45 – 06:00
Tone & Takeaways
- The tone is news-focused and straightforward, reflecting the gravity of both the Senate’s concerns and the judiciary’s quick policy action.
- Highlights the growing pains of AI adoption in sensitive federal domains and the emphasis on transparency, oversight, and policy adaptability.
For further detail on federal tech policy and AI use in government, visit FedScoop.
