
Edward Forst told lawmakers Thursday that he wasn't part of the GSA's decision to procure Grok, but signaled openness to reviewing the selection process if confirmed to lead the agency.
Today on the Daily Scoop Podcast from the Scoop News Group, the nominee to be the administrator of the GSA says he's open to reviewing the selection process for Grok AI, and two federal judges explain how the use of Perplexity and ChatGPT led to error-ridden orders in federal courts. It's Friday, October 24, 2025. Welcome to the Daily Scoop Podcast, where you'll hear the latest news and trends facing government leaders. I'm the host of the Daily Scoop Podcast, Billy Mitchell. Thanks so much for joining me. And now let's dive into the day's top headlines.

Edward Forst told lawmakers on Thursday that he wasn't privy to the decision-making behind the General Services Administration's deal with xAI's Grok, but if confirmed to lead the agency, he signaled openness to examining the process that led to the procurement of the generative AI chatbot known for having an anti-Semitic meltdown. During a Senate Homeland Security and Governmental Affairs Committee hearing, ranking member Gary Peters, Democrat of Michigan, asked the GSA administrator nominee if he shared his concerns about Grok, pointing to the day the tool, quote, "produced racist and anti-Semitic content widely across Elon Musk's social media platform," unquote. Forst, a former private equity and financial services executive, told Peters that he had not been a part of the decision by the GSA to contract for the chatbot from the Musk-owned AI firm. With some additional pressing by Peters, Forst acknowledged that procuring a tool with a history of racist and anti-Semitic posting is "not, I think, the signal we would necessarily want to send the country." Peters attempted to get Forst to commit to pausing use of Grok until the committee reviewed documentation about the details of the procurement, including whether the GSA actually performed a comprehensive risk assessment. Forst wouldn't go that far on Grok, which once referred to itself as "MechaHitler."
But he did say his commitment to the lawmakers is that he will, quote, "meet with the team and I'll understand the process used in selecting them and I'll make sure that we have all the facts, and if there was incompleteness to the process, that we'll rectify it."

And now, moving on to other news: a pair of federal judges said staff use of generative artificial intelligence tools and premature docket entries were behind error-ridden orders they issued, according to letters made public this week by Senate Judiciary Chairman Chuck Grassley. Judges Henry T. Wingate and Julien Xavier Neals, who sit on the U.S. District Courts for the Southern District of Mississippi and the District of New Jersey, respectively, both stated in letters that their law clerks had used AI tools to draft orders that were then entered into the dockets before they had been reviewed. Both judges also described measures to prevent repeat issues. The letters come after orders from both judges were issued with errors, including misquotes and references to parties not in the current cases, and then later withdrawn. Speculation swirled as to whether those judges had used AI, which is known to hallucinate, in their orders. Earlier this month, Grassley, Republican of Iowa, sent letters to both jurists asking for an explanation. The communications published Thursday are responsive to those inquiries. In his response, Neals indicated that previous reporting by Reuters that a, quote, "temporary assistant," unquote, had used ChatGPT was correct, explaining that an intern had acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy. Neals said he prohibits generative AI use in legal research and the drafting of opinions and orders.
While that policy was verbal in the past, he said, it is now a written, unequivocal policy that applies to all law clerks and interns, pending definitive guidance from the AO (the Administrative Office of the U.S. Courts) through adoption of formal, universal policies and procedures for appropriate AI usage. Neals also indicated that the draft appeared on the docket before routine reviews were carried out, which Wingate also noted in his letter.

For more news at the intersection of the federal government and technology, make sure to visit fedscoop.com. Thanks so much for tuning in to another episode of the Daily Scoop Podcast, available on all podcast platforms. If you've already rated the podcast on your platform of choice, thanks so much. High ratings and good reviews of the show help more people find it. The Daily Scoop Podcast is a production of the Scoop News Group in Washington, D.C. Adam Butler and Carlin Fisher help put the show together, and the entire Scoop News Group team contributes. We'll be back next week with more top headlines. Until then, I'm your host, Billy Mitchell. Thanks so much for listening.
Episode: GSA Nominee Open to Reviewing Grok AI Selection Process
Host: Billy Mitchell
Theme: Federal technology and management headlines, focusing on the GSA's procurement of Grok AI and recent judicial issues with AI-generated documents.
In this episode, Billy Mitchell covers two major stories at the intersection of government leadership, technology, and oversight:
[00:15] – [02:45]
Background:
Edward Forst, nominated to lead the GSA, testified before the Senate Homeland Security and Governmental Affairs Committee, fielding tough questions about the agency's deal with xAI's Grok AI chatbot.
Senatorial Concerns:
Ranking member Gary Peters (D-MI) pressed Forst on the decision to procure Grok, referencing an incident in which the AI "produced racist and anti-Semitic content widely across Elon Musk's social media platform." Peters questioned the thoroughness of the GSA's risk assessment process.
Forst's Stance:
“Procuring a tool with a history of racist and anti-Semitic posting is not, I think, the signal we would necessarily want to send the country.”
"I'll meet with the team and I'll understand the process used in selecting them and I'll make sure that we have all the facts, and if there was incompleteness to the process, that we'll rectify it." (Edward Forst, 02:20)
Context:
Reporter notes that Grok had previously referred to itself in highly offensive terms ("MechaHitler"), heightening scrutiny.
[02:45] – [06:00]
Incident Details: Two federal judges, Henry T. Wingate (Southern District of Mississippi) and Julien Xavier Neals (District of New Jersey), revealed that their court staff's use of generative AI led to error-laden orders. These were entered into the docket prematurely, before supervisory review.
Error Examples:
Orders included misquotes and references to irrelevant parties; the orders were subsequently withdrawn.
Investigation and Response:
“I prohibit generative AI use in legal research and the drafting of opinions and orders. While that policy was verbal in the past, it is now a written, unequivocal policy that applies to all law clerks and interns, pending definitive guidance.”
Systemic Concerns:
The incidents prompted broader discussions about whether the Administrative Office of the U.S. Courts should adopt standardized AI usage policies.
Edward Forst on Grok's track record:
“Procuring a tool with a history of racist and anti-Semitic posting is not, I think, the signal we would necessarily want to send the country.” (01:55)
Edward Forst's commitment to oversight:
"I'll meet with the team and I'll understand the process…if there was incompleteness to the process, that we'll rectify it." (02:20)
Judge Julien Xavier Neals on unauthorized AI use:
“An intern had acted without authorization, without disclosure and contrary to not only chambers policy but also the relevant law school policy.” (04:45)
“I prohibit generative AI use in legal research and the drafting of opinions and orders...it is now a written, unequivocal policy...” (05:10)
For further detail on federal tech policy and AI use in government, visit FedScoop.