The Artificial Intelligence Show
Episode #172: Sora 2, Claude Sonnet 4.5, ChatGPT Instant Checkout, How OpenAI Uses AI, Grokipedia & Mercor’s AI Productivity Index
Hosts: Paul Roetzer & Mike Kaput
Date: October 7, 2025
Episode Overview
In a blockbuster episode, Paul and Mike break down a whirlwind of essential AI news, focusing on:
- The controversial launch of OpenAI’s Sora 2 and its viral new app (and its copyright/legal quagmire)
- Anthropic’s Claude Sonnet 4.5, claimed to be the world’s best coding model
- ChatGPT’s transformation into a shopping and (soon) ad platform
- How OpenAI itself is using its own AI internally
- Elon Musk’s new Grokipedia project challenging Wikipedia
- Mercor’s AI Productivity Index benchmarking knowledge work
- The evolving impact of AI on jobs, labor markets, and the SaaS economy
- Key regulatory, funding, and industry updates
Throughout, the hosts maintain an insightful but approachable tone, challenging listeners to think both strategically and skeptically about the AI-driven future.
Key Discussion Points and Insights
1. OpenAI’s Sora 2 & Viral Video App: Tech Triumph or Legal Disaster?
(07:24–31:30)
- Sora 2 is OpenAI’s most advanced video generation model, touted as a “ChatGPT moment for video.”
- Boasts hyper-realistic physics (basketballs rebound as they would in real life, buoyancy is modeled convincingly in backflips), plus enhanced audio and expanded style options.
- The Sora app is invite-only and limited to the US and Canada for now, with expansion planned.
- “Cameo” Feature: Users film themselves and can insert their likeness into AI videos—co-ownership and revocable permissions are central (aiming to address deepfake/consent concerns).
- App’s UI mimics TikTok/Reels—a vertical feed filled entirely with AI-generated videos (“AI slop feed”), including viral “memes” starring Sam Altman.
Paul’s First-Person Account
“Is this actually Sora? Here’s your AI slop feed with all these Nintendo characters and Pokemon and South Park and SpongeBob SquarePants, everything...it was just like all this copyrighted stuff immediately. That was all that, and Sam Altman was all you see in the feed. And so I was immediately like, oh, I will never use this. Like, this is not interesting to me at all as a user.” — Paul (00:00, 09:47)
- Copyright Chaos: Despite guardrails, the early feeds were flooded with copyrighted characters (Star Wars, Batman, Mario, etc.).
- Attempts to generate such content now prompt warnings, but the damage is done and reveals deep issues in training data and moderation.
- OpenAI’s PR Response:
- Paul, on OpenAI’s claim that it “of course spent a lot of time discussing this before launch”: “I think that’s a lie.” (20:23)
- OpenAI pledges new opt-in controls for rights holders, revenue sharing, but hosts remain skeptical.
- Notable legal commentary from IP lawyer Krista Laser:
- It’s unresolved legally, but users who output copyrighted material are at risk unless OpenAI secures licenses with rights holders (see full response in show notes).
- OpenAI is shifting toward opt-in for rights holders because “it’s legally a lot safer.”
- Bigger Questions:
- “You don’t train a model on all this copyrighted stuff, allow people to output it and not know that you’re going to get massive blowback. You absolutely know that. So it’s disingenuous to even...that paragraph really bothered me.” — Paul (20:23)
- AI-generated video “slop” raises concerns for deepfake abuse, copyright collapse, and the future of creative industries.
- “The leading AI labs are fully into their don’t give an F phase when it comes to copyright and IP law.” — Paul (16:09)
Societal Reactions and Quotes
- MrBeast (YouTube star) asks:
“When AI videos are just as good as normal videos, I wonder what that will do to YouTube and how it’ll impact the millions of creators currently making content for a living. Scary times.” (26:36)
- Vinod Khosla (VC):
“All the replies...are from tunnel vision creatives. Let the viewers of this slop judge it, not ‘Ivory tower Luddite, snooty critics or defensive creatives’...opens up so many more avenues of creativity if you have imagination.” (26:37)
- Paul’s take: “If you want to turn the entire creative industry against AI labs and VCs who are funding those labs, this is exactly the tone you take.” (27:03)
“I have no idea why I would ever go back into that app...from an entertainment value or an educational value. It’s zero to me.” — Paul (30:46)
2. Claude Sonnet 4.5: The World’s Best Coding Model?
(31:30–41:59)
- Anthropic’s new Sonnet 4.5:
- Handles complex tasks (builds apps, manages DBs, performs security audits)
- In one demo, generated 11,000 lines of code for a Slack-like app in a 30-hour session.
- SDK lets developers build custom, long-running agents (a minimal sketch follows at the end of this section).
- Claims to be “the most aligned model yet”—fewest cases of deception/exploitation.
- Strategic Focus: Anthropic is betting on AI for coding/agents, seeing quick revenue in automating software production and AI research itself.
- “The fastest path to build more powerful AI is to automate AI research.” — quoted by Paul from a podcast interview with Anthropic’s Sholto Douglas (33:11)
- Software market (SaaS) is $300B a year; US labor market is $13T—massive economic stakes.
- “The bitter lesson”:
- Ultimately, scaling general algorithms and compute outpaces handcrafted human rules and heuristics.
- “Over time...models are better than us. Like over time they just figure things out better than a human could.” — Paul (38:24)
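To make the “long-running agents” point concrete, here is a minimal, hypothetical agent-loop sketch using Anthropic’s Python client. It is an illustration under assumptions, not a reproduction of Anthropic’s agent tooling: the model identifier, the `run_shell` tool, and the loop cap are placeholders.

```python
# Hypothetical sketch of a long-running coding-agent loop.
# Assumes the `anthropic` Python package; model name and tool are illustrative.
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One illustrative tool the model may call repeatedly over a long session.
TOOLS = [{
    "name": "run_shell",
    "description": "Run a shell command in the project repo and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_shell(command: str) -> str:
    """Placeholder executor; a real agent would sandbox and log this."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return (result.stdout + result.stderr)[-4000:]  # keep the transcript small

messages = [{"role": "user", "content": "Add a /healthz endpoint to the Flask app and run its tests."}]

for _ in range(50):  # safety cap on agent turns
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumed identifier; check current model names
        max_tokens=4096,
        tools=TOOLS,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})
    if response.stop_reason != "tool_use":
        break  # the model finished (or asked a question); stop looping
    # Execute each requested tool call and feed the results back to the model.
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            output = run_shell(**block.input)
            tool_results.append({"type": "tool_result", "tool_use_id": block.id, "content": output})
    messages.append({"role": "user", "content": tool_results})
```

The loop structure is the whole idea behind “long-running” agents: the model plans, calls tools, reads the results, and keeps going until it decides it is done.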
3. ChatGPT’s Instant Checkout and the Move Toward Commerce/Ads
(42:01–46:57)
- Instant Checkout: Buy products directly inside ChatGPT.
- Chat-based recommendations; buy without leaving the app (supports credit card, Apple/Google Pay, Stripe).
- Powered by a new open “agentic commerce protocol” (a purely hypothetical merchant-side sketch follows at the end of this section).
- Merchants retain seller of record status (control payments, fulfillment, data).
- Etsy is among initial partners; Shopify coming soon.
- Ads on the way: Adweek reports OpenAI is hiring for tools to create/manage native ads in ChatGPT.
“It’s so natural to just inject ads in and be all the better if I can just click one button and I can just make my purchase right from there.” — Paul (44:08)
- Paul’s caution:
- “My general experience right now with ChatGPT has been that I often would trust the links less than I would if it was served to me in Google...I often like go out and then verify that it’s like legitimate companies.”
- Rapid change expected; brands must “ask the right questions,” especially heading into 2026.
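The episode does not cover the agentic commerce protocol’s actual spec, so the following is a purely hypothetical Python/Flask sketch of the merchant side of an agent-initiated checkout. The route, payload fields, and `charge_with_stripe` helper are invented for illustration; the point is simply that the merchant stays seller of record and keeps payment, fulfillment, and customer data.

```python
# Purely hypothetical merchant-side handler for an agent-initiated checkout.
# Endpoint path, payload fields, and helpers are illustrative assumptions,
# not the published agentic commerce protocol.
from dataclasses import dataclass
from flask import Flask, jsonify, request

app = Flask(__name__)

@dataclass
class CheckoutRequest:
    sku: str
    quantity: int
    payment_token: str      # tokenized card / wallet credential passed along by the agent
    shipping_address: dict

def charge_with_stripe(payment_token: str, amount_cents: int) -> str:
    """Placeholder: the merchant, as seller of record, runs its own payment flow."""
    return "ch_hypothetical_123"

@app.post("/agentic-checkout")  # hypothetical route an AI agent would call
def agentic_checkout():
    order = CheckoutRequest(**request.get_json())
    price_cents = 1999 * order.quantity                      # merchant keeps pricing control
    charge_id = charge_with_stripe(order.payment_token, price_cents)
    # Merchant also keeps fulfillment and customer data; the agent only receives a receipt.
    return jsonify({"status": "confirmed", "charge_id": charge_id})

if __name__ == "__main__":
    app.run(port=8080)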
4. OpenAI’s Internal AI: Running on Its Own Tech
(49:40–53:43)
- New blog series reveals tools like GTM Assistant (Slack bot for sales), docugpt (contract parsing), research and support agents—illustrating how OpenAI “eats its own dog food.”
- These internal tools challenge the entire SaaS industry:
“SaaS companies are in trouble.” — Paul (50:50)
- Example: OpenAI builds sales, marketing, and support tools that could replace products from the likes of HubSpot, whose stock dropped on the news:
“OpenAI announced internal software applications that could potentially compete with existing SaaS offerings. The news sparked concerns across the sector.” (Yahoo Finance, 51:26)
5. GPT-5 Backlash and Sam Altman’s Shifting AGI Definitions
(53:43–56:39)
- Wired interview: Altman claims GPT-5 is underappreciated, now fueling real scientific discovery.
- His new AGI definition: Not a single moment, but a process/curve.
- Paul: “He gives a different definition every single interview.”
- Despite criticism, Paul is bullish on GPT-5’s practical strength, especially for advanced tasks.
6. Grokipedia: Elon Musk’s Alternative to Wikipedia
(57:27–62:23)
- Grokipedia: xAI’s open-source, Grok-powered “truth repository.” Musk claims Wikipedia is “hopelessly biased” by activists.
- Grok itself, when prompted, says:
“AI isn’t immune to bias. It inherits it from the humans who design, train and curate it.…[Grokipedia] risks swapping one bias for another, just with a shinier AI wrapper.” (Grok’s answer, paraphrased by Paul at 59:52)
7. Mercor’s AI Productivity Index (Apex): Benchmarking Knowledge Work
(67:45–73:27)
- Mercor’s new benchmark grades AI’s real-world ability in consulting, finance, law, and medicine.
- GPT-5 leads, but even top models struggle with nuanced tasks (e.g., contract redlining).
- Index designed with expert panels from McKinsey, BCG, Goldman Sachs, law firms, and top advisors.
- Paul urges companies to create their own benchmarks (a minimal harness sketch follows this section):
“Don’t wait around for Mercor or OpenAI or whoever else to figure out the benchmarks for your company. Develop them yourself.” (69:31)
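As one way to act on that advice, here is a minimal, hypothetical Python harness for an internal benchmark. The tasks, the keyword rubric, and the model identifier are assumptions meant to show the pattern, not Mercor’s Apex methodology; in practice you would swap in real workflow tasks and expert graders.

```python
# Minimal, hypothetical harness for an internal AI benchmark.
# Task prompts, the rubric, and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Replace with real tasks from your own workflows (briefs, redlines, analyses).
TASKS = [
    {
        "id": "contract-redline-01",
        "prompt": "Redline this NDA clause for a mutual 2-year confidentiality term: ...",
        "must_include": ["mutual", "two (2) years"],  # crude rubric; use expert review in practice
    },
    {
        "id": "campaign-brief-01",
        "prompt": "Draft a one-paragraph brief for a product launch email to existing customers.",
        "must_include": ["call to action"],
    },
]

def score(output: str, must_include: list[str]) -> float:
    """Fraction of required rubric items present in the model's output."""
    hits = sum(1 for item in must_include if item.lower() in output.lower())
    return hits / len(must_include)

results = []
for task in TASKS:
    response = client.chat.completions.create(
        model="gpt-5",  # assumed identifier; swap in whatever model you are evaluating
        messages=[{"role": "user", "content": task["prompt"]}],
    )
    output = response.choices[0].message.content or ""
    results.append((task["id"], score(output, task["must_include"])))

for task_id, task_score in results:
    print(f"{task_id}: {task_score:.0%}")
```

Even a crude harness like this gives a repeatable baseline you can rerun whenever a new model ships, which is the habit Paul is arguing for.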
8. AI’s Impact on Jobs, the Labor Market, and Skills
(73:27–76:55)
- Job shifts:
- Lufthansa to cut 4,000 admin jobs, citing increased AI efficiency.
- BCG: 90% of employees use AI, building thousands of custom GPTs for workflow support.
- Citibank: 175,000 employees undergoing mandatory prompt training.
- Yale data: Despite the hype, there’s “no evidence” generative AI has yet disrupted US employment patterns. The hosts are skeptical this will still be true in 12–18 months.
- On skills:
“Prompt training is critical, but fundamental understanding of the technology is also critical if you actually want to reskill and upskill people in the organization.” — Paul (76:05)
9. Rapid Fire News, Regulation, and Funding
(47:15–77:02)
- OpenAI’s explosive growth:
- $4.3B in revenue for the first half of 2025, with $2.5B in cash burn over those six months; projected $13B revenue on $8.5B burn for full-year 2025.
- California’s AI Transparency Law:
- The most significant state-level AI regulation yet; labs object to the prospect of a 50-state patchwork of compliance requirements.
- Thinking Machines Lab (ex-OpenAI CTO Mira Murati):
- Released Tinker, an API for fine-tuning LLMs with less engineering overhead.
- Periodic Labs:
- A $300 million seed round to build AI-driven robotic labs for scientific discovery.
- Google:
- Gemini-powered visual search, Google Home app upgrades, and paid “Home Premium.”
- Apple:
- Shifts from Vision Pro to developing AI-powered AR glasses.
Memorable Quotes & Moments
- “All the replies...are from tunnel vision creatives. Let the viewers of this slop judge it, not ‘Ivory tower Luddite, snooty critics or defensive creatives’...” — Vinod Khosla (read by Paul) (26:37)
- “If you want to turn the entire creative industry against AI labs and VCs who are funding those labs, this is exactly the tone you take.” — Paul (27:03)
- “It’s game over for deepfakes...and for attention spans.” — Mike (29:24)
- “There is nothing that has happened in the six days since this came out that they couldn’t have predicted and probably didn’t predict.” — Paul, on Sora 2’s viral controversy (20:23)
- “Follow the money. Look at how much money the VCs are putting into every AI lab. I can guarantee you the labor market, the SaaS market, is the ultimate target.” — Mike (41:09)
- “SaaS companies are in trouble.” — Paul (50:50)
- "Don't wait around for [AI vendors] to figure out the benchmarks for your company. Develop them yourself." — Paul (69:31)
- "Prompt training is critical...But fundamental understanding of the technology is also critical if you actually want to reskill and upskill people." — Paul (76:13)
- “We would run out of time if we skipped an episode in October. There’s just too much to talk about.” — Paul (79:08)
Timestamps by Topic
| Segment | Main Topic/Segment Description | Start Time |
|---------|--------------------------------------------------------|------------|
| 1 | Welcome, Episode Context, and MAICON Plug | 00:00 |
| 2 | Sora 2 & Video App: Tech, Personal Experience, Legal | 07:24 |
|   | - Viral Sora “slop” & Copyright Blowback | 09:47 |
|   | - Legal Response (Krista Laser) | 20:22 |
| 3 | Broader Industry Response & Societal Impacts | 26:36 |
| 4 | Claude Sonnet 4.5: Coding, Automation, and “Bitter Lesson” | 31:30 |
| 5 | ChatGPT Instant Checkout, Shopping & Ads | 42:01 |
| 6 | Rapid Fire: Finance, OpenAI on OpenAI Tools | 47:15 |
| 7 | SaaS Industry Threat, HubSpot Stock Shock | 50:50 |
| 8 | GPT-5 Backlash & Altman's AGI Rethink | 53:43 |
| 9 | Grokipedia (Musk/Wikipedia Rival), AI Bias | 57:27 |
| 10 | Thinking Machines’ Tinker Release | 62:23 |
| 11 | California AI Regulation | 64:30 |
| 12 | Mercor/Apex Benchmarking Knowledge Work | 67:45 |
| 13 | AI’s Real (and Not-yet) Impact on Jobs | 73:27 |
| 14 | News Roundup: Google, Apple, OpenAI Dev Day | 77:02 |
Conclusion: The Meta Takeaways
This episode rigorously unpacks the breathtaking acceleration—and growing pains—of the AI industry in autumn 2025. The key themes are:
- AI’s technical breakthroughs are increasingly colliding with unresolved legal, ethical, and societal challenges—especially in the creative and knowledge work sectors.
- There is an open drive toward monetization and platformization (commerce, ads, deeper platform features) by ChatGPT/OpenAI and their rivals.
- The economic logic guiding all this remains stark: The true targets are SaaS revenues and the $13 trillion labor market—with VCs and AI labs singularly focused on automating knowledge work.
- Regulatory frameworks are beginning to emerge (often at the state level), but the ability to balance innovation, safety, and economic stability remains deeply uncertain.
- To thrive, businesses and professionals must rapidly build AI literacy, develop internal evaluation/benchmarking systems, and question strategic assumptions constantly.
“You gotta know where the tech is going…this is not something you can just sit back and not worry about for a quarter.” — Paul (46:57)
For more details, links to referenced podcasts, articles, and legal opinions, refer to the episode show notes at SmarterX AI.
