Last Week in AI Podcast — Episode #232
ChatGPT Ads, Thinking Machines Drama, STEM
Date: January 28, 2026
Hosts: Andrei Kurenkov and Jeremy Harris
Episode Overview
This week, Andrei and Jeremy bring their signature blend of technical depth, industry context, and conversational humor to recap the most critical developments in AI. The episode touches on policy questions about AI and authoritarianism, the debut of ads in ChatGPT, major drama at Thinking Machines, breakthroughs and benchmarks in research, progress in open-source models, notable shifts in the U.S.–China chip race, and continued cultural and legislative clashes over generative media.
1. Big Picture: AI, Authoritarianism, and Safety
Listener Comment Discussion
[02:00-11:12]
- Topic: The threat of 'authoritarian lockdown' enabled by AI and whether leading labs still screen for this risk.
- Jeremy:
- Explains the concern: As AI increases surveillance and control mechanisms, the ability of a populace to resist an authoritarian regime mathematically erodes.
- “Whoever builds superintelligence first ... will have the power of a nation-state ... there will come a time where this sort of thing will become an issue.” [03:39]
- Andrei:
- Notes China’s advanced surveillance and AI deployments, but says violence remains the real barrier—AI doesn’t (yet) grant exclusive “licenses to violence.”
- “AI right now doesn’t seem to exacerbate [the monopoly on violence],” but humanoid robotics could change this. [05:12]
- Adds nuance: Surveillance plus coercion (like social credit) wield huge soft power.
- Screening in Labs?
- Some labs, like Google DeepMind, OpenAI, and Anthropic, do have internal governance teams thinking about these risks, but screening (especially in hiring and policy) is uneven. [10:10]
- “xAI doesn’t [have governance teams], for example, because they’re just trying to get spun up still.” [10:42]
- The boundaries between “alignment problems” and “collaboration problems” are shifting: “You’ll find someone happy to just give you the AI to do the bad things and that’s the real threat.” [11:12]
2. News and Tools
OpenAI Introduces Ads in ChatGPT
[11:50-15:55]
- OpenAI announces ad roll-out for free and Go-tier users ($8/mo); Plus and Pro remain ad-free.
- Strict guarantees: Ads won’t “influence answers,” are clearly labeled, and can be de-personalized.
- “Our mission is to ensure AGI benefits all humanity. Our pursuit of advertising is in support of that.” —OpenAI’s stated position [14:15]
- Exploring new ad formats: Potentially interactive ads you can “talk” to.
- Motivation: Only ~5% of ChatGPT’s 800 million users pay, so advertising helps cover the cost of free access.
Age Prediction and Minor Protection in ChatGPT
[15:55-22:12]
- OpenAI launches age prediction tech using behavioral signals (account age, usage time).
- Motivation: Mitigating harms after lawsuits and legislation around minors interacting with AI.
- Jeremy: “All these signals ... have to accumulate over time ... you don’t instantly when someone creates their account, know how long ... they’ll need to validate over time.” [19:29]
- Future: Preparing for adult mode (“NSFW” interactions) contingent on robust age gating.
- Ongoing debate on kids, screens, and AI’s effect on the “limbic system” and development: “If you look at how Zuck raises his kids ... Everybody who is at the frontier of this technology does not trust it near their kids’ limbic system.” [22:12]
Gemini and Global Expansion of AI-Enabled Education
[23:37-24:34]
- Google’s Gemini now offers SAT practice exams—potentially highly relevant for students.
- Baidu’s Ernie AI assistant reaches 200 million MAUs in China, showing strong market penetration.
3. Business & Applications
Big Drama: Thinking Machines Exodus
[26:50-30:32]
- Major founder exodus: Three co-founders and several employees jump ship, largely to OpenAI.
- Sparked by conflict (rumored workplace relationship issues, internal power struggles, failed $50B valuation target).
- Jeremy: “It’s not the same thing quite, but it rhymes with the Sam Altman firing scenario ... sort of funny ... second time Mira Murati has seen this happen.” [28:57]
- A Meta acquisition offer is rumored; the team is reportedly divided between selling and holding out for independent success.
U.S.–China Hardware Race
[31:20-41:25]
- Zhipu AI trains a new GLM image model entirely on Chinese Huawei Ascend chips and MindSpore software, evidence that China can build competitive models without relying on the Western hardware and software stack.
- “SMIC doesn’t have the same exquisite fabrication ... as TSMC, but [these chips are] really well designed.” [32:42]
- Caveat: The hardware is still a node behind, so efficiency becomes a survival imperative in China.
- U.S. Chip Fabs: Samsung’s Texas facility becomes key non-TSMC option as TSMC capacity bottlenecks escalate.
- Jeremy: “TSMC’s three times oversubscribed on 3nm and 2nm nodes ... Samsung starts to look really interesting. ... Intel has had some really bad problems with yields.” [37:14]
Data Center Arms Race
[41:25-44:35]
- xAI (Elon Musk) launches Colossus 2, the world’s first gigawatt-scale training cluster, outpacing OpenAI and Anthropic.
- Aggressive tactics: on-site gas turbines and Tesla Megapacks deployed in ways that bypass regulation.
- “Classic Elon Musk: Just break the law and then get away with it.” [42:50]
- “They control ... a supply chain of Tesla Megapacks ... this is a massive structural advantage ... the secret sauce.” [43:17]
Mega-Fundraising: Humans AI
[44:35-47:35]
- Human-centric AI startup ‘Humans’ raises a $480M seed round at a $4.48B valuation; founders hail from Anthropic, xAI, and Google
- Focus: Long-horizon, multi-agent reinforcement learning, collaborative/agentic AI
- Backed by Bezos, Nvidia, Google. “This is really like the who’s who in the zoo in Silicon Valley.” [47:35]
4. Projects and Open Source
Image, Video, and Music Model Milestones
[47:35-54:46]
- Black Forest Labs’ FLUX.2 Klein: Fast image generation/editing with sub-second response times on consumer GPUs. Raises the “how close are we to vision being solved?” debate.
- Molmo 2: Video understanding/grounding VLM from the Allen Institute for AI and the University of Washington; adds new datasets and data-centric approaches to object and action tracking.
- “They’re going after ... video language models that lack grounding ... releasing a bunch of datasets ... training the model to actually point to objects in videos.” [51:22]
- Heartmoula: Open music foundation model (audio, lyrics, text alignment) that narrows the gap with closed commercial offerings.
Agents & Agent Benchmarks
[54:46-56:38]
- AgencyBench: Benchmarks agents on 32 long-horizon, real-world tasks (90+ tool calls, 1M+ tokens, hours to complete).
- “This is notoriously difficult ... you can’t get humans to design tasks that long ... so autonomous task generation may become essential.” [55:54]
5. Key Research Insights
Efficiency in Transformers
Paper: STEM Scaling Transformer Embedding Models
[57:20-66:17]
- Replacing context-aware MLP layers with static, token-indexed embedding lookups cuts compute for large language models by roughly a third without serious loss of performance (see the sketch after this list).
- “You cut out matrix multiplication ... now you just do a lookup table ... much more efficient.” [57:20]
- Links to sparsity in mixture-of-experts, but is easier to train (“static sparsity”).
- “This isn’t a new idea ... hash layers for large sparse models were explored in 2021, now revisited at scale.” [63:38]
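To make the “lookup table” idea concrete, here is a minimal PyTorch sketch contrasting a standard transformer feed-forward block with a static, token-indexed embedding lookup. It illustrates the concept as described on the show, not the paper’s actual architecture; the module names and dimensions are illustrative assumptions.

```python
# Minimal sketch of the "static embedding instead of MLP" idea discussed above.
# All names and sizes are illustrative, not taken from the STEM paper.
import torch
import torch.nn as nn

d_model, d_ff, vocab_size = 512, 2048, 32000

class MLPBlock(nn.Module):
    """Standard feed-forward block: two matrix multiplications per token."""
    def __init__(self):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, hidden):                  # hidden: (batch, seq, d_model)
        return self.down(torch.relu(self.up(hidden)))

class StaticEmbeddingBlock(nn.Module):
    """Lookup-table variant: output depends only on the token id, so the
    per-token matrix multiplications are replaced by an embedding fetch."""
    def __init__(self):
        super().__init__()
        self.table = nn.Embedding(vocab_size, d_model)

    def forward(self, token_ids):               # token_ids: (batch, seq)
        return self.table(token_ids)

token_ids = torch.randint(0, vocab_size, (2, 16))
hidden = torch.randn(2, 16, d_model)
print(MLPBlock()(hidden).shape)                 # torch.Size([2, 16, 512])
print(StaticEmbeddingBlock()(token_ids).shape)  # torch.Size([2, 16, 512])
```

The trade-off is visible in the signatures: the lookup depends only on the token id, so it cannot adapt to context, but it removes the matrix multiplications that dominate feed-forward compute.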
Social Reasoning in RL-tuned Models
Paper: Reasoning Models Generate Societies of Thought
[66:17-73:13]
- RL-tuned models (e.g., DeepSeek, Qwen-32B) naturally show argument, perspective shift, and dialog-like behavior in chains of thought.
- “There’s a spontaneous emergence at a certain level of scale ... it seems like there are a bunch of different people talking to each other in the chain of thought.” [68:08]
- Experimental evidence via sparse autoencoders reveals interpretable “dialogueness” features (see the sketch after this list).
- Implication: Reasoning improvements stem from diverse perspectives, not just “more” reasoning.
- “It’s not about reasoning more, but reasoning broader... you might want to start looking at multi-agent training.” [73:13]
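For listeners unfamiliar with the interpretability tooling mentioned above, here is a minimal, generic sparse-autoencoder sketch in PyTorch. The sizes, training loop, and feature-inspection step are illustrative assumptions, not the paper’s actual setup.

```python
# Generic sparse-autoencoder sketch for finding interpretable features in
# model activations; illustrative only, not the paper's configuration.
import torch
import torch.nn as nn

d_model, n_features = 512, 4096        # illustrative sizes

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(d_model, n_features)
        self.decode = nn.Linear(n_features, d_model)

    def forward(self, acts):
        feats = torch.relu(self.encode(acts))   # sparse, non-negative features
        return self.decode(feats), feats

# `activations` stands in for hidden states collected while the reasoning
# model generates chains of thought (shape: [n_tokens, d_model]).
activations = torch.randn(1024, d_model)

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_weight = 1e-3                                # sparsity penalty strength

for step in range(200):
    recon, feats = sae(activations)
    loss = ((recon - activations) ** 2).mean() + l1_weight * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# One would then rank features by how strongly they fire on dialogue-like
# spans of the chain of thought versus elsewhere to surface "dialogueness".
```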
Why LLMs Aren't Scientists Yet
[73:13-79:40]
- Researchers made four attempts to have scaffolded agents autonomously write publishable research papers; most failed.
- Issues: Over-excitement (“Eureka instinct”), inability to spot methodological flaws, anchoring on old ideas.
- “We tried to get an LLM to do a fully automated research project four times and ... it worked once. That’s the alternate title.” [76:31]
- Key recommendation: Separate ideation from implementation to avoid recreating old ideas and biases.
6. Policy and Safety
U.S. DEFIANCE Act: Legal Protection from AI Exploitation
[79:40-81:34]
- Senate passes the DEFIANCE Act, allowing lawsuits over nonconsensual sexually explicit AI imagery; spurred by misuse of xAI’s Grok.
- “This is one of those rare cases where Republicans and Democrats might actually collaborate or agree on something.” [80:42]
- Still must pass the House; similar attempt stalled in 2024.
Gemini Deploys Activation Probes for Safety
[81:40-87:06]
- Google’s Gemini 2.5 Flash: Implements neuron activation “probes” for live detection of cyber-offensive prompts (see the sketch after this list).
- “Looking inside activations rather than just prompts is 'in vogue' now, because we can't trust chains of thought produced by LLMs.” [83:04]
- Multimax approach: Focus analysis on the most relevant tokens to improve performance on long and complex inputs.
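As a rough picture of what an activation probe looks like, here is a minimal PyTorch sketch of a linear probe that scores every token’s hidden state and max-pools over the sequence, echoing the “focus on the most relevant tokens” idea. It is a generic construction, not Google’s actual implementation; names and sizes are assumptions.

```python
# Generic linear activation probe with max-pooling over token scores;
# illustrative only, not Gemini's actual safety classifier.
import torch
import torch.nn as nn

d_model = 512                                   # illustrative hidden size

class ActivationProbe(nn.Module):
    """Scores each token's hidden state, then keeps the max over the sequence,
    so a single suspicious span can flag a long, mostly benign prompt."""
    def __init__(self):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, acts):                    # acts: (batch, seq, d_model)
        token_scores = self.scorer(acts).squeeze(-1)    # (batch, seq)
        return token_scores.max(dim=1).values           # (batch,) logits

probe = ActivationProbe()
# `acts` stands in for hidden activations captured from the serving model.
acts = torch.randn(4, 128, d_model)
flag_prob = torch.sigmoid(probe(acts))          # probability a prompt is flagged
print(flag_prob.shape)                          # torch.Size([4])
```

In practice the probe weights would be trained on labeled prompts; because a linear scorer over already-computed activations is cheap, it can plausibly run live at serving time, as described above.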
Anthropic Publishes Claude's Updated Constitution
[87:06-93:52]
- Anthropic releases its model constitution (after prior leaks), shifting from principle-based rules to broader attributes (“be safe, ethical, helpful”).
- “...less forceful and more letting Claude do what Claude wants.” [90:56]
- Meant for public transparency and input—not for direct user instruction.
- “If we have superintelligence someday, that constitution better be something that has received input from the general public.” [89:47]
- Some skepticism in tech press, but hosts generally endorse the effort as genuine and transparent.
7. Art, Culture, and Ongoing Backlash
[94:11-98:40]
- New “Stealing Isn’t Innovation” campaign: Hollywood artists and musicians push for industry and legal action over unlicensed AI training on creative works.
- Industry complexity grows as OpenAI, Anthropic, Disney, and the Wall Street Journal strike licensing deals, shifting the landscape for content creators.
- Ongoing cultural and political friction:
- “The emotional response ... has a lot going on. ... Some people just hate it. They just consider it gross.” [96:48]
- Even right-wing figures (Steve Bannon, Tucker Carlson) express skepticism about AI’s social impact, not just technical risk.
Notable Quotes & Moments
- Jeremy, on authoritarian risk: “If you buy into the superintelligence thesis ... this is almost like locked in and you need to start thinking about how you’re going to govern this.” [04:17]
- Andrei, on the power of surveillance over violence: “There’s also just like the passive stuff the government does ... all these little ways they can ratchet up the pressure.” [06:58]
- On the nature of ads in AI: “Are we going to basically just justify everything through that lens or not?” [14:28]
- “Limbic system” and parental caution: “Everybody who is at the frontier of this technology does not trust it near their kids.” [22:12]
- On the spicy founder exodus: “It rhymes with the Sam Altman firing scenario ... Sam Altman, then all the people say, okay, screw that, I’m leaving, going with Sam.” [28:57]
- Jeremy, on transformer efficiency: “You cut out matrix multiplication ... now you just do a lookup table ... much more efficient.” [57:20]
- On AI’s emergent social reasoning: “You find that when [the model] does that, you see better performance ... when you increase the number of AHAs and O’s.” [71:30]
- On cultural backlash: “The question at some point becomes, is it the consumer or producer that you prioritize?” [97:30]
Timestamps of Major Segments
- 02:00 — Authoritarian Risk & AI Safety
- 11:50 — ChatGPT Ads Launch
- 15:55 — Age Detection Safeguards for Minors
- 23:37 — Gemini, Education, and Chinese Market Expansion
- 26:50 — Thinking Machines Founder Exodus & Startup Drama
- 31:20 — China’s All-Domestic AI Model & Chip Race
- 41:25 — xAI’s Gigawatt Datacenter Coup
- 44:35 — Humans AI $480M Seed Raise
- 47:35 — Open Source Model Highlights (Image, Video, Music)
- 54:46 — Agent Benchmarks
- 57:20 — STEM Scaling Transformers with Embeddings (Research)
- 66:17 — Reasoning Models Generate Societies of Thought (Research)
- 73:13 — Why LLMs Aren’t Scientists Yet (Research)
- 79:40 — DEFIANCE Act (U.S. Policy)
- 81:40 — Gemini Activation Probes for Safety
- 87:06 — Anthropic Publishes New Claude Constitution
- 94:11 — Artists Launch Anti-AI “Stealing Isn’t Innovation” Campaign
- 98:40 — Closing Reflections on Cultural AI Backlash
Tone & Takeaways
This episode is a whirlwind of technical analysis, industry “gossip,” and sociopolitical critique, yet it stays accessible even for listeners who don’t keep up with every research paper or funding announcement. The hosts blend seriousness (e.g., on AI-enabled authoritarianism and policy) with humor (“limbic systems”, startup melodrama), providing a wide-angle yet actionable view of where the AI world is heading.
For more, visit Last Week in AI for the text newsletter.
