This Week in Tech 1079: "Fans. Only Fans."
Date: April 13, 2026
Host: Leo Laporte
Guests: Mike Elgan, Doc Rock, Jason Hiner
Theme: Major developments in AI (especially Anthropic and OpenAI), the growing hardware demands of AI, cybersecurity threats amidst political turmoil, global technology trends, and the ethics and influence of digital power brokers.
Episode Overview
This episode brings together veteran tech journalists and practitioners to dissect the week’s biggest tech stories. The panel focuses on the latest advances and controversies from AI companies, notably Anthropic’s new "too powerful" Mythos model; delves into OpenAI’s ethical and leadership scandals; discusses the monumental impact of AI on chipmakers and the supply chain; considers government and corporate power over technology; and explores global debates around tech sovereignty, privacy, and right-to-repair. The panel also reflects on being human in a rapidly AI-infused world, sharing personal anecdotes and concerns about accessibility, fairness, and the direction of innovation.
Key Discussion Points and Insights
1. Anthropic’s “Too Dangerous to Release” Mythos Model
[04:31] – [11:05]
- Anthropic previewed its new Mythos AI model to select large companies but withheld it from public release, citing security concerns—namely, Mythos’s apparent ability to uncover zero-day vulnerabilities.
- Leo (06:56): "We kind of [saw] this coming, but they didn’t release it to the public. They said it's too dangerous to release publicly."
- The panel discusses the precedent: OpenAI previously used similar messaging for GPT-2, leading to suspicions this is partly marketing.
- Jason (08:06): "It sets a good precedent... but it's mostly marketing because, from what I understand, they don't have the compute to run it publicly yet. They're rolling it out to a select group while scaling up."
- Mythos reportedly identified software vulnerabilities missed for decades, making open release a potential risk to under-resourced open source communities.
- Anthropic is selectively sharing access with 50+ major organizations to allow preemptive patching before publication.
- The AI release strategy sparks conversation about a future where only highly resourced entities can afford access to next-gen AI—which is already happening with tiers of service ($20 vs $200 plans, or even speculative $10,000/month access).
Notable Quote:
- Doc Rock (12:09): "If it's $10,000 and other people just can't get it at all... that's no different than what we're going to pay for a wine from Mike’s neighborhood."
2. Ethics, AI Superpowers, and Government Access
[13:33] – [21:34]
- Should powerful AI be released only to certain governments and agencies? Anthropic’s refusal to let the Pentagon use its model for lethal or surveillance purposes prompted the Department of Defense to classify the company as a supply chain risk, effectively blacklisting it.
- Raises big questions about whether AI companies should, or even can, restrict end-use; and whether governments should monopolize access to the most potent models.
- Mike Elgan (19:50): "AI is universal and so flexible it can be used for anything at all... so these companies have to define ethical stances not by product, but by policy."
- Doc Rock (16:42): "If AI is helping you find all the relevant legal precedent faster and a human verifies it, great. But if you just start trusting AI to make the decision..."
3. Is AGI Real or Just Hype?
[21:45] – [29:36]
- Panel largely agrees that the "spicy autocorrect" era is over: LLMs are now profoundly capable, not mere party tricks.
- What is AGI (Artificial General Intelligence)? Consensus is it’s a moving target and often a marketing term. Current models excel in narrow areas, not human-level generality.
- Mike Elgan (24:16): "AGI is AI that's better [than] all people at one thing. Superintelligent AI is better at all things. We're nowhere near that."
- Jason Hiner (26:06): "We’ll move past this idea of a single model to rule them all... The industry's realizing we're going to get highly specialized, domain-specific models, more analogous to human expertise."
4. The Ethics of OpenAI, Sam Altman, and the Future of Tech Power
[38:32] – [52:13]
- The New Yorker published a lengthy exposé of Sam Altman (OpenAI CEO), questioning his trustworthiness and ethics, particularly his habit of telling engineers one story and investors another.
- Mike (40:28): "The biggest transgression is, basically, he spent the first few years spinning a tale to engineers...and a very different one to investors."
- Jason (41:45): "The original mission was to create an open counterpoint to Google. But ChatGPT blew up, it became more commercial. I think the New Yorker narrative is a little too clean."
- Leo (44:26): "Shouldn’t we have a higher level of ethics for people controlling companies that could terminate mankind?"
- OpenAI's recently published "Industrial Policy for the Intelligence Age" (47:56) seems to propose progressive, even radical ideas (public wealth funds from AI, taxing automated labor, subsidized access) and is discussed as both sincere and perhaps virtue-signaling.
- The US is far behind China in its social and regulatory response to AI's societal impact; American politics make redistribution of AI's benefits unlikely.
5. AI Arms Race and Tech Industry Impact
[62:45] – [69:02]
- AI is triggering huge profits for hardware companies: Samsung had an 8x jump in profit; Nvidia and other chipmakers are riding the wave.
- Mike (62:45): "$37.92 billion in profit this quarter [for Samsung], a big chunk of the Korean economy."
- Merged entities (like Musk combining SpaceX and xAI) obscure losses and spending, with speculation on whether they can sustain their hardware and compute investments.
- Praise for NASA’s Artemis mission brings poignant reflection on what inspires collective human achievement, contrasting with today’s somewhat distracted public.
6. US Cybersecurity Cuts Amid Heightened Threats
[70:12] – [77:33]
- US government is cutting CISA’s (Cybersecurity and Infrastructure Security Agency) budget by $700 million, firing leadership and critical personnel amid open cyberwarfare with Iran, raising major alarm.
- Leo (73:33): "...school safety programs, facing more dramatic cyberattacks than ever before...now is not the time [for cuts]."
- Wave of recent supply chain attacks: infection of trusted tools (CPU-Z), hijacked Python/NPM libraries, high-profile ransomware attacks (Rockstar Games, GTA 6).
- The FBI reportedly retrieves deleted Signal messages from iPhones using notification data—recommendation to turn that feature off.
- ICE using spyware ‘Graphite’ from Paragon for zero-click iPhone exploits.
7. Tech Sovereignty & Right-to-Repair
[101:03] – [108:21]
- France is moving off Windows to Linux, citing dependence on US tech as a strategic risk. Other nations, notably China, and several EU cities are pursuing similar sovereignty.
- Mike (101:16): "It’s a trend. Germany, China as well."
- The OpenOffice to "Euro Office" transition triggers licensing disputes. Red Hat is criticized for a blunt whitepaper on military applications of Linux, later scrubbed (unsuccessfully) from the internet; the panel debates tech companies' responsibility in defense applications.
- John Deere settles a major right-to-repair case and must open up diagnostic tools, a big win for farmers and the movement at large.
8. Digital Power, Tech Wealth, and Gambling Culture
[109:38] – [145:29]
- US antitrust cases dwindle after Trump admin’s changes; concern that megamergers and digital monopolies are now easier.
- Rise of ‘prediction markets’ (e.g., Polymarket, Kalshi) where millions are bet daily on world events, wars, and even outcomes like "Will Jesus return by 2027?" Both financial speculation (Bitcoin) and gambling culture are described as predatory and corrosive to society.
- Mass adoption of digital ticketing (an 81-year-old Dodgers fan can’t attend a game without a smart device) raises critical issues of technological accessibility and the digital divide.
- Doc Rock (147:11): "They should know him at the stadium anyway and just let him in!"
9. Personal Reflections and Accessibility Concerns
- The panel acknowledges their own tech privilege while noting how exclusionary modern trends are for those who can't afford or use new tools.
- Mike (148:06): "There are 100% many things that are leaving people behind, even subtle ones. Just account setup can be so complex."
Memorable Quotes
- Leo (13:22): "AI might be a little bit different, because it gives you superpowers."
- Doc Rock (30:38): "We've seen it all. We're human. Damn it. As far as we know, we're human."
- Jason (26:06): "We’re moving past this one model to rule them all idea. The future is domain-specific intelligence."
- Mike (70:12): "We're cutting budgets on things that probably shouldn't [be cut]."
- Leo (144:31): "The rich get rich."
- Mike (140:38): "There is a growing class of incentives, like prediction markets and Bitcoin, which benefit nobody except the winners. No value is created."
- Doc Rock (147:46): "Let’s not forget them, let’s not leave [the non-techy] behind in this AI technology — especially at lightspeed things are moving right now."
Important Timestamps
- Anthropic Mythos model debate: [04:31] – [11:05]
- AI access & government controversy: [13:33] – [21:34]
- What is AGI? [21:45] – [29:36]
- OpenAI/Sam Altman ethics and company culture: [38:32] – [52:13]
- Hardware/Chip Profiteering from AI: [62:45] – [69:02]
- US cybersecurity budget cuts amidst attacks: [70:12] – [77:33]
- France ditches Windows; right-to-repair news: [101:03] – [108:21]
- Prediction markets & social harm: [138:54] – [145:29]
- Digital divide anecdotes: [146:13] – [148:06]
Show Notes & Further Reading
- Anthropic System Card: Mythos
- New Yorker: "Can We Trust Sam Altman?"
- OpenAI: Industrial Policy for the Intelligence Age
- France’s move to Linux in government
- Kalshi and Polymarket prediction market controversy (The Guardian)
- Right to Repair: John Deere settlement
Panel Plugs
- Mike Elgan: Machine Society AI newsletter — “A humanistic take on the AI-fueled future; champion for humanity against technology run amok.”
- Doc Rock: YouTube channel, director at Ecamm — “We just released probably our dopest upgrade ever, and now we’re off on the whole team trip to England.”
- Jason Hiner: Editor, The Deep View (an AI-focused newsletter with 500k+ subscribers and growing; offers deep dives, daily news, and a podcast)
- Chatterbox (Mike’s son’s company): HelloChatterbox.com — AI smart speaker for kids, teaches coding, privacy-first, used in schools.
Summary
This episode offers a deep, lively synthesis of the state of AI, battles over ethics, power, and access, as well as the real-world impacts of tech policy and business decisions. The conversation reflects both hope and concern: hope for the promising applications of AI and collaborative human achievement (like Artemis), but clear warning signs about growing inequity, diminished cybersecurity, and the dark side of technology when monopolized or leveraged primarily for speculation and control. Amid the rapid evolution, the panel consistently roots the discussion in the importance of ethical frameworks, broad benefit, and remembering the humans society risks leaving behind.