The Prof G Pod with Scott Galloway — "Why CEOs Are Getting AI Wrong"
Guest: Ethan Mollick
Date: February 12, 2026
Podcast Network: Vox Media Podcast Network
Host: Scott Galloway
Notable Guest: Ethan Mollick, Professor at the Wharton School, author of Co-Intelligence, and writer of the newsletter "One Useful Thing"
Overview
In this episode, Scott Galloway sits down with Ethan Mollick, a leading thinker on the impact of artificial intelligence (AI) in business, creativity, and education. Galloway and Mollick dissect how CEOs are misunderstanding AI, debate the realities versus rhetoric of existential AI risks, and provide a practical roadmap for individuals and companies to harness these tools effectively. The conversation ranges from real-world workplace disruption and productivity, to the educational fallout from AI, to the wider societal and policy implications.
Key Discussion Points & Insights
1. Existential Risk and CEO Catastrophizing (13:40–17:17)
- Scott Galloway questions the sincerity behind AI CEOs issuing dire warnings (e.g., Dario Amodei, Anthropic), wondering aloud if such gloom is “virtue signaling” to drum up the importance (and valuation) of their companies.
- Ethan Mollick: "I think that there's always debates, right? ... I think Anthropic is fairly sincere about their views about how AI works. You may or may not agree with them. ... It always is a question of weirdness that you're building this thing if you're so worried about it. But I think it is a sincere anxiety." (14:09)
- Mollick emphasizes most AI risks today involve disruption at work, education, and society—not just science fiction “superintelligence.” He wants to guide near-term adoption so AI helps people thrive, rather than cause harm.
2. Are CEOs Inflating AI’s Threat? Adoption Inside Companies (16:02–18:17)
- Galloway proposes that CEOs may catastrophize to boost funding and public perception about their AI's world-changing value.
- Mollick acknowledges some hype exists, but also points to data: many individual workers already use AI to great effect, but “companies are not seeing it” because employees are secretive due to fear (of layoffs, for example).
- “...About 50% of American workers use AI. They report, by the way, three times productivity gains on the tasks they use AI for. They're just not giving that to companies. Because why would you? Like, you're worried you'll get fired if AI shows that you're more efficient.” — Mollick (17:17)
3. Where Does AI Genuinely Move the Needle? The Jagged Frontier (18:45–22:29)
- Productivity leaps are real but uneven—coding and scientific writing see huge gains, but organizational structure lags.
- “The big picture, overestimation and underestimation, is: work is complicated and organizations are complicated. ...If that's producing 10 times more PowerPoints than you did before, that's not necessarily going to translate to any actual benefit for the company.” — Mollick (18:45)
- AI is a “jagged frontier”: excels in coding, data analysis, and writing, but can be weak on context, memory, or judgment.
4. Guidance for CEOs: What Works, What Doesn’t (20:10–22:29)
- Successful companies take an "R&D" approach—letting employees experiment and crowdsource AI use cases, backed by clear leadership direction.
- Notable Quotable: “The most successful cases I'm seeing are a combination of what they call leadership, lab, and crowd. The leaders of the company have a clear direction set... they give the crowd, everybody in the organization, access to these tools ... then they have an internal team that is actually thinking about what you build.” — Mollick (21:00)
5. Explaining “Agentic AI” and the Practical Tech Stack (22:29–25:57)
- Galloway asks about “agentic AI,” highlighted by tech leaders like Marc Benioff.
- Mollick breaks it down: “Agents basically can be defined as an AI that's given access to tools. So it can do things like write code, search the web... when given a goal, can autonomously try and accomplish that goal on its own and correct its course if it needs to.” (22:52)
- For individuals: Mollick recommends starting with one of the “Big Three”—OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude. “Pay $20/month, and spend 8–10 hours talking to [the AI] like a person and seeing what results you get.” (24:28)
6. The Competitive Landscape: OpenAI, Google Gemini, Anthropic Claude (25:57–30:15)
- All three are leading, and the race is tight. The “scaling laws” mean the biggest models tend to win, which requires huge financial and technical resources.
- Differentiation between models is minimal for entry-level users, but they have distinct "personalities" at the edge:
- Anthropic: Best writer, stricter on ethics.
- OpenAI: “Straighter” responses, has both conversational and highly logical work models.
- Gemini: “Weirdly neurotic,” apologetic if you criticize it.
7. Are Models Converging? The Challenge of Sustained Differentiation (30:15–32:29)
- Galloway: Models seem to converge to parity because they copy each other.
- Mollick: “It's not a regression of the mean because there's no dropdown of ability level. The ability levels keep going up. But all of the companies in the space are on roughly the same development curve.” (30:53)
8. Chinese AI “Dumping” and Global Geopolitics (36:31–38:06)
- Galloway raises the idea that China is “AI dumping” by offering competitive models cheaply, similar to past steel industry strategy.
- Mollick: Chinese “open weights” models can be downloaded and used for free, which doesn't always make sense for profitability but may have state-driven logic.
9. True Bottlenecks to AI Progress: Data Centers, Power, Research (38:06–41:06)
- Critical constraint is infrastructure: “data centers are the sort of choke point. ...Can I get enough chips to put in one? ...For a while, data was the bottleneck. ...AI companies have increasingly found that they can make their own data.” (38:51)
- Power and policy are now the major limiters.
10. AI’s Impact on Jobs, Productivity, and Corporate Valuations (41:06–45:14)
- Galloway: AI valuations require either “massive labor force reductions, or valuations will come down.”
- Mollick: Optimistically, AI could expand what companies can do, not just cut costs. But he worries about “the lack of imagination in corporate America, where the model is, ah, great, we could just keep cutting down our number of people because AI does the work, as opposed to how does everyone work as a manager. ...And the failure of imagination there makes me very nervous.” (41:06, 42:50)
11. Is Higher Education Obsolete? AI and the College Experience (47:08–50:00)
- Galloway: "AI will replace college" is mostly empty rhetoric; applications and tuition keep rising.
- Mollick: AI is disrupting “how” students work (e.g., widespread cheating on essays) but isn’t making universities obsolete.
- “All the signaling associated with papers... is scrambled by AI-generated content and it makes it very hard for you to tell whether it's crap or not without a lot of effort.” (52:27)
12. AI in Academia and Research—Personal Experience (50:00–52:11)
- Mollick describes AI as now essential in all aspects of his academic work: writing, grading, research. It makes him dramatically more efficient.
- Notable: “The AI is a better grader than me. But as of yet, I haven't let it do grading because my students expect me to.” (50:19)
- As a researcher: throws full papers into ChatGPT to find errors even he missed.
13. Medicine, Drug Discovery, and Where AI Brings Hope (53:33–56:25)
- Galloway: Could AI make healthcare a universal winner, as with vaccines—for the public, if not for companies?
- Mollick: “Medicine is an incredibly exciting area. ...There’s hope for agentic systems to autonomously do directed research in the near future, which will lead to a flood of new discoveries. ...On the doctor side, second opinions—you obviously should be using an LLM for a second opinion.” (54:15)
- But adoption is slower in highly regulated/litigious fields.
14. Will AI’s Value Go to the Public, Not Just Corporations? (56:25–59:19)
- The biggest winners from AI could yet be the public, not just a small group of firms, especially as “open weights” models (Chinese, Mistral, etc.) put powerful tools in everyone’s hands.
- “Any frontier company... can destroy the market anytime they want, given the condition that they release their models open weights.” — Mollick (57:51)
- Explains “open weights”: once a model’s trained parameters (its weights) are released, anyone can download and run it without paying the developer.
15. Parenting and the Next Generation’s Skills (59:19–61:54)
- Galloway asks Mollick, as a parent, if AI changes how he’s prepping kids for the future.
- Mollick: “I want them to pick jobs that are diverse, where they do many different tasks in case AI takes some of them. But I also want them to do what they love... Preparing resilient kids who are self-reliant and have some ability to, shoot, improvise is more important than ever.” (60:00)
16. Catastrophizing, Policy, and What’s Next (61:54–66:01)
- Galloway: Are fears of AI apocalypse overblown? Do you have a go-bag?
- Mollick: “I want some people catastrophizing because that's what governments should be doing. ...But I also think that preparing resilient kids who are self-reliant and have some ability to, shoot, improvise is more important than ever. ...Our goal should be to guide things in the best direction that we can right now.” (62:59)
17. Advice to Young People and Aspiring Academics (65:34–67:50)
- Careers are long and non-linear; don’t try to overplan.
- “There's no perfect moment, there's no perfect skill set. ...It's an evolutionary, exploratory process. ...The idea of being flexible, of trying different things, of experimenting, of getting your own skills out there and using your own agency to try and find a path forward is the way to go.” — Mollick (66:01)
Memorable Quotes & Moments
- On hidden AI adoption at work: “They're not using the corporate AI. ...They're just not giving that to companies. ...You're worried you'll get fired if AI shows that you're more efficient.” — Mollick (17:17)
- On company preparedness: “Nobody knows what's going on, right? ...We're a thousand days in after the release of ChatGPT. ...There’s not a playbook out there.” — Mollick (20:52)
- On the danger of unimaginative cost-cutting: “The failure of imagination there makes me very nervous.” — Mollick (42:50)
- On the “Jagged Frontier” of AI: “I coined this great term to describe AI called the Jagged Frontier ... encapsulates how AI is really good at certain things, but really bad at others.” — Galloway, referencing Mollick (20:10)
- On education: “The value of education, especially professional education, goes up because I teach people to be generalists ... but that's all broken down ... If you're an intern at a company ... you absolutely were using Claude or ChatGPT and just turning those answers in to people because it's better than you at your job.” — Mollick (47:52)
- On building resilient children: “I'm an anxious parent (who can't be?), but I also think that preparing resilient kids who are self-reliant and have some ability to, shoot, improvise is more important than ever.” — Mollick (61:54)
Notable Timed Segments
- Anthropic CEO’s AI warning & risks: 13:40–14:38
- AI at work: employee use, productivity, and secrecy: 17:17–18:17
- The “Jagged Frontier” concept: 20:10–20:52
- Practical AI tools for the average user: 24:02–25:57
- AI model “personalities” and differentiation: 28:39–30:15
- Chinese AI “dumping” question: 36:31–38:06
- Jobs, labor market, and AI’s real impact: 41:06–45:14
- AI in academia and research, future of peer review: 50:00–53:33
- AI in medicine and stakeholder value: 53:33–56:25
- Parenting & preparing for the AI future: 59:19–61:54
- Career advice for young professionals: 65:34–67:50
Takeaways & Closing Thoughts
- AI productivity gains are real and substantial, but often hidden from corporate leadership due to organizational friction and employee self-interest.
- Existential risks are debated, but most disruption is happening in “nitty gritty” ways at work, in education, and processes—not (yet) at the level of sentient AI taking over the world.
- CEOs should prioritize R&D, experiment with broad employee access, and not just focus on layoffs or cost savings—imagination is the real bottleneck.
- For individuals, don’t get lost in the AI hype or jargon—start using the best available tools (ChatGPT, Gemini, Claude) and spend time learning through real work.
- In education, AI is forcing rapid adaptation but is unlikely to replace universities; instead, it may make formal education more important as informal apprenticeship paths erode.
- The value of AI could be broadly distributed to the public, especially as open source/open weights models proliferate.
- Preparing for the AI future = focus on resilience, flexibility, and diverse skills—not just narrowly “AI-proof” jobs.
Final thought from Ethan Mollick on careers:
“It’s never easy, and I’ve been lucky in a lot of these choices, but ... thinking about how you want to take your next step on your own rather than following a predefined path can be very useful.” (67:28)
