Prof G Markets: The AI Job Crisis Andrew Yang Saw Coming
Date: April 24, 2026
Hosts: Scott Galloway, Ed Elson
Guest: Andrew Yang
Main Theme:
Exploring the AI-driven job crisis, labor market disruptions, and potential policy solutions with 2020 presidential candidate Andrew Yang, who sounded an early alarm on automation and the need for Universal Basic Income (UBI).
Episode Overview
This episode confronts the reality of AI's accelerating impact on the labor market, a trend Andrew Yang had warned about years ago. Hosts Scott Galloway and Ed Elson engage Yang in a candid debate about whether job loss is truly spiking, which populations are most at risk, and what American policymakers—and Silicon Valley—should be doing. The conversation covers recent labor data, skepticism around layoffs attributed to AI (“AI washing”), and a detailed back-and-forth on potential solutions, including UBI, negative income tax, and the retraining and future of America's workforce.
Key Discussion Points & Insights
1. AI’s Real-World Impact on Jobs
(05:08 – 14:09)
- Current Layoff Data:
- 55,000 layoffs attributed to AI in the previous year; rising unemployment among recent college graduates (up to 5.6%, the highest outside pandemic years).
- Recent mass layoffs at Oracle (30,000 jobs) and other tech companies (Amazon, Pinterest, Block) are increasingly blamed on AI.
- “AI Washing” Phenomenon:
- Companies use AI as an excuse for layoffs that may also stem from overhiring or other factors.
- Not all layoffs are directly due to AI, but there’s clear executive intent to “do more with less” and eventually automate tasks currently performed by humans.
Andrew Yang (11:25):
“One of the conversations I relate to people was with the CEO of a publicly traded tech company, who told me flat out, ‘We’re going to fire 15% of workers this year, then another 20% two years from now...’ And he’s not alone. I’ve had maybe a dozen similar conversations with CEOs.”
- Labor Participation & Entry-Level Crunch:
- Labor force participation is declining; computer science grads are struggling to land jobs that were easy to secure a few years ago.
- The “canary in the coal mine” is arguably college grads, especially in white-collar, junior roles.
2. Debate: How Severe is the AI Job Crisis?
(14:09 – 19:23)
- Skepticism from Scott Galloway:
- Historical pattern: Every tech revolution sparks short-term job losses but eventually yields new jobs and higher productivity (e.g., the PC, automation in the auto industry).
- Unemployment is still low; where is the strong data that AI is truly decimating jobs?
Scott Galloway (12:39):
“Where’s the actual evidence that AI is destroying jobs?...It looks so far like it's being used to augment workers' productivity, which ideally would lend itself to higher wages.”
- Yang’s Counterpoints:
- Labor force participation stats obscure those who've simply stopped looking for work.
- Severe drop in entry-level hiring, particularly in fields previously considered secure.
- Companies openly redirecting layoff savings into AI infrastructure (data centers over offices), highlighting a structural shift.
Andrew Yang (16:25):
“The compute infrastructure is the new human being.”
3. Policy Solutions for an AI-Disrupted Labor Market
(19:23 – 26:00)
- Retraining Skepticism:
- Retraining programs for displaced workers have shown little effectiveness (0-15% efficacy). “Learn to code” is no longer relevant in an AI-driven world.
- Yang’s Proposals:
- Universal Basic Income (Freedom Dividend):
- UBI as a backbone policy to handle broad displacement (“just distribute the money as quickly as we can”).
- Also open to negative income tax, child tax credits, and direct youth employment subsidies.
- Targeted Incentives:
- Subsidies for companies hiring young workers.
- Destigmatizing and investing in trades and vocational programs; “no robot plumbers for the foreseeable future.”
- International Models:
- Praising German apprentice culture; the US needs more apprenticeships and less fixation on college degrees.
Notable Quotes:
Andrew Yang (21:38):
“Trying to train workers who've been laid off in various jobs to compete against AI strikes me as a loser... It's chasing moving goalposts.”
Scott Galloway (24:02):
“I don't think it's fair to say that those efforts (retraining) have failed and to give up on it. My sense is...an apprentice culture and more vocational training could have real ROI.”
- Discussion of Means-Testing vs. Universality:
- Galloway is wary of purely universal benefits ("I don't think anyone on this call should be getting [UBI]") and supports policies targeting the economically strained.
4. Historical Lessons: Did Previous Automation Waves Really Help Everyone?
(33:17 – 38:57)
- Key Lessons:
- The “it all worked out” narrative of past automation glosses over the costs: it fueled massive inequality and left many behind, especially in manufacturing and local journalism.
- Yang warns, “AI is to white collar work what the robot arms were to factory work.”
- CEO Confidentials:
- Yang recounts breakfasts with CEOs who are cutting staff dramatically, watching revenues soar, and admitting that “capital displaces labor—with the help of AI.”
Andrew Yang (36:59):
“That’s pretty much where the K shape started...the wealth inequality thing started to get really out of control. The AI equivalent...story’s going to be the same thing, except times a thousand.”
5. Silicon Valley Wakes Up to Inequality—and Their Own Unpopularity
(38:57 – 41:59)
- AI CEOs (Anthropic, OpenAI) Now Calling for Regulation & Taxes:
- Dario Amodei (Anthropic CEO) warned that half of entry-level white-collar jobs may disappear within five years, and called for taxing AI companies.
- OpenAI proposes a “New Deal” with capital gains tax hikes and redistribution policies.
- Yang is skeptical of the sincerity: “It’s very easy for them to say tax me, knowing full well it’s not going to happen.”
Scott Galloway (41:10):
“You’re talking about a group of people who would fuck their sister for a nickel. And we fall for this shit. Literally every couple years we have a new hero who says, regulate me in hushed tones and a t-shirt (Sam Altman) ... and then they deploy thousands of lawyers to get in the way of any regulation.”
6. What Policy Would Actually Work—And Pass?
(44:45 – 46:55)
- Negative Income Tax as Common Ground:
- Yang floats a negative income tax that “trues up” low earners to an income floor, saying he would be “thrilled with anything that alleviates poverty” (45:13).
7. Society At a Tipping Point: Social Unrest & Elite Self-Interest
(47:48 – 50:33)
- Growing Social Tension:
- Yang references recent high-profile violence and dangerous polarization; he predicts social unrest will push elites toward supporting redistribution out of “enlightened self-interest.”
- Not all tech elites are sincere; “some are already headed to the bunker.”
- Yang:
“We’re nearing that point...even if you don’t think these are tremendous human beings...Some kind of investment in the general public would not be a terrible move, if only just to make them look better.” (48:55)
Notable Quotes & Memorable Moments
- AI’s Impact on White Collar Work:
- “AI is to knowledge work what the robots were to factory work.”
— Andrew Yang (33:17)
- Cynicism About Silicon Valley’s “Tax Us” Rhetoric:
- “We keep falling for this notion that some Jesus-like figure from the technology sector is going to tax himself. It’s never happened.”
— Scott Galloway (41:10)
- The Case for Negative Income Tax as Policy:
- “If you make less than [$35,000], then we true you up to that level...I’d be thrilled with anything that alleviates poverty.”
— Andrew Yang (45:13)
- On Tech Layoffs and Reinvestment:
- “Oracle is making a $50 billion bet on AI and it needed the money...if we cut these workers, we’re gonna save $8 billion and plow that into data centers.”
— Andrew Yang (16:25)
Important Timestamps
| Topic | Timestamp |
|-------------------------------------- |------------ |
| Show start & setup | 01:50 |
| Framing AI’s labor threat | 05:08–07:58 |
| Yang explains his early warnings | 06:55 |
| “AI washing” vs. real job losses | 09:41–11:52 |
| Debate on real vs. perceived crisis | 12:39–19:23 |
| Policy solutions (UBI, negative income tax, retraining debates) | 20:44–27:07 |
| Data on college grads impacted | 31:10–34:52 |
| Inequality origins: manufacturing & information revolutions | 35:25–38:57 |
| Tech CEOs calling for tax/regulation | 38:57–41:59 |
| Means-tested vs. universal solutions | 44:45–46:55 |
| Social unrest & elite self-preservation | 47:48–50:33 |
| Real talk: Are Silicon Valley elites sincere? | 50:33–51:47 |
| Closing reflections & policy outlook | 53:47–64:21 |
Personal & Social Reflections
- Parenting in the AI Era:
- Yang shares his own challenges: his 13-year-old son tells him it would be easier to have an AI girlfriend; the family struggles with screen time and the psychological effects of digital life.
- “If you show me a household where the kids aren’t on screens, they have a much, much better shot at flourishing.” (60:07)
- Yang’s Current Initiatives:
- Promoting his new company, Noble Mobile, which incentivizes less screen time; supporting independent, reform-minded Senate candidates.
- Legacy & Influence:
- Both Galloway and Elson credit Yang for mainstreaming the idea of direct economic support and believe his moment for national policy influence has arrived.
Concluding Thoughts
This episode pits optimism against alarm as AI’s effect on the labor market shifts from theory to reality. Andrew Yang’s dire predictions are playing out—especially for young, college-educated workers—while the political establishment and the tech elite struggle to keep up with a crisis that’s no longer hypothetical. The roundtable acknowledges that sweeping solutions like UBI may not yet be politically feasible but sees strong momentum for targeted policies. In the end, the episode calls on policymakers to move fast with creative, large-scale solutions—or risk deeper division, unrest, and long-term socio-economic harm.
Listen if you want:
- A BS-free debate on whether AI is truly destroying jobs—right now.
- Honest discussion about who is getting hurt, who stands to benefit, and what needs to change.
- Policy ideas from the man who brought guaranteed income to the mainstream.
- The inside scoop on how Silicon Valley’s attitudes are shifting as their own popularity nosedives.
“The question is not if AI will cause labor market dislocation, but how we choose to respond—and who we choose to protect.”