Podcast Summary
Latent Space: The AI Engineer Podcast
Episode: ⚡️ 10x AI Engineers with $1m Salaries — Alex Lieberman & Arman Hezarkhani, Tenex
Date: November 19, 2025
Host: swyx (Latent.Space)
Guests:
- Alex Lieberman (Co-founder, Tenex; formerly Morning Brew)
- Arman Hezarkhani (Co-founder, Tenex)
- Dan (Tenex engineer, brief cameo)
Episode Overview
This engaging episode centers on the emergence of "10x AI engineers": software engineers who, by leveraging the latest AI tools and workflows, can outperform traditional teams and command $1M+ annual compensation. The discussion covers the founding and operations of Tenex, the startup Alex and Arman co-founded to pioneer output-based engineer compensation. They dig into how AI is fundamentally changing software engineering, hiring for "AI leverage," the challenges of measuring engineer output, and anecdotes of projects dramatically accelerated by AI. The episode also touches on broader industry trends, the future of AI consulting, and cultural shifts in engineering.
Main Discussion Segments
1. The Origin of Tenex and Its "Output-First" Model
[01:29 – 04:44]
- Backstory:
- Alex and Arman met in 2020, first as investor and founder, and their relationship deepened through ongoing discussions about AI's impact on product development.
- Arman recounts how, after downsizing his previous startup’s engineering team by 90%, he rebuilt processes to be "AI-first" to maintain output.
- Alex’s skepticism turned to belief after witnessing a single engineer, equipped with LLMs, achieving remarkable throughput.
- Core Idea of Tenex:
- Reinvent software engineer compensation for the AI era:
- Move from hourly billing to output-based rewards.
- Compensate engineers at top-of-market rates, comparable to their productivity ("If you say you're $1,000/hr, people will laugh—but the value may demand it").
- Align incentives with output, not time (as with lawyers or consultants).
2. Tackling "Measuring Output" in AI-Powered Engineering
[05:41 – 08:26]
- The Measurement Problem:
- swyx: “What is a unit of output for a software engineer?... It's basically unsolved."
- Arman explains Tenex uses story points, with safeguards:
- Aware that story points are gameable, but long-term client relationships incentivize honesty.
- "This problem gets solved in the hiring process. You need to look for people who are long-term selfish." (Arman, 06:53)
- Also, hiring people who simply love writing code.
- Checks & Balances:
- Each client engagement involves two roles:
- AI Engineer and a Technical Strategist (incentivized on retention and quality, not just output).
- Technical strategist signs off on all client-facing plans to prevent sandbagging.
- No clients have complained about the fairness of story point allocations so far.
3. Bragging Rights — Project Spotlights and Compensation
[08:58 – 12:20]
- $1M Engineers:
- Arman: “We will probably have more than one engineer make a million dollars cash next year based on this [output-based] model.” (09:03)
- Flagship AI Projects:
- Retail Camera AI for Store Analytics:
- Retooled off-the-shelf and custom models to run in parallel on consumer hardware.
- Delivered heatmaps, theft detection, stock analysis in two weeks.
- “Previously, that alone... would take several quarters for robust teams of engineers.” (Arman, 10:45)
- Snapback Sports Mobile App:
- Built in a month, reached #20 in the App Store globally—no AI, just rapid development.
- Health/Fitness Influencer App:
- Lost the initial pitch, but delivered a prototype of “ChatGPT for health coaching” in 4 hours to win over the influencer.
4. The Tenex Tech & Agent Stack
[12:20 – 14:50]
- Technical Stack:
- “High structure allows for agents to work autonomously for longer.”
- Default: TypeScript frontend and backend, shared types, React, Express.
- Why TypeScript? It balances flexibility with constraint; strong typing lets coding agents (e.g., Claude Code, Cursor) self-correct more reliably.
- AI Coding Agent Evaluation:
- No "favorite" agent; team monitors model performance daily.
- “If I ask our team what model is best right now, they'll say, ‘Well, today at 4:42, Claude Code is better... but yesterday Codex outperformed...’” (Arman, 13:42)
- Model choice comes down to a "warrior's weapon" level of personal fit: feel and speed matter as much as benchmark evals.
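The "shared types" setup mentioned above can be sketched concretely. Everything below (interface name, field names, handler names) is illustrative, not from the episode; the point is that one TypeScript type serves as the contract for both the Express backend and the React frontend, so an agent editing one side gets a compile error if the other side drifts out of sync:

```typescript
// Hypothetical shared contract; in a real repo this would live in a
// shared/ module imported by both backend and frontend.
export interface StoryPointReport {
  engineerId: string;
  points: number;
  weekOf: string; // ISO date of the week's start
}

// Backend: an Express-style handler body returning the shared type.
export function buildReport(): StoryPointReport {
  return { engineerId: "eng-42", points: 13, weekOf: "2025-11-17" };
}

// Frontend: a consumer the compiler checks against the same shape.
export function renderReport(r: StoryPointReport): string {
  return `${r.engineerId}: ${r.points} pts (week of ${r.weekOf})`;
}
```

Because both functions reference the same interface, renaming or retyping a field anywhere forces every caller to be updated before the code compiles, which is the "high structure" that lets agents run autonomously for longer.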
5. Scaling Tenex: Human vs. Agent Bottleneck
[14:50 – 16:02]
- Limiting Factor & Recruitment:
- swyx: “Are you human-bound or agent-bound?”
- Alex: “Today it’s human-bound, 100%.” (15:11)
- Challenge is hiring and matching great engineers to business needs.
6. How Tenex Hires AI Engineers
[16:02 – 19:11]
- Take-home Challenges:
- “Our take-homes are unreasonably difficult.” (Arman, 16:09)
- 50% of candidates drop at this stage.
- Process: two short calls, rigorous take-home, quick decision cycle for qualified candidates.
- Favorite Interview Question (Arman):
- “If you had infinite resources to build a truly senior AI engineer, what’s the first major bottleneck you’d have to solve?” (17:13)
7. What Makes a "10x AI Engineer"? (Philosophy + Notable Exchange with Dan)
[17:49 – 21:26]
- Sample Answers:
- swyx suggests “model intelligence” as an answer: are the models actually smart enough?
- Arman: “Context engineering,” per Andrej Karpathy—the challenge is putting the right info in the model’s window and having models attend to it.
- Dan (Tenex engineer):
- Quote: “Controlling entropy. If there's a 1% error rate, it multiplies at each step, and that entropy builds up, derailing the agent. The bottleneck is making sure the agent can reduce entropy to avoid compounding errors.” (Dan, 19:47 – 20:51)
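Dan's compounding-error point can be made concrete with a back-of-the-envelope calculation: if each agent step succeeds independently with probability 1 − p, an n-step run finishes with no errors with probability (1 − p)^n. The independence assumption is mine, not Dan's; the sketch just shows how fast a "small" per-step error rate compounds:

```typescript
// Probability that an n-step agent run completes with zero errors,
// assuming each step fails independently with probability p.
function cleanRunProbability(p: number, n: number): number {
  return Math.pow(1 - p, n);
}

// Even a 1% per-step error rate degrades quickly over long runs.
for (const n of [10, 100, 500]) {
  const pct = (cleanRunProbability(0.01, n) * 100).toFixed(1);
  console.log(`${n} steps: ${pct}% of runs finish clean`);
}
```

With p = 0.01, roughly 90% of 10-step runs, 37% of 100-step runs, and under 1% of 500-step runs complete without any error, which is why "reducing entropy" mid-run matters more than a low headline error rate.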
8. Navigating AI Conferences as a Non-Technical Founder
[22:00 – 25:08]
- Alex asks swyx:
- “As a non-technical person... what would you do to get the most out of this conference?”
- Advice:
- Latch onto emergent keywords; observe new memes/terminology (e.g., ‘context engineering,’ ‘MCP’).
- swyx comments on the sociological phenomenon of new terms gaining traction: fundraising potential, community formation.
9. The MCP (Model Context Protocol) Debate (Lighthearted Segment)
[23:37 – 25:57]
- Arman:
- “MCP is a three letter word for API... what bothers me is when people create a new name and use that to raise money.” (24:09)
- Admits MCPs are useful, but dislikes marketing hype.
- swyx:
- Defends the value of new protocols but embraces debate as a way to drive industry forward.
Notable Quotes & Memorable Moments
- On AI Leverage:
  “…he had to decrease the size of his engineering team by 90%...rearchitect...to be AI-first...the output of production-ready software had 10x’d after making this shift…” — Alex, [02:23]
- On Hiring:
  “This problem gets solved in the hiring process. You need to look for people who are long-term selfish.” — Arman, [06:53]
- On Compensation:
  “We will probably have more than one engineer make a million dollars cash next year based on this model.” — Arman, [09:03]
- On “Feel” in Model Evaluation:
  “At a certain point, I think a warrior’s weapon becomes something of a feel...for a lot of these things it really is feel.” — Arman, [14:14]
- On Engineering Bottlenecks:
  “Model intelligence is going to be the main blocker.” — swyx, [17:56]
  “It’s about context engineering...” — Arman, [18:28]
  “Controlling entropy...even a 1% error rate will multiply and accumulate, derailing the agent more and more...” — Dan, [19:47]
- On Industry Meme-making:
  “…what bothers me is when people create a new name...because they know three-letter acronyms get investors excited.” — Arman, [24:09]
Timestamps for Key Segments
- Meet the Guests / Tenex Origin: [01:02 – 04:44]
- Output vs. Hourly Compensation: [03:40 – 05:23]
- How They Measure Output/Story Points: [05:41 – 08:26]
- Project Highlights & $1M Engineer Claim: [08:58 – 12:20]
- Technical Stack & Agent Selection: [12:32 – 14:50]
- Hiring Constraints & Talent Bottleneck: [15:11 – 16:02]
- Interview Philosophy: [16:09 – 17:49]
- Advanced AI Bottleneck Discussion: [17:49 – 21:26]
- Conference Navigation Advice (Non-technical Founders): [22:00 – 25:08]
- MCP Light Debate: [23:37 – 25:57]
Episode Tone and Dynamics
- Conversational, Candid, and Sharp:
- Friendly banter about name pronunciation and technical jargon.
- Self-deprecating humor regarding industry lingo (MCP) and take-home interview “sadism.”
- Mission-driven:
- Clear excitement about AI’s transformative power but healthy skepticism toward hype cycles.
- Strong advocacy for meritocratic, output-driven compensation and culture.
Conclusion
This episode offers a revealing look into how leading-edge engineering teams are evolving in the AI era: output over hours, self-imposed “impossible” standards, and the rapid, industry-wide redefinition of productivity and value. Listeners get tactical insights into engineering management, team building, project delivery, and the culture wars brewing underneath new memes like “10x AI engineers” and “MCP.” A must-listen for AI engineers, founders, and anyone tracking the intersection of human and artificial skill in software.
For more show notes and resources, visit: latent.space
