Podcast Summary: Y Combinator – "The Fastest Path To Super Intelligence"
Date: February 27, 2026
Guest: Ian Fisher, Co-Founder and Co-CEO of Poetic
Host(s): Y Combinator Podcast Team
Main Theme
This episode delves into the rapidly evolving landscape of artificial intelligence, with a focus on recursively self-improving AI systems as embodied by Poetic, a YC company founded by Ian Fisher. The conversation explores how startups can leapfrog ahead in performance and cost-effectiveness by building agentic "harnesses" on top of frontier language models, ultimately accelerating the path to superintelligence without the need for massive resources.
Key Discussion Points & Insights
1. The Concept and Promise of Recursive Self-Improvement
- Ian Fisher introduces Poetic’s core mission: developing recursively self-improving AI harnesses that autonomously optimize and outperform the underlying large language models (LLMs).
- Unlike traditional approaches requiring expensive and time-consuming retraining of new LLMs, Poetic’s system builds agentic layers (“harnesses”) atop existing models for rapid, cost-effective improvement.
"Recursive self improvement is this... kind of the holy grail of AI, where the AI is making itself smarter. The core insight that we had is that we could do recursive self improvement far faster and cheaper than all of the other ways."
— Ian Fisher [01:06]
2. Cost, Performance, & Compatibility
- Fine-tuning custom models can cost millions, only to be quickly outdated by new model releases. Poetic’s approach sidesteps this by making harnesses instantly compatible with each new model.
- A single harness can exploit the latest frontier LLMs without intensive retraining or sunk costs.
“When the new model comes out, that same harness is perfectly compatible with it and you don’t need to change anything...”
— Ian Fisher [04:20]
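The "same harness, new model" idea can be sketched as a thin wrapper that takes the model as a parameter. This is an illustrative toy, not Poetic's actual system: `call_llm` is a hypothetical stand-in for a provider API call, and the model names are invented.

```python
def call_llm(model: str, prompt: str) -> str:
    """Stand-in for a real LLM provider call; here it just echoes
    the model name and prompt so the sketch is runnable."""
    return f"[{model}] answer to: {prompt}"

def harness(model: str, question: str) -> str:
    """A fixed reasoning strategy (decompose, then answer) layered on
    top of whichever model is supplied. Swapping in a newer model
    requires no retraining -- only the `model` argument changes."""
    plan = call_llm(model, f"Break this problem into steps: {question}")
    return call_llm(model, f"Using this plan:\n{plan}\nAnswer: {question}")

# The identical harness runs unchanged against a newer model:
answer_v1 = harness("frontier-v1", "What is 12 * 13?")
answer_v2 = harness("frontier-v2", "What is 12 * 13?")
```

Because all model-specific behavior lives behind one call site, upgrading to a new frontier model is a one-line change rather than a retraining effort.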
3. Benchmark Results & Breakthroughs
- Poetic gained industry attention by quickly surpassing results from top AI labs (Google’s Gemini and Anthropic’s Claude) on AGI benchmarks—including ARC AGI v2 and Humanity’s Last Exam—at a fraction of the cost.
“Gemini 3 Deepthink had just come out and they were really quite dramatically at the top... And two days later we released our results where we were showing that we could get a lot higher than that.”
— Ian Fisher [05:19]
- On Humanity’s Last Exam, Poetic achieved a state-of-the-art 55% (almost 2 percentage points higher than Anthropic’s latest Claude Opus 4.6) for under $100k.
"AI hasn't passed it yet, but we got to 55%... previous state of the art... 53.1% and we got 55%."
— Ian Fisher [06:44]
"The optimization costs us less than 100k."
— Ian Fisher [07:31]
4. Automation & The Role of “Harnesses”
- Poetic automates the creation and optimization of agentic harnesses—code, prompts, and reasoning strategies that sit atop LLMs—to reliably solve specific, hard problems.
- The system can optimize not just new agents but existing ones at founder companies, even targeting specific components such as prompts or reasoning logic.
“We can generate these systems in a much more automated manner, which means we can do it much more quickly and cheaply than if you hired a team yourself...”
— Ian Fisher [09:16]
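Automated harness optimization can be pictured as a search over candidate reasoning strategies, scored against a task set, with the best kept. The sketch below is a deliberately tiny assumption-laden toy: the prompt templates, tasks, and keyword-based scorer stand in for a real LLM plus grader.

```python
from typing import Callable

def make_strategy(template: str) -> Callable[[str], str]:
    """Turn a prompt template into a strategy: question in, prompt out."""
    return lambda question: template.format(q=question)

def score(strategy: Callable[[str], str], tasks: list) -> float:
    """Fraction of toy tasks whose expected keyword appears in the
    strategy's prompt -- a stand-in for running a model and grading it."""
    hits = sum(1 for question, keyword in tasks if keyword in strategy(question))
    return hits / len(tasks)

# Candidate harness strategies the optimizer searches over:
candidates = [
    make_strategy("{q}"),
    make_strategy("Think step by step. {q}"),
    make_strategy("List assumptions, then solve: {q}"),
]
# Toy evaluation set: (question, keyword the grader looks for)
tasks = [("2+2?", "step"), ("Sort [3,1]?", "step")]

best = max(candidates, key=lambda s: score(s, tasks))
```

The point of the sketch is the loop shape (generate candidates, evaluate, select), which is cheap to automate; a real system would evaluate with actual model calls rather than string matching.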
5. A Paradigm Shift Beyond Reinforcement Learning
- The Poetic meta system offers a new optimization paradigm, emphasizing reasoning strategies over brute-force fine-tuning or RL.
- Harnesses are optimized to extract more from each LLM call, shifting performance curves (S-curves) higher with each new generation of models and recursive improvements.
“As the Poetic meta system gets better, and as the underlying models get better, you'll find that the S curve... keeps shifting higher and higher until ultimately either you saturate like Reach AGI. Yeah, Reach SuperIntelligences.”
— Ian Fisher [10:47]
6. Outsourcing Data Understanding to AI
- Historically, ML practitioners hand-crafted data handling; now, Poetic’s AI system autonomously analyzes datasets and devises robust prompting and reasoning paths, displacing the need for manual intervention.
“We don't spend a lot of time looking at the particular data that we're working with. Instead we're letting the Poetic meta system look at that data... the AI's job [is] to understand the data set and figure out where are the failure modes and where are... robust reasoning strategies.”
— Ian Fisher [11:52]
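One piece of "letting the system look at the data" is automated failure-mode discovery: bucket results by some feature of each example and surface the worst bucket as the next optimization target. The sketch below uses invented toy data and a single hand-picked feature (a category label); a real meta system would propose the features itself.

```python
from collections import defaultdict

# Illustrative run results: (example_category, solved?)
results = [
    ("arithmetic", True), ("arithmetic", True),
    ("geometry", False), ("geometry", False), ("geometry", True),
    ("logic", True),
]

# Bucket outcomes by category.
by_category = defaultdict(list)
for category, solved in results:
    by_category[category].append(solved)

# Compute per-bucket failure rates and flag the worst one.
failure_rate = {c: 1 - sum(v) / len(v) for c, v in by_category.items()}
worst_bucket = max(failure_rate, key=failure_rate.get)
```

Here the loop, not an engineer, identifies that geometry problems fail most often, which is exactly the kind of signal a self-improving harness would feed back into its next round of prompt and strategy revisions.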
Memorable Quotes & Timestamps
- On Model Leapfrogging:
“That's what it's like to have stilts, you know, like whatever model comes out, you can be taller than that one with Poetics, which is like. That's so awesome.”
— Podcast Host [05:59]
- On Cost Disruption:
“So they were at 45% and like 70 something dollars and we were at 54% and $32 per problem.”
— Ian Fisher [06:13]
- On Automation vs. Manual Optimization:
“If you already have done that work... you can bring that to us and we can optimize that entire agent or pieces of that agent.”
— Ian Fisher [09:49]
- On Reaching Superintelligence:
“As the Poetic meta system gets better... you'll find that the S curve that you're dealing with keeps shifting higher and higher until ultimately either you saturate like Reach AGI. Yeah, Reach SuperIntelligences.”
— Ian Fisher [10:47]
- On Practical Startup Advice:
“The world is changing so quickly. This is probably a little bit obvious, but you should just try things and every day do something with AI.”
— Ian Fisher [18:28]
Timestamps for Important Segments
- [01:06] – Explanation of recursive self-improvement and Poetic’s differentiation from RL/fine-tuning.
- [04:20] – The "bitter lesson" and incompatibility of fine-tuning with rapid model progress.
- [05:05] – Real-world benchmark results: ARC AGI v2, cost, and capability jumps.
- [06:38] – Discussion of Humanity’s Last Exam results and associated costs.
- [08:37] – Technical depth: Automation of harness generation & optimization.
- [10:26] – Shifting AI paradigms: Beyond RL and pretraining.
- [11:52] – The meta system's autonomous data understanding and optimization versus traditional manual engineering.
- [14:53] – On Poetic’s early access for startups.
- [18:16] – Career trajectory: From mobile dev tools to Google/DeepMind to AGI research.
- [18:28] – Advice for engineers and founders in AI.
Engaging Moments
- The metaphor of Poetic giving agents “stilts”—allowing them to always stand above the frontier model—captures the essence of their approach and recurs throughout the episode.
- Surpassing benchmark leaders within days at half the cost, and outperforming state-of-the-art with a tiny team, underscores a disruptive shift in the industry.
- Ian’s practical journey from mobile development to DeepMind researcher reflects adaptability and the open frontier of AI for technical founders.
Final Advice & Takeaways
- Experiment Relentlessly:
“You should just try things and every day do something with AI... anything that you imagine, you should just try to use AI and see how far you can get with it and you'll be making the world better.” — Ian Fisher [18:28, 19:03]
- For Startups Seeking an Edge:
Poetic is seeking hard problems from companies ready to put their agents “on stilts”—applications for early access are open.
Summary
Poetic’s recursively self-improving harnesses represent a new paradigm for extracting superhuman performance from LLMs at startup speed and efficiency. By shifting the locus of innovation from simply scaling models to building and optimizing adaptive agentic layers, Ian Fisher and team are enabling startups to outpace even AI giants, democratizing access to "superintelligence"—one benchmark leap at a time.
