Transcript
Ian Fisher (0:00)
The world is changing so quickly. This is probably a little bit obvious, but you should just try things and, like, every day do something with AI. Last summer I took a weekend and used GPT-5 to help me build an iPhone app. I hadn't done that in a decade.
Podcast Host (0:17)
So fast.
Ian Fisher (0:18)
Yeah, it's so fast and so easy. And that was, you know, an age ago. That was like eight months ago. Now it's even faster and easier. Don't limit yourself. Like it's anything that you imagine. You should just try to use AI and see how far you can get with it and you'll be making the world better.
Podcast Host (0:40)
Welcome to another episode of the Light Cone. Ian Fisher is the co-founder and co-CEO of Poetic, which is building recursively self-improving AI reasoning harnesses for LLMs. Previously he spent a decade as a researcher at Google DeepMind and founded a mobile dev tools company that went through YC years ago. Welcome, Ian.
Ian Fisher (1:00)
Thank you. I'm so happy to be here.
Podcast Host (1:01)
What is Poetic? How's it different than RL? You know, how's it different than context engineering?
Ian Fisher (1:06)
At Poetic, what we're building is a recursively self-improving system. Recursive self-improvement is, you know, kind of the holy grail of AI, where the AI is making itself smarter. The core insight we had is that we could do recursive self-improvement far faster and cheaper than all of the other ways people had been proposing to do this. Obviously I can't go into details about our particular approach, but most of the approaches out there require you to train a new LLM from scratch. And training LLMs from scratch costs hundreds of millions of dollars and takes months of effort.
Podcast Host (1:49)
And then Anthropic or OpenAI will come along and just eat your lunch in the next model release.
Ian Fisher (1:53)
Right, right. And of course Anthropic and OpenAI and Google, they're exploring recursive self-improvement, but typically at that level of having to train a new model for every step of self-improvement
Podcast Host (2:07)
that they do. I mean, that seems like actually the defining thing that a startup really, really wants. Like, I know that I want to take advantage of whatever the next model is, but the second you're in fine-tuning land, I'm spending, you know, millions to hundreds of millions of dollars, and then guess what, I just lit it on fire because the next version of the frontier model comes out and I'll never catch up. Whereas working with your systems means that I will always have the thing that is better than the thing that's out of the box, and that's sort of the holy grail.
