Transcript
A (0:00)
AI is continuous and systems are discrete. Humans fundamentally think a little bit more in terms of systems and, you know, predictability and reliability, consistency, than they do nondeterminism.
B (0:13)
In the past, large companies that received lots of money were basically naturally rate-limited by engineering speed. These frontier labs don't have that problem. They can literally just raise money and build a model based on the money. Like, they just throw more compute, more data at it, which kind of raises the question: what is going to end up limiting them?
A (0:34)
If you're building an agent and you're using pre-trained models or whatever, it's a fool's errand to think that the agent you're building, the way that you're providing context to it, all that stuff, is durable. That is not engineering and it shouldn't be engineered; it should be bitter-lesson-pilled, meaning you should build that system in a way that you can throw it away tomorrow. Right now we are kind of building, you know, like God. And so it's possible and probably economically viable to keep throwing capital at the problem to make God 1% smarter. But when you can't make God 1% smarter, there is like an insane opportunity to engineer God to be more efficient.
C (1:15)
The AI industry keeps reaching for brute force. Frontier labs throw compute at training runs instead of optimizing. Developers give agents a Unix environment instead of structured tools. Teams chase the latest model instead of engineering the one they have. But the pattern Ankur Goyal keeps seeing is the opposite: the companies shipping AI products that actually work aren't the ones using the smartest models. They're the ones with the best engineering around the models: the evals, the feedback loops, the testing harnesses. This conversation covers where that discipline matters and where it doesn't. The cycle between open-source and closed-source models. Why Chinese models show high token volume but low dollar spend. And a benchmark comparing Bash versus SQL for agents, with results Goyal calls comical. Martin Casado speaks with Ankur Goyal, founder and CEO of BrainTrust.
B (2:04)
I think people watching this will know you, but let's just very quickly go through background, mostly just to set the stage because I want to talk a lot about whether AI is actually a systems problem or not.
A (2:12)
Great.
B (2:12)
So do you mind just kind of giving the rough sketch?
A (2:14)
Yes. Nice to meet you, Martin. I'm Ankur. Prior to BrainTrust, I, you know, back in ancient history, used to work on relational databases. And way before LLMs, I saw deep learning come out and become a thing, and started to get excited about how the way that we query and work with data, which was primarily SQL, is not as powerful as what we could do. And so I started a company almost 10 years ago now called Impera, where we did...
