
B
Welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.
C
And today we have, we're very honored to have, the founders of Applied Intuition. Qasar and Peter, welcome.
A
You guys really know how to turn on podcast mode. You guys are real, real pros at this. They were just joking around right before this and then they flipped it pretty quick.
B
Oh yeah, it's good to have you guys. Maybe you just want to introduce yourselves so people know the voices on the mic.
D
And I'm Peter Ludwig. I'm the co-founder and CTO of
A
Applied Intuition. And my name is Qasar Younis. I am the CEO and co-founder with Peter.
D
Nice.
B
Can you guys give the high-level overview of what Applied Intuition is? I was reading through some of the congressional materials from when you went out there, Peter, and 18 of the top 20 global non-Chinese automakers use you guys. You have customers in agriculture, defense, construction. I think most people have heard of Applied Intuition tied to YC when it was first started, and then you were kind of in stealth for a long time. So maybe just give people the high-level overview of what it is today, and then we'll dive into the different pieces.
D
Yeah. So Applied Intuition, our mission is to build physical AI for a safer, more prosperous world. And so we work on physical AI for all different types of moving systems, everything from cars to trucks to construction and mining equipment to defense technologies. And we're a true technology company. So we build and sell the technology and we sell it to the companies that make the machines, we sell it to the government. Really anyone that wants to buy technology to make machines smart.
A
Yeah. And I think in the broader AI landscape, a lot of the focus, rightfully so, in the last three years has been on large language models. Everything fits on a screen, whether it's code completion products or things like that. What's different about us is we're deploying intelligence onto a lot of things that don't have screens. They're physical machines. There are sometimes screens within the cabin of, for example, a car or a truck, but most of the value we provide is putting intelligence into safety-critical environments. Those two words are really important, because learned systems can make mistakes. That's fine if you're asking for something like, tell me about these podcast hosts I'm about to go meet. But you can't do that when, as an example, we run driverless trucks in Japan right now, as we speak. You can't have errors there. Those are L4 trucks.
B
Was that always the mission? I remember initially people put you and Scale AI in a very similar bucket, as being kind of on the data infrastructure side of things. What was the evolution of the company?
D
Well, from the very beginning we always wanted to be a technology company that helped generally push forward the industrial sector. We started off working in autonomy. Our very first customers were robotaxi companies, and we started off doing a lot of work in simulation and data infrastructure. Then over the years we've expanded our portfolio. Now we have over 30 products, and it's a pretty broad technology play within the landscape of physical AI.
A
Yeah, I think the Scale comparison comes up because we're all YC universe companies, but it was a very, very different company. Scale was, and is, more of a services company, a data labeling company. Fundamentally, we started with and still do a lot of tooling. Developer tooling is now in vogue again thanks to the AI boom, but honestly, 10 years ago it was out of vogue. Doing a tooling company in 2016, 2017 was not the thing to do, because, I don't know if you remember this, but VCs' general view was that tooling is just workflows, and workflows ultimately are not really interesting. We've kind of come full circle on that. But when we started the company, tooling was in the periphery of what the company wanted to be. From our earliest days it was: we want to deploy software on physical machines, on cars and on trucks and things like that. Now, obviously we didn't know that the transformer boom was going to happen. We didn't know that autonomy systems would become end to end. And why that's important: with end-to-end autonomy systems, those models can be generalized to multiple form factors. So back nine, 10 years ago, tooling was a great way, and still is a great way, to build technology and sell technology to our end customers, a lot of whom want to build the stuff themselves. So we offer a spectrum of solutions, from using just one part of a development suite of tools all the way to buying the full thing. The way to think about the company, or at least the way we think about the company, is, as Peter said, a technology provider. It's kind of like what NVIDIA or AMD does, except we don't do chips, we don't do silicon. But we're a technology provider fundamentally. We used to joke when we started the company that we're not the guys to build Instagram. That's just not us, in the most fundamental way.
D
I mean, you have thoughts on that? Yeah, well, I
A
think it's just not what we're drawn to. I mean, we worked on Maps and stuff, Google Maps. Consumer products are extremely difficult for a lot of different reasons. It just, I think, doesn't scratch the itch. I think we're Michigan guys who are more of that traditional engineering realm or lineage.
D
I've got to say, what was clear 10 years ago was that there was so much more that was possible with software and AI in vehicles. That was generally the space we started in 10 years ago, and on the precise path we've taken over the years, I think we've been strategic and we've adjusted to make sure that we're actually building stuff that's valuable to the market. And the technology has changed so much. Our own technology stack has completely changed roughly every two years. So now we've probably done, let's say, four complete evolutions of our own technology stack, and I see that cadence roughly keeping up. The way we think about engineering is almost on this two-year horizon. We're preparing ourselves: we want to invest the appropriate amount, but then also be very dynamic as the research gets published and as our research team figures out new advancements, and adapt to that.
A
Yeah, one thing that has been consistent is the type of people we've recruited. Frankly speaking, it's engineers who fall into the sometimes very traditional, you know, Google lineage, but way different from other companies. We are hiring folks who really know the intersection of hardware and software, who know really low-level systems. Obviously traditional ML researchers, and folks who actually put ML systems into production. That's been pretty consistent. You look at the mix of our engineering: 83% of the company is engineering. So it's like a giant...
C
A lot of engineers. Which, by the way, a thousand engineers, I mean, that's on your website. So I imagine it's out of date.
A
It is, it is up to date. Yes, yes.
D
Okay.
C
And then 40 plus founders.
A
Yeah, and we've also, this was more luck than strategy, recruited a lot of ex-founders. It's been a great place for founders, YC and non, and obviously I know a lot of the YC folks. And we recruit a lot of Google people, for them to exercise both their technical and non-technical skills, because we're on the applied side. We have a research team that does fundamental research; we publish and we've had great traction there. But fundamentally the business wants to take this intelligence and deploy it into production, and there's a certain type of person that's more interested in that.
C
Yeah, you mentioned the tech stack, Peter, so I just wanted to give you some rein to go into it. I'm interested in where Applied Intuition starts and ends, in some sense. What won't you do? What do you do that's common among all the verticals that you cover?
D
There's a few buckets of work that we do and we've been at this for almost 10 years now. So the technology is pretty broad.
A
Well, when you've got a thousand engineers, you can work on lots of stuff.
D
There's lots of stuff, yeah. Especially with AI tools now. So we got our start in simulation tooling and infrastructure. Generally, if you're trying to build a very complex software system that involves moving machines, you need to test it, and the best way to test it is a combination of virtual development, meaning simulation, and then obviously real-world testing. And then there's a very careful process of correlation between the simulation results and the real-world results, ensuring that the simulator is in fact accurate. Simulation is a very deep topic; we have a whole suite of products in that, and we could talk for many, many hours about that specifically. But that is one part of what we do as a company. Reinforcement learning, as a subpart of that, is also super critical. I think a lot of the best advancements happening in AI systems right now in some way relate to reinforcement learning, and now we have lots of compute, so you can do tons of interesting things there. The second bucket of work we do is operating systems technology. I mean true operating systems: think schedulers and memory management and middleware and message passing and highly reliable networking and data links. The reality is, if you want to deploy AI onto vehicles, you need a really good operating system. And when we were getting deeper into that space, there wasn't really anything we were happy with. Things existed, absolutely, and we were using what was available in the market. But as an engineering organization we roughly realized: these things aren't great, we think we can do this better, so let's build something. That was the moment of inspiration that started our operating systems business, which is now a very real business for us. In order to write and run great AI, you need a great operating system, and that's what got us into it. And then the third bucket we work on is true fundamental AI technology: models. We do a lot of work in, as I mentioned, the foundational research, but then also the world models and the actual autonomy models that are running on these physical machines, across cars, trucks, mining, construction, agriculture and defense. So that's land, air and sea.
A
And a smaller subsector of that third bucket is the interaction of humans with those machines. That's a multimodal experience. Historically, if you're operating a dirt mover or any of these machines, there are buttons you press, whether actual physical tactile buttons or something like a touchscreen. That's fundamentally changing, to where you're just talking to the machine and teaming with the machine.
C
Voice.
A
Yeah, voice, absolutely. And also the machine just being aware of who's in the cabin and what their state is. You can think of it from a safety-systems perspective. The simplest version of this is: the driver is tired, right? You get those alerts when you're driving your car, maybe take a coffee break. Now take that a couple of orders of magnitude up. This concept of teaming man and machine is important. When you think about running agents, or just running different instances of Claude doing work for you in the background, you can take that analogy, almost copy and paste it, and put it into a farm where you have a farmer who's running a number of machines. Where they interact with the machine is where there's maybe a critical decision or a disengagement or something. But generally speaking, the agent on the physical machine is running and making decisions on behalf of the farmer until there's something critical. And that's also what we work on. So that's not pure autonomy, it's a bit of a mix, but it falls under autonomy. In the automotive sense, that's typically defined in SAE levels as an L2 system, where you have a human in the loop. Just take that idea to other verticals.
B
You have not mentioned hardware at all, like sensors. Obviously, you mentioned you don't do chips. And even in AV there's the big cameras-versus-lidars debate. What are some of those design decisions you've made in your space, and are they driven by the OEM's ability to put things on the machinery? How much influence do you guys have on co-designing those?
D
Yeah, so we don't make sensors; we're not a manufacturer. Obviously we use a lot of sensors in our autonomy products. In terms of what actually goes on the vehicles, we have a preferred set of sensors that we, let's say, fully support, and our customers can choose from those. And if there's a very strong opinion on supporting something else, we will add that to the platform as well. The lidar question is at this point sort of the age-old topic in autonomy, and the state of the industry right now is: lidar is hands down a useful sensor, specifically for data collection and the R&D phase of autonomy development. If you see, for example, a Tesla R&D vehicle, it actually has lidar on it to this day. In the Bay Area you'll see Model Ys or Cybercabs with lidars on them, just driving around. It's useful because it gives you per-pixel depth information. You can pair a lidar with a camera and say, this camera is looking this direction, this lidar is looking this direction, and now for each pixel of the camera I can see how far away that pixel is. You can use that as part of your model training, and that depth information becomes a learned capability of the camera data. Then, for the production system, you can remove the lidar and actually get depth with just the camera. That difference between a highly sensored R&D vehicle and the down-costed production vehicle, we use across our whole portfolio of products. And of course the end goal is super low cost and super reliable. Then in certain use cases you have more bespoke things. In defense, as an example, you often do things at night, so you care about sensors like infrared. And you don't want to be putting energy out, so you don't want to use lidar or radar, but you still need to be able to see at nighttime. So yeah, we work with the whole gamut.
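To make the camera-depth distillation Peter describes concrete, here's a minimal sketch, assuming projected lidar returns act as sparse per-pixel depth labels for a camera network during R&D, so the production system can drop the lidar. The tiny network, shapes, and data are illustrative stand-ins, not Applied Intuition's actual stack.

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy monocular depth network; a real one would be far larger."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())  # depth > 0
    def forward(self, image):
        return self.net(image)

model = DepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for one batch: camera frames plus lidar returns projected into
# the image plane. Lidar is sparse: most pixels have no return.
images = torch.rand(4, 3, 96, 128)
lidar_depth = torch.rand(4, 1, 96, 128) * 80.0   # meters
valid = torch.rand(4, 1, 96, 128) > 0.95         # ~5% of pixels have a return

pred = model(images)
loss = (pred - lidar_depth).abs()[valid].mean()  # supervise only where lidar saw something
opt.zero_grad(); loss.backward(); opt.step()
print(f"masked L1 depth loss: {loss.item():.3f}")
# In production the lidar is removed and depth comes from the camera alone.
```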
B
Cool. So that's the hardware level. Then on the OS level, what does that look like? What's unique? I mean, I drive a Tesla, and whenever I drive some other car that has a screen, it always sucks. It's some cheap Android tablet, it's laggy and all of that. What does the OS of the autonomy future look like?
D
It's really like what you just described. When you think about the operating system in a vehicle, you're thinking about the HMI, right, the human-machine interface. And absolutely that's an important part of it, but it's actually only one thin layer on top. When we talk about operating systems for AI in vehicles, there are many, many layers that go deep into the safety-critical realm and embedded systems. You're talking about the real-time control of, let's say, the electric motors or the engine and the actuators. You have different redundancies for, let's say, the steering actuation in the vehicle. All of these things need very core support in the operating system. And then of course for autonomy, you have real-time sensor data streaming in, and the latencies there are really important. If you imagined trying to run Microsoft Windows, streaming your sensor data in or controlling the vehicle, the latencies would be absurd. You could never do that. So what's special about what we do is we really have this system-level thinking. We care about every performance characteristic of the entire system, and because we're doing a lot of that software, or all of that software, we can fine-tune and control all those things. We can very carefully tune the latencies for every aspect of the system. We can carefully tune the memory management. We can have the right fail-safes and fallbacks for different things, because you have to account for: what if there is a critical failure? What if there's a cosmic ray that flips a bit in the middle of the processor and causes some malfunction? You have to have a fail-safe for all of that. So the core operating system is part of it. And then one last thing, which is a lot less exciting but is actually a very big topic: reliability of updates. I have a Tesla and you get updates fairly frequently, once a month. Most companies that are making vehicles are basically never doing updates. And even if they are, they're usually only updating one module, maybe the HMI module, but they're not able to update, let's say, the safety-critical parts of the system; you have to go into the dealer for that. With our operating system, we can enable highly reliable updates of any system in the vehicle. And that's way easier said than done. There's a lot of technically deep stuff in the tech stack to do that in a way that you're not going to accidentally brick a vehicle.
B
That'll be bad. Yeah.
D
Bricking a car is very expensive. And honestly, across the industry, maybe one of the most purely impactful things we've done is that we're now enabling the industry to actually do software updates.
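For flavor, one widely used pattern for updates that can't brick a device (not necessarily what Applied Intuition ships) is an A/B slot scheme with a health-checked rollback. A toy sketch, with everything simplified; real automotive OTA adds image signing, ECU orchestration, and watchdog-enforced rollback in hardware:

```python
import hashlib

class Slots:
    """A/B partitions: flash the inactive slot, boot it once, commit only
    if a post-boot health check passes, otherwise fall back to known-good."""
    def __init__(self):
        self.images = {"A": b"v1-image", "B": b""}
        self.active, self.trial = "A", None

    def flash_inactive(self, image: bytes, expected_sha256: str):
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            raise ValueError("image corrupt, refusing to flash")
        inactive = "B" if self.active == "A" else "A"
        self.images[inactive] = image
        self.trial = inactive                       # boot this slot exactly once

    def boot(self, health_check) -> str:
        slot = self.trial or self.active
        if self.trial is not None:
            if health_check(self.images[slot]):
                self.active, self.trial = slot, None  # commit the update
            else:
                self.trial = None                     # roll back to known-good
                slot = self.active
        return slot

slots = Slots()
new_image = b"v2-image"
slots.flash_inactive(new_image, hashlib.sha256(new_image).hexdigest())
print("booted slot:", slots.boot(lambda img: img.startswith(b"v2")))
```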
C
Just to clarify as well, who is the customer for this? I assume a lot of hardware manufacturers have their own firmware, and I'm sure some of them would just have you write it for them because you're experts and others would have their own. Who pays for this? Who invites you into the house? Is it the end user or is it the manufacturer?
D
Yeah. So let me make an analogy, firstly, on the fragmentation of software. Physical machines today are more akin to the state of the phone market before Android and iOS existed. I worked on Android at Google, by the way, many years ago. Part of the reason that Larry at Google decided to get into Android was that they wanted to run Google products on a bunch of phones, and they bought all of these phones from the industry, and it turned out there were like 50 different operating systems on them. It was virtually impossible for Google to make their app run on all 50 devices equally well. And so the solution was: what if we created a really great operating system and made it attractive to all of these phone makers? That was sort of the genesis of what Android was and why Android existed. It was a way for Google to get their products onto a really wide diversity of devices. The state of the physical industry right now is a little bit like that. Yes, these companies have firmware, but they have so many different operating systems. It's so fragmented. To actually get a modern AI application to run on these vehicles, you first have to consolidate the operating system, and so that's what we've done. And then your specific question, who are our customers: it's generally the companies that are making these machines. We're selling our technology to them to really simplify the architecture and then enable these applications to run on them.
C
How much is reusable across verticals? Do you have one OS that is just configured for everything, or is there more customization needed?
D
Yeah, highly reusable. The fundamental technology is quite universal. Things we do have to think about, though, are like chipset support. If you're coding, let's say, an LLM and you start with the assumption that, hey, I'm going to use CUDA and run this on an NVIDIA chip, then you don't really have to think about the hardware in that sense. You're in the CUDA NVIDIA ecosystem and you're going to use that. But the hardware, especially in safety-critical systems, is a lot more diverse. There aren't just one or two flavors; there are a bunch of different chipsets we have to support. So our operating system doesn't just run on, like, the equivalent of x86; it has to run on a number of different architectures, on chips from a bunch of different companies. But again, we've been working on this for a long time now, so we have support for all of those chipsets. And then when you want to run the AI applications, we can do that reliably across a variety of providers.
A
And that is heavily inspired by Android, right? Android has a huge suite of testing, and it's a reliable operating system that runs on thousands of devices. We think we can do the same on all these physical moving machines, with the difference that we're really in a safety-critical realm and Android isn't.
B
So on Android I don't need to use Gmail, I can use Superhuman. What about your machinery? Can people bring somebody else's automation to it, or is it kind of
D
like all in one, you have to use us? No,
A
yeah, yeah, it's totally open. Yeah, yeah.
D
Our philosophy is that we are a technology company, and we license our technology to customers to use how they want. So if a customer wants to license our autonomy tech and our operating system, great, we'll license those. If they just want to license the operating system and use different autonomy tech, that's fine also. And we have great documentation when you use the developer tooling.
C
Yeah, exactly. It's better together, obviously, if they're together. Is it all C, I assume, with different compile targets?
D
We use a lot of C. I mean, Rust is sort of the new hot kid on the block for a bunch of things as well. But yeah, the lower level you get, especially when you get to real-time constraints, you hit C at some point, and at some point you maybe work your way into assembly when needed.
C
Oh, damn.
B
I'm curious about coding agent adoption, since you're mentioning more esoteric languages. What's the adoption internally? What have you learned?
D
Yeah, we use everything. Cursor was, I think, the hottest tool in the company for a good while; now Claude Code, I think, has taken the reins. We have an internal leaderboard that we use just to encourage adoption within the company. And yeah, they're phenomenally useful. Honestly, we take inspiration from some of those tools too, in how we're adapting that mindset of thinking to the physical realm. If it's so easy to build an app for this or that thing that lives on a screen, we're taking a lot of those same ideas and applying them to: okay, if you wanted a physical machine to do something, how easy can we make that using our own tooling and platform?
B
Are you changing any of the OS architecture kind of like the way you expose services to be more AI friendly?
D
Yeah, absolutely. In the early days of our tools and infrastructure work, you had engineers who were experts in certain topics, but the things you're dealing with are oftentimes more mathematical or more abstract, where GUI tools are actually very useful for certain things. As an example, we have a product we call Sensor Studio, which helps you design the sensor suite for your autonomous vehicle. Again, it could be a car, could be a drone, could be mining equipment, could be a robot. You place sensors in different places, there are different sensor libraries, and you can understand the trade-offs you're making in the design of that system. That was a very GUI-intensive thing, because it's a little more like a CAD tool in that sense. Nowadays, though, we expose all of the underlying APIs for that, and using AI agents you can actually configure a sensor suite with just text, and likely reach a better result than you could have through the GUI in the past. And we're taking that thinking through the whole product portfolio.
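As an illustration of what driving a sensor-design tool through an API rather than a GUI might look like, here's a hedged sketch; the schema, field names, and coverage metric are hypothetical, not Sensor Studio's actual API:

```python
from dataclasses import dataclass
import json

@dataclass
class Sensor:
    kind: str        # "camera" | "lidar" | "radar" | "infrared"
    yaw_deg: float   # mounting direction on the vehicle
    hfov_deg: float  # horizontal field of view

def horizontal_coverage(suite) -> float:
    """Crude trade-off metric: fraction of 360 degrees seen by >= 1 sensor."""
    covered = set()
    for s in suite:
        start = int(s.yaw_deg - s.hfov_deg / 2)
        covered.update(d % 360 for d in range(start, start + int(s.hfov_deg)))
    return len(covered) / 360

# A config an AI agent might emit from a text request like
# "front camera plus two corner radars":
spec = json.loads("""[
  {"kind": "camera", "yaw_deg": 0,   "hfov_deg": 120},
  {"kind": "radar",  "yaw_deg": 45,  "hfov_deg": 150},
  {"kind": "radar",  "yaw_deg": -45, "hfov_deg": 150}
]""")
suite = [Sensor(**s) for s in spec]
print(f"horizontal coverage: {horizontal_coverage(suite):.0%}")
```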
C
Another thing I was thinking about is just in terms of AI adoption, does that Change your hiring at least a little bit or how do you sort of manage engineers differently?
D
Yeah, absolutely, it does. I think, like every company in the Valley right now, we're evolving our hiring practices, because the skills required to be effective are changing so fast. You used to really select for rote implementation ability, and now it's more the AI engineer skill set: yes, you know how to implement, but just banging out code is no longer the core job. It's knowing what questions to ask, knowing how to tie together these different AI tools. The interviews we give now are, I think, way harder than they've ever been, but we also allow selective use of AI tools to solve the problems. And in that you start to see more of a bimodal distribution of engineers. There's this subset of people who really get it; they're all in and they've clearly invested the hours needed to learn these tools and how to be effective. And then there's the group of people who haven't done that, and the productivity gap is just enormous. So we're obviously trying to select for the people who are really into this.
C
I first wrote my AI engineer piece three years ago, and when I wrote it I said actually not everyone should be an AI engineer, because there's an extremist stance where every software engineer is an AI engineer. And my actual example of people who should not be adopting AI was embedded systems and operating systems and database people. Are they adopting AI?
D
I think it's the classic bitter lesson topic: six months ago I would have said the same thing, but it's becoming super useful for every domain. Six months ago, or maybe a year ago, if you tried to use, let's say, the latest Claude model for writing shaders, GPU shaders, the results were probably underwhelming. And if you use the latest model now for that kind of task, you're a little blown away, like, wow, that actually worked. That's amazing. We see the same thing in the embedded realm. No question, though, especially when you get into safety-critical systems, the human validation is 100% key. You're not going to trust your life to AI-written software that hasn't been very carefully checked by humans. So I think the real challenge now is about the appropriate level of human validation for these safety-critical systems.
B
Touching on the simulation side: verifiable rewards and reinforcement learning are the hottest thing. What have you done internally to build around that, and what makes you sleep at night if somebody's just vibe coding something or wants to try something new? Do you have a good enough system? Because I think the opposite is also true: if it's super easy to write anything, then it puts a lot of work on the verifiable side of it. What does that look like for people?
D
Yeah, verifiability, or the broader bucket of evaluations: how do you evaluate the results you're getting? I think this is probably the hardest problem right now, because as the models get better, it can be harder and harder to find the faults in the system. So the problem of doing proper evals to find those faults keeps getting harder as the models get better, but it's no less important than it's ever been. There are still going to be edge cases that are not met. So it's a big area of investment for us. On the reinforcement learning topic, the key thing is that there are all these new requirements that come to be in the latest generation of these technologies. For example, end to end is the big thing right now in autonomy and physical AI: you can now train models that effectively take sensor data in and put control signals out, and get really good results. But the way you train and improve those models is really different from the previous generations. To do reinforcement learning on an end-to-end model, you now need to actually simulate all the sensor data. We call our work in this space neural simulation; think of it like a hybrid of Gaussian splatting and diffusion methods, where you really care about performance. Performance is everything. If you can't do enough simulation fast enough and cheap enough, you can't get results that are worthwhile in the end. It also ties back to a lot of our work in embedded systems, which is performance-critical work. That performance optimization, that performance criticality, carries over to a lot of the model training work, because the only way to make it affordable is for it to be really fast.
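A minimal sketch of the loop Peter describes, where the environment must synthesize sensor data for an end-to-end (sensors-in, controls-out) policy. The "neural sim" below is a trivially differentiable stand-in for the splatting/diffusion renderers he mentions, and everything is toy-sized:

```python
import torch

class NeuralSim:
    """Stand-in 'neural sim': real systems render frames with splatting or
    diffusion models conditioned on the action; this just nudges pixels so
    the loop is runnable end to end."""
    def reset(self):
        self.frame = torch.rand(3, 64, 64)            # fake camera frame
        return self.frame
    def step(self, control):
        self.frame = (self.frame + 0.01 * control.mean()).clamp(0, 1)
        lane_error = (self.frame.mean() - 0.5).abs()  # toy reward signal
        return self.frame, -lane_error

# End-to-end policy: raw pixels in, [steer, throttle] out.
policy = torch.nn.Sequential(
    torch.nn.Flatten(0), torch.nn.Linear(3 * 64 * 64, 2), torch.nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

env, returns = NeuralSim(), []
obs = env.reset()
for _ in range(20):                                   # one short rollout
    obs, reward = env.step(policy(obs))
    returns.append(reward)
# Naive objective: maximize summed reward. Backprop through the rollout
# works here only because this toy sim happens to be differentiable.
loss = -torch.stack(returns).sum()
opt.zero_grad(); loss.backward(); opt.step()
print(f"rollout return: {-loss.item():.4f}")
```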
A
I think it's worth a few minutes talking about our own evolving thoughts on verification and validation, from traditional simulators, which you can think of as vehicle dynamics or something like that, just taking textbook formulas and putting them into software, to now this neural sim, world model universe. I think that's an interesting topic. Yeah.
D
So in more traditional development, you oftentimes had more black-and-white answers to questions. In Europe, as an example, there's a regulatory system called Euro NCAP, the European New Car Assessment Programme, and as part of it the vehicles have to pass a bunch of tests. Those tests include safety systems, so automatic emergency braking for a child that runs in front of a car, or, let's say, an occluded child that runs out. You end up with binary answers: did the car under test pass this specific test? There's a very well-known set of test cases the vehicle has to pass, and that was how the industry worked until, let's say, 10-ish years ago. What's changed now is that with these models, everything is statistics. You no longer have a black-and-white answer; it's, well, how many orders of magnitude, how many nines of reliability can I get in the system, and how can I prove that to be true? And the big unlock, honestly, for physical AI as an industry is that these models are just becoming much more reliable. Things actually work a lot better. The number of nines you can get out of these systems is now good enough that it becomes cost effective to really deploy these things. So the big shift in verification and validation has been from strict requirements, are you meeting them or not, to a statistical verification and validation case, where it's all about how many nines of reliability, mean time between failures, that sort of thing.
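To put rough numbers on "how many nines," here's a worked example using the standard rule of three for zero-failure test campaigns; the trial counts are made up for illustration:

```python
import math

def nines(n_trials: int) -> float:
    """95%-confidence nines of reliability after n failure-free trials,
    via the rule of three: the failure-rate upper bound is ~3/n."""
    return -math.log10(3.0 / n_trials)

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} failure-free trials -> ~{nines(n):.1f} nines at 95% confidence")
# Each extra nine needs roughly 10x the testing, which is why cheap,
# fast simulation matters so much for statistical validation.
```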
C
And is the target audience regulators or even the customers? I imagine the customers are bought in and it's mostly regulators that need to be satisfied.
D
We do work with the US government, we do work of course with the European governments and the government of Japan. And the government is not an AI lab by any means.
C
So they just care about the outcome.
D
They care about the outcome. So we do education in that regard, teaching: hey, this is how we think validation should be done, this is an approach we think is reasonable, and here's how to think about when a driverless system is actually safe enough to go on the roads, that sort of thing. But I wouldn't say the government is asking for it; we're more teaching the government. Honestly, it's more for our own comfort.
A
Right.
D
We want to build very safe systems. And then of course our customers care deeply about that as well. But in that context, we're also typically educating our customers.
A
Yeah. I mean, our first core value is around safety. So I can't underline enough that verifying and validating for ourselves that the systems we're deploying are safe is probably as important as some regulator or a customer saying,
C
you know, of course, yeah, you have to satisfy yourselves.
A
Yeah.
D
As a whole, across the world, regulation is oftentimes almost a lowest common denominator. You really have to substantially exceed what the regulators are expecting to make good products.
C
One thing I often talk about, and I try to make this relatable to the audience, is Cruise, where they had an accident that basically ended the company. I wonder if people overreact to single incidents, because incidents are going to happen regardless; it's a statistical thing. I don't know if regulators understand that you cannot extrapolate from a single incident. But we do, because that's all we have to go on. And your sample sizes are necessarily going to be lower than consumer driving.
A
I think the Cruise example wasn't a technology failure. The real compounding issue was how the company talked to the regulators and what their behavior was. I think that became more of the issue, if you look at it.
D
Well, it definitely was a technology failure, but it was made much worse by...
A
Yeah, yeah. Let me put it another way: there is a version where Cruise still exists. Right, right.
C
It was like the last straw at the end of a long chain of things.
A
Yeah. Like Uber ATG had that horrific accident where someone actually died; that was a homeless person crossing the street. So I think we can't state enough that statistical validation is one part of it, but it's not the only part of it. Consumer and, let's say, mainstream adoption of these technologies is also going to be part of that conversation. I think companies like Waymo are doing the industry a lot of positive service, in the sense that they're setting a high benchmark and showing, in a very responsible way, how to deal with these. There have been Waymo incidents as well; they've just not been as significant as the Cruise one you mentioned. So I think you'll just continue to see that. The long-term question is really going to be around: it is very clear humans are way worse drivers, statistically. There's no debate. So at what point do we accept that? We're emotional animals.
C
Yeah. So my thing is, we have to get to a point as a society where we accept horrific accidents that would never be caused by a human, because statistically we understand that it is safer overall. In the same way that planes are safer; I think they're the safest mode of transport that we have.
D
Yeah.
A
I mean it's more dangerous to drive to the airport than it is to
D
get on the flight. So if you're ever.
C
Yeah.
A
If you're ever getting nervous about getting on a plane, just think I just
B
got to get to the airport.
A
If I get to the airport, I'll be good.
C
But then planes also concentrate tail risk. And if planes... Yeah.
D
And I don't think we honestly have to worry about there ever being accidents from these systems that are much worse than what humans would cause. Because humans do terrible things. People fall asleep at the wheel all the time.
C
I have, I'd be the drowsy driver,
D
kind of like drunk drivers. And that's the extreme end of the example. But with these AI systems, you have redundancies, you have fallbacks. Many, many things have to go wrong for there to be something catastrophic, because there are so many fallbacks in these systems.
A
Yeah.
B
I mean, your simulation surface is so vast because there are so many use cases. What are maybe things that worked in simulation, and then you put them out and it's like, fuck, this just did not work at all?
D
Maybe there's a bit of a misconception about simulation there, so let me go a little more technical. At first, no simulation is going to represent the real world. There's always a process of sim-to-real matching, where you need the real-world feedback to feed into the parameters being used in the simulator. And you have to run that validation flow a number of times until you can get some confidence that the simulator is accurately representing what's going to happen in the real world. Now, if you have a situation where you've done that full validation, you thought it was accurate, and then something is different, those are much trickier cases, and that absolutely can happen. But the validation process is a really important part. You can never skip the simulation validation process, where you're ensuring that, hey, my sim-to-real gap here is small enough that I can trust these simulation results. And there are so many fun things you can do when you get into it. I'll give one fun example that came up recently: in these humanoid robotics systems, overheating actuators is a real problem. Obviously there are phenomenal demos; I love watching robots do acrobatics like everybody, but these systems actually overheat. One of the ways you can use simulation is to have the temperature of those actuators be one of the parameters represented in the simulation. Then, if you're doing reinforcement learning on a certain task, the robot can adjust its motions in the simulation to account for the fact that as it's moving, it's beginning to overheat this motor. But if you didn't have that parameter, the heat of that motor, represented in the simulation initially, then your RL policy will disregard it, and when you run it on the robot, the robot will overheat and fail.
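A toy version of the overheating-actuator example: fold motor temperature into the simulated state with a first-order thermal model, and penalize it in the reward, so a heat-aware policy beats a greedy one. The thermal constants are invented for illustration, not from any real robot:

```python
class JointSim:
    """First-order thermal model of one actuator; constants are made up."""
    AMBIENT, LIMIT = 25.0, 90.0                # degrees C
    def __init__(self):
        self.temp = self.AMBIENT
    def step(self, torque: float) -> float:
        # Heating ~ torque^2 (I^2 R losses), passive cooling toward ambient.
        self.temp += 2.0 * torque**2 - 0.1 * (self.temp - self.AMBIENT)
        task_reward = torque                    # pretend torque = task progress
        overheat_penalty = max(0.0, self.temp - self.LIMIT)
        return task_reward - overheat_penalty

# Compare a greedy policy with one that backs off as temperature rises;
# with temperature in the simulated state, RL can discover the second
# behavior on its own instead of overheating on the real robot.
for name, policy in [("greedy", lambda t: 2.0),
                     ("heat-aware", lambda t: 2.0 if t < 70 else 0.5)]:
    sim, total = JointSim(), 0.0
    for _ in range(200):
        total += sim.step(policy(sim.temp))
    print(f"{name:>10}: return {total:8.1f}, final temp {sim.temp:5.1f} C")
```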
B
I guess the question is, how do you have all of these parameters taken care of while also understanding the deployment environment? Temperature is a great example, right? Well, why did you make my robot worse when it runs in a freezer, where it actually shouldn't worry about that? How do you design these simulations?
D
This is honestly what makes simulation so hard, right? Simulation is fundamentally about trying to optimize the development of a system. How can I build the system faster and better and cheaper, and what are all the levers I have to accomplish that? And because simulation is just a software program, you can change it a lot more easily than you can hardware systems. What's particularly awesome about world models as part of simulation is that the simulation doesn't just scale with, let's say, adding new math equations; we can actually scale the simulation environment with additional real-world data. And that unlocks a whole new field of robotics.
A
There is a line you cross where still doing real-world testing is better. With the sim-to-real gap, you can reproduce reality at exceedingly expensive cost; nothing is free. So really you're finding the line where you're getting great performance and great feedback, whether on the training side or the eval side, but it's way cheaper than doing it in the real world. At some point, that stops making sense. So even from our earliest days in autonomy, our view was: you're still going to do real-world testing. There's no magical land where you won't. Maybe a more nuanced version of this, like traditional software development: most of your testing for software on a vehicle, 95% of it, can be traditional CI/CD kinds of flows like you'd have in web development. Then, say you have a truck, you can do like 4% of those tests on a rig, which has all the components, the electrical and electronics of a truck, but doesn't have the tires and doesn't have the physics. And then you have the 1%, which is the actual vehicle. There's a similar analogy for using simulation for intelligent systems. You can do a lot in a simulator and using world models, but ultimately it's physical AI, so you're going to deploy it on physical machines. And the freezer example comes to light.
B
The world model thing has been, to me, the hardest thing to wrap my head around.
C
I'm doing a small series with another "intuition" company, General Intuition, as well. Yeah. I mean, lots of coverage of NeRFs,
B
and yeah, it's like when we talk about the heliocentric system, right? In a world model, if you just feed in visual data, the model might learn that the sun spins around the Earth. It makes sense, right? And it's like, well, not really. What are some of these other things like that? Hydroplaning is one I think about: can a world model understand hydroplaning and what amount of water causes it to happen? To me, I don't understand how you guys do it. I guess the real question is, when you're doing both cars on the highway in Japan and the excavator in a mine in Arizona, wherever you're deploying, how much are you relying on the world models to generate the simulations for you and then trying to close the gap after, versus giving the world models as a tool to your engineers to curate the simulations, if that makes sense?
D
Yeah, totally. At a pure engineering level, if you're hoping to do real-world deployments and you're purely relying on a world model approach, you probably won't get to something that works before you go bankrupt. So there's a very practical mindset: world models are amazing and extremely useful for a lot of use cases, but there are a lot of other things you need to do to actually get something started, deployed, and working. Most fundamentally, world models are about understanding the world, but also understanding what's going to happen; it's the cause-effect relationship. So if you take some sort of construction tool, and that tool is going to be doing some work on the earth, moving earth, the world model needs to understand that cause-effect relationship: when I take this material from here and put it over there, now I have things that are over here and not over there anymore. And that cause-effect relationship data obviously is a big problem. The hydroplaning one is actually a really great example, because it's quite non-obvious sometimes. It's raining, and this road has, let's say, the appropriate curvature to it, so the water is running off the road and cars are driving faster here. Then you approach a road that's very flat, water is puddling on it, and all of a sudden cars are driving slower, because when they were driving faster they were starting to lose control. There are a lot of very nuanced visual cues in the scene. So I do think, in the world model concept, there's a good chance the model would learn that you should just drive slower when these visual cues exist. And that's the beauty of these kinds of models: they learn these non-obvious things.
B
It doesn't need to know about hydroplaning to know that it needs to drive slower, I guess.
D
Yeah.
C
Let me ask about deploying models. I presume you use a lot of these world models for training data and simulation. But what about deploying onto the systems in production? Presumably you have GPUs on device. I keep saying "on device"; what's the right term for this?
D
On machine or embedded.
A
Yeah, yeah.
C
What is the embedded world like? Because for people who are not used to that world, this is very alien.
D
Yeah. So actually we call it onboard and offboard: onboard software and offboard software. The great thing about offboard software is you don't have to care about time, and you can run really large models. You can say, well, this model, I don't care if it takes one second to give me a result or 10 seconds, because we have time, and the models can be really big and run in a data center on a huge GPU, and you can obviously have distributed compute, et cetera. Onboard, you don't have any of those benefits. It's: I have this many milliseconds in which I need an answer from this model. So a lot more of the energy goes into, think of it like distillation, and it's truly about efficiency. Literally every fraction of a millisecond counts, and you can't have a situation where the model takes too long, because then the vehicle can't actually function. You can still use a lot of the same techniques, and the models themselves you can think of as a derivative of the larger models you run offboard. You're trying to get a model that still performs really well, but is a small enough version that you can run it on this embedded system where you care about latency and power.
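A minimal sketch of that offboard-to-onboard idea, assuming standard knowledge distillation: a large teacher labels logged data with soft targets, and a much smaller student learns to match them so it can fit the onboard latency budget. Architectures and sizes are placeholders:

```python
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(  # big model: runs offboard, no latency budget
    torch.nn.Linear(256, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 16))
student = torch.nn.Sequential(  # small model: must hit onboard deadlines
    torch.nn.Linear(256, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

features = torch.randn(512, 256)  # stand-in for logged sensor features
with torch.no_grad():
    soft_targets = F.softmax(teacher(features) / 2.0, dim=-1)  # temperature 2

for _ in range(100):  # train the student to match the teacher's soft targets
    log_probs = F.log_softmax(student(features) / 2.0, dim=-1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
print(f"distillation KL: {loss.item():.4f}")
```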
A
Yeah. And the broader point, which maybe is not obvious but is worth saying, is that in the physical AI world, we're not really constrained right now by the intelligence of the models. It's actually what Peter's talking about: actually deploying them, the hardware they give you. And there's just the reality of safety-critical systems. Those end up being your limiting factors, whereas the limiting factor for a foundation model company is going to be capital, maybe, or researchers. For us, as people who come from that realm, it's very interesting. Those constraints force creativity.
C
And I imagine nobody was deploying, or giving you the hardware for, transformers back in 2018 or whenever, but now they are. What's the evolution been like? Just peel back the curtain a little bit.
D
Yeah. Transformers. First off, I think the paper was originally published in 2017, so...
C
But I'm just saying, I guess, embedded ML systems: usually a lot fewer parameters, a lot less compute, and now orders of magnitude more.
D
Yeah, absolutely. What I was going to say, though, was that in the original paper in 2017, maybe in the last paragraph, somewhere in the paper, they mention: by the way, this technique might be useful for images and videos as well. It took a few years for that impact to really hit, but now we're seeing transformers everywhere.
C
Yeah, transformers.
D
And the compute keeps getting better and better. But you do have this fundamental trade-off: you have power, cost, and performance, and you have to get the right mix of those in an embedded package that can also be shaken and baked in all the conditions these things have to operate in. But I think they're only going to keep getting better, and so we plan our strategy understanding the rate of improvement of these systems.
C
Yeah. So Google just released the Gemma 2B model, an effective 2B model. Is that useful to you guys, or is that too big?
D
You can run that model on an embedded system, definitely. So yes, it's useful in that regard. The bigger question is what you use it for on an embedded system. You actually need to customize it quite a bit to make it useful for something. But yeah, you could run a 2-billion-parameter model, definitely.
C
It's also interesting what percent is a custom ML model that only does that thing versus a generalized LLM, which probably is not that useful actually for your context.
D
You can imagine different use cases, right?
C
The voice stuff.
D
Yes, totally. For the actual autonomy elements, that's 100% in-house; we do every bit of that: the data, the simulation, the model, everything. But when you get into the more generic use cases, like a voice assistant kind of thing, that's where these more generalist models like Gemma can actually be quite useful.
C
Yeah. And then there's also obviously a trade off between what percent must you do on machine versus just call home.
D
Yeah, it's all about latency. It's all about latency.
C
Yeah. Well, I think actually in a lot of contexts, especially in the US, you can just have a connection to the web.
A
Yeah. I think, though, most of our universe is: everything has to be fairly embedded and local, just by nature. Even in the US, there are a lot of gaps in coverage, right? And if you look at the old world of autonomy within mining, which is long before transformers and neural networks and the CNN universe, those were really just hand-coded systems. They were just: this machine is going to run to that place, with
D
these towers. RTK has, like, very accurate GPS, yeah.
A
And so that worked, and it's worked for 20 years. So why would you actually need transformers or more modern end-to-end systems? Mainly because those systems can only really run a path and run it back. That provided a lot of value, but not as much as you get when the machine is actually intelligent: it's seeing, it's perceiving, it's acting in a dynamic world.
C
I looked up RTK, real-time kinematic: 1 to 2 centimeter accuracy.
A
Yeah, yeah, fantastic. And it works even in faraway lands where there's not going to be cell phone coverage.
D
It's widely used on the legacy mining and agriculture autonomy systems today. So like for example, a combine that can be precise within 1 or 2 centimeters as it's driving down the field. They use RTK. It's expensive. Yeah.
A
And it's autonomy, but it's not intelligent in the way that I think all of us in 2026 would be talking about intelligence.
B
In one of your blog posts you mentioned research on large-scale transformers similar to those behind modern generative AI. What are the big differences? Other than, you know, "You're absolutely right, I should steer the car." ...You probably want to remove that.
D
We have a diversified bet strategy internally, and the reason we've done that is because we now operate in a bunch of industries and a bunch of geographies, and each of the approaches honestly has a different risk to it. We're not going to put all of our eggs in a single basket, in a single approach, because that approach may not work out. So that's one of the bets we have. The way these things play out in practice is that each approach has certain benefits in certain scenarios and also certain drawbacks, and the research team then works on the situations where one approach is worse than the others, to ultimately arrive at a really great solution for all of these things.
B
Is there a plan mode for physical autonomy? The idea of planning happening before the action step?
D
So the short answer is yes. Just like you can use Claude Code to plan out some complex coding task and get almost a specification written out, similar approaches can absolutely be applied to physical systems. Imagine you're trying to accomplish some task. The easiest to think about is robotaxi, but I think things get more interesting in, say, the defense context or the mining context, where you actually do have to think many steps in advance. It's not just this one thing; to accomplish the goal there are a hundred steps. And then this concept of the plan mode is very applicable.
B
Yeah, I was going to say, to me, driving feels like a great next-token-prediction thing, because you're on a path and it doesn't really matter what you've done before; you can always turn around. Compared to mining, where it's like, oh man, I took a scoop out of this thing, and now I can't really go there anymore. Is there a huge difference? Do you have a taxonomy of these different types: driving, excavating, flying?
D
So the interesting thing is, I think probably everything in the world can actually be boiled down to a next-token-prediction problem. In any workflow, anything can be thought of as a sequence of steps, or a sequence of trajectories, or whatever you want to call it. In the mining case, you can imagine taking that scoop: okay, that was that set of tokens, and now the model understands that the state space is different, and the next time I do token prediction, it's going to be modified by that. The remarkable thing about these techniques is just how universally applicable they are. I mean, it truly is incredible.
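As a toy illustration of "everything as next-token prediction" for physical actions: discretize actions into tokens and train an autoregressive model on episodes, with the scoop changing the context like any other token. The vocabulary and episodes are invented, and a small LSTM stands in for a transformer:

```python
import torch

VOCAB = ["<pad>", "drive_fwd", "turn_left", "turn_right", "scoop", "dump", "stop"]
stoi = {t: i for i, t in enumerate(VOCAB)}

# Toy "mining" episodes as token sequences; in reality these come from logs.
episodes = [
    ["drive_fwd", "scoop", "turn_right", "drive_fwd", "dump", "stop"],
    ["drive_fwd", "drive_fwd", "scoop", "turn_left", "dump", "stop"],
]
data = torch.tensor([[stoi[t] for t in ep] for ep in episodes])

emb = torch.nn.Embedding(len(VOCAB), 32)
lstm = torch.nn.LSTM(32, 64, batch_first=True)  # stand-in for a transformer
head = torch.nn.Linear(64, len(VOCAB))
params = list(emb.parameters()) + list(lstm.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

for _ in range(200):  # teacher-forced next-token training
    hidden, _ = lstm(emb(data[:, :-1]))
    loss = torch.nn.functional.cross_entropy(
        head(hidden).reshape(-1, len(VOCAB)), data[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# After "drive_fwd, scoop", what does the model think comes next?
ctx = torch.tensor([[stoi["drive_fwd"], stoi["scoop"]]])
hidden, _ = lstm(emb(ctx))
print("next action:", VOCAB[head(hidden)[0, -1].argmax().item()])
```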
B
What else is underrated about what you guys are building on the physical side? I mean, we were talking about it before the episode: there are a lot of humanoid companies that do these great demos, and then I can't buy it. So obviously it can't all be there. In your case, you're in production on real streets with a lot of customers. What are the things people are underestimating? The same way the Waymo demos seven years ago were great, and then it took seven years to actually get them on the street. Can you share maybe about the last 1% that was really hard to get done technically?
D
Yeah. So certainly productionizing stuff is really challenging no matter what. I would split the answer into research and production. First, on the production side, there are just so many problems you find when you actually get the stuff into the real world. The classic problem in humanoids right now is that these systems are actually pretty brittle, and I'm not talking about any one company; just as an industry, these systems are pretty brittle. Interestingly, I saw this thing the other day: I think China is doing a marathon with humanoids. In government, and not China specifically, but in any government, there's a concept called prize policy. There are different ways of influencing an industry to go a certain direction: you can regulate it, you can do mandates, or you can just do these competitions. The US version of this was the DARPA Grand Challenge. It really worked.
C
That's really worked.
D
But I think China is literally doing this marathon because they know that reliability of humanoids is a problem. And so what cooler way to solve that than to have a competition where humanoids need to run 26 miles? Right.
C
Are we there? Can robots run a marathon?
D
I think it's happening any day now. Yeah.
C
So we're there.
A
By the way, in automotive there's a version of this too, which is the 24 Hours of Le Mans. Porsche wins Le Mans and literally puts those products into production. I would actually break it down further. You talk about research and you talk about production; there's actually a step in the middle, which is advanced engineering. And I think a lot of the industry is moving into advanced engineering, where it's not fundamental research, coming up with novel techniques; it really is advanced engineering for production: what are the subcomponents that are going to limit getting into production? Once you're in production, you deal with another set of problems, like the deployment and maintenance of the machines that exist. So I would say, at least in our field, we're mostly in advanced engineering, in, like, the automotive pilots.
D
Honestly, every step is hard, though.
B
That's why you're worth $15 billion.
A
You bleed at every step. Yeah, it's fun. I mean, I don't know, I find it really enjoyable.
D
Yeah. But what's also fun is, we've been doing this now for almost 10 years, and we've just seen so much. So right now we can look at any company in this space, get a demo, and I can write down a list: I know exactly the next 20 problems they're going to hit. And I can guess what they're going to try for each of those, and I can guess which one's actually going to work.
A
Yeah, not because we're particular geniuses.
D
We've seen this stuff.
A
Yeah, we've seen enough of this stuff. We've lived enough of this stuff; it's in our own mental models of the world as leads in the company. We've tried so many things. We're talking about the wins here, but there are plenty of losses among that many people doing that many different things. And so that kind of gets baked into your mental model of the world.
D
But I would say in general we're excited about robotics for sure, and the massive opportunity. Massive opportunity. And what's happening now in the industry is, I think, that none of these concepts are new, right? What's new is that this stuff is actually working now. People have wanted to use neural nets in robotics for a long time, but now we have the datasets, we have the simulation technologies, and stuff is actually starting to really work. And yeah, we're going to be part of that for sure.
B
Do you have requests for startups, or advice against starting certain startups? There are a lot of "Scale for X" or "Applied Intuition for X" new companies. What do you think?
A
A lot of, a lot of Applied Intuitions for other things. I think you hit a certain, what is it, badge, right, when you become like a reference point, or literally companies with similar names. I mean, I think my biggest advice in this, like, commercialization of technology is about constraints. So we talked about hardware constraints, and there are also constraints on the commercial side, which is: we're only going to do things that fit in this box. That is, I think, very good for founders. The reason it's not often focused on is because you have plenty of access to capital and the technical problems are so hard; you're like, I already have a constraint, which is just getting this technical problem solved. And the venture community, generally speaking, tends to be not very technical. For them, if you just say, if we solve this thing, it's going to be a lot of money, that's kind of enough. But you as a founder, and I'm not giving you advice on how to pitch VCs, that'll work for VCs, you still have to run a sustainable business. And that question you asked earlier about what's maybe not obvious about our company: this is truly compounding technology. A lot of the work that we do just compounds. We don't throw it away. It gets better. The operating system work gets better, the dev tooling gets better, the models get better. I think you see it in Waymo as an example. Waymo is a company that was, I would say, very interesting for a long time but not worth $126 billion, right? What happens is that the human brain just doesn't emotionally understand compounding effects. That's going to happen in our universe too. So now, if you're a founder, you're at the beginning of that long walk. If you can put a little constraint on the commercial side, you're more likely to see the other end of that walk. Because if you get to the other end, you will get the big return from compounding technologies; a lot of people just don't make it there. So yeah, to summarize: think a little bit about the equation of how you use money, and where you spend the limited resources and limited engineers that you have. I think sometimes founders falsely take very mature companies' strategies and apply them to their nascent ones. They're like, oh well, Steve Jobs says be completely vertical. Yeah, Apple in 2007 is very different from Apple in 1978 or 1982. Those were different companies; they were literally just taking electronics from other manufacturers and putting them in an enclosure. So just be a bit more nuanced in your commercial approach as it informs your technical approach.
B
Do you feel differently today? I mean, you just joined X, right? You've been building this company, you've been building it in stealth, and now you're like, well, I should probably be talking about what I'm doing. I think a lot of founders are in a similar place, where they want to raise a lot of money to signal they're strong, to spend and to hire. You obviously did that. Do you think it's still possible to have a very narrow approach, like, hey, we're building a compounding thing, without a grand vision right away?
A
It's very difficult to answer very general questions like that. So maybe I reframe it as: is it possible to build a product that has a small, let's say, problem space and hope that the problem space will grow? Maybe that's a different way of asking the same question, but more answerable. I think the answer is always yes. That's the old YC advice: go really deep rather than very broad and shallow. Unfortunately, especially in hard-tech companies, there are just too many problems; you can't do all of them well, you're going to do all of them in a very mediocre way, and so the full product is actually fairly mediocre. So yeah, I'm still in the camp of: find a small problem space. The other question you're asking, which is tangential, is: should you build in stealth and anonymity? Well, yeah, if you're a former YC COO, you can. Because we worked together at Google, we have a long history, which is another way of saying we have big networks. Of our first 400 people, the majority were Googlers; a majority of the company came from this giant company we worked at. And that's just very different. If you're a founder who doesn't have that experience, you have to do these things. So just don't take my version of the world, or whatever other founder's, say Jensen's, version of the world; they are in a different time and space, and most importantly their companies are in a different phase. And if you want to take inspiration from other really young companies, that's also bad, because most of them fail, right? So the only solution you really have is to use first-principles thinking and say: based on my skills, my co-founder's skills, the skills of my early team members, and what I'm hearing from customers, what's the product space that I should build in?
D
Yeah.
A
Does that make sense?
D
Yeah.
B
Yeah. I mean, Sam, Sam Altman, he said he regrets a lot of the advice that he's given at YC. So I'm always curious to ask founders like you who've been...
C
...who've run YC? That's the opposite.
A
Sam was president, I was COO, and we didn't have a CEO, so to say we worked together extremely closely would be an understatement, because the firm was also small; YC wasn't as big as OpenAI is. I directionally agree with that, but I would say it's not so much a YC function as that the market has changed. It is a different world. The AI industry, or the AI companies, I should say more specifically, and how they relate to the other YC companies and the market, it's just so fundamentally different. The amount of money raised is different, the number of investors, the sheer number of seed funds. One of our early investors is Floodgate, and they did some analysis of the late 2000s, like the double-Os, where there was a single-digit number of funds like Floodgate writing sub-$1 million first checks that were not accelerators or incubators. And Ann, who's one of the co-founders there with Mike, said that today, or today as in three or four years ago, they tried to redo this analysis and lost count at something like 350 funds. So we're just in a different environment. The YC advice from 2014 just would not apply in 2026. But Sam is way better at saying these things than me; he says it shorter and more interestingly. If you ask me what the purpose of a car is, I open the owner's manual and say, number one, there's a steering wheel, you know, instead of saying it can change your life.
B
Yeah, it gets you autonomy and freedom.
D
Yeah, exactly.
C
And then for Peter, I was just curious: is there any particular tech or research problem, currently unsolved, that you would call out as very meaningful for you guys if it were solved, such that if anyone is working on it, they should get in touch with you?
D
Yeah, I think generally making models very efficient, because we have to run on actual vehicles. Physical AI is literally, it's taking very large AI and now making it very small and very efficient. So we're constantly at the boundary of these limitations: you can have a great model, but now we need to make it faster and smaller. So that, in general, as a field. And then I would say also folks that are really passionate about evaluating this technology, as in model evals. It's a hugely difficult topic, especially in safety-critical systems. We have a really great engineering team and researchers that work on this now, but it's a big area of investment. So yeah, folks that are passionate about performance, I'd say model performance both in terms of capability and literally latency, and then evaluation of models.
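As a concrete example of the shrink-it-down problem Peter describes, here is a minimal sketch of one standard step: post-training dynamic quantization of a network's Linear layers to int8 with PyTorch. It is a generic illustration of the technique, not Applied Intuition's deployment pipeline, and the layer sizes are invented for the example.

```python
import torch
import torch.nn as nn

# Stand-in for some on-vehicle model head; sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 128),
)

# Convert Linear weights to int8; activations are quantized dynamically at
# runtime, which shrinks the model and speeds up CPU matmuls.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the float model
```

In practice this is one of many levers (pruning, distillation, compilation to the target accelerator), and the evaluation problem Peter mentions is checking that the smaller model still behaves safely.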
B
Awesome. Any specific engineering roles that you're hiring for, and especially, who are the people that succeed at your company as engineers? I think that's always the most important thing.
A
Yeah, I mean, go look at our careers page; there are literally hundreds of roles, across all the topics we talked about, from dev tooling and physical AI to operating systems to autonomy and AI within physical machines. The types of engineers, that's a great question, and actually more interesting than the roles, because we're a larger company where we need everything. We're a Sunnyvale company, and I think just from this conversation and kind of our backgrounds, you can predict a little bit of what that means. We tend to hire fairly serious people who understand low-level systems, not just a superficial understanding of technology. Engineers' engineers, almost. We definitely hire folks who have diverse skill sets, and we hire tons of specialists as well, to be very, very clear. But they've seen production, and that really informs how you build technology.
D
Yeah, people that really appreciate the hardware-software boundary. Definitely. In the vibe-coding era, there's a crop of engineers that don't think about hardware at all, and we don't have that luxury. So, people that are a little more passionate about going a little bit deeper.
C
Yeah.
A
If you were to contrast us with, you know, an AI lab or something, that's where you're going to get the biggest contrast, which is that we're just dealing with reality. And then all of the classic stuff: you want folks who work hard, who love the technology, and who like a podcast like
D
this,
A
Yeah, if you made it to this part of the podcast, you're probably qualified. You're interested in this.
C
And Peter said that he likes the podcast as well.
D
Yeah,
C
Specifically on the hardware-software boundary part: it's something I think about with our education system in the States, but maybe also just generally. I feel like there is a retreat away from that classical computer science or EE or computer engineering education. And is there a point where you just do it yourself? Because at this point you guys are the world experts on this, and actually you shouldn't wait for some college system to spit them out for you.
D
You mean that in terms of education and upskilling kind of thing?
B
Yeah, just do it yourself, like General Motors already did.
D
Yeah. Smart. GMI.
C
Literally their own university.
A
Yeah, that's where I went for undergrad. I went to the General Motors Institute.
C
That did not come up. I saw HBS.
D
Yeah, yeah, everyone sees HBS.
A
The Harvard brand value is high.
C
What's General Motors Institute like?
A
It started 100 years ago to answer this exact question, literally the question you just asked, which is: not enough engineers in Michigan. You're talking about the early days of the modern corporation, General Motors. There's a great book, Alfred P. Sloan's My Years with General Motors, that is highly recommended, which basically describes what became the modern corporation. But part of that story is that they were basically bottlenecked on engineers, so they started a school. And actually even Google, as recently as probably 10 years ago, was thinking of starting a university; there were discussions on it. So yeah, we definitely upskill folks as well. The amount of training we do internally is actually surprising. But it's a luxury you have when you're at our size, not when you're, like, 25 engineers.
D
No, then you've just got to survive.
A
So again, take advice that's relevant for your company, rather than immediately start
D
trying to take high schoolers and make engineers out of them.
C
I'd love to go to a class that you taught, because it sounds like you could teach a lot.
A
Yeah.
D
I think honestly one of the most amazing use cases of these large models now is education. Right. I've taken an engineer, a very good engineer with an aerospace engineering background, and in a relatively short time span he's doing very confident front-end work and very confident back-end work with the help of these models. And not only can you do implementation with them, you can also just learn. Right. You ask questions and you don't feel embarrassed, because the model's not going to call you out on anything.
A
Yeah. There's a thing you probably need even more than an engineering degree, though. Engineering degrees are very important; I don't know if there's a way to shortcut fluid dynamics or heat transfer.
D
The fundamental stuff.
A
The fundamental stuff, at least on the mechanical side. But beyond the degree, you need an engineering mindset, and not everybody actually has that. Some people are emotionally drawn towards the arts or something else, and it's completely fine, there's no judgment there. But the engineering mindset, put in a more usable way, is wanting to understand a lower level, and then the level below that, and the one below that: like, how do photons move?
D
That is extreme curiosity.
A
Extreme curiosity. Like what is light? What is a radio wave? Like these really fundamental questions.
D
Right. And if you get curious enough about software, you ultimately end up in hardware.
C
Right. And so that's the Alan Kay quote.
A
Yeah, yeah, exactly.
C
So I'm trying to make analogies for all these things. Like, you know, you're kind of a blend between a new General Motors and a Tesla autonomy division for everyone else.
A
I mean, you know, we're in all these other fields. I think if you talk to our trucking customers, they wouldn't even perceive it that way; at most it's some sense of, oh, you guys did some automotive stuff, and you're really helping us.
C
So automotive is not trucking?
A
No, no, no, it's separate. There are different problems, different masses. You have the general categories of on-road and off-road; I think that's what you're thinking of. But within on-road there are all these subclasses of machines. Especially when you look at, say, a delivery robot that doesn't have a human in it. That's actually very different, because now you're not concerned with the actual feeling you have when you're riding in a self-driving system. With a passenger you have to account for that; without one, you can just brake hard, you don't care about jerk, and all of these metrics become completely different.
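For readers unfamiliar with the jerk metric mentioned here: jerk is the time derivative of acceleration, and a passenger-carrying planner typically bounds it for comfort, while an uncrewed delivery robot can brake much harder. A minimal sketch follows; the 0.1 s timestep and the 2.0 m/s^3 comfort bound are illustrative assumptions, not regulatory numbers.

```python
import numpy as np

dt = 0.1                                           # planner timestep in seconds (assumed)
accel = np.array([0.0, 0.5, 1.2, 2.5, 1.0, -3.0])  # m/s^2 along a candidate trajectory

jerk = np.diff(accel) / dt                         # finite-difference jerk, m/s^3
comfortable = np.all(np.abs(jerk) <= 2.0)          # bound only matters with a human aboard
print(jerk, comfortable)
```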
D
The way to think about it, honestly, is that any system a human would need special training to operate, you can think of a little bit differently. So the license to operate a truck is different from the license to operate a car, which is different from the license to fly a plane, which is different from... you get it, right?
B
Awesome guys, thank you for taking the time.
A
Yeah, thanks for having me.
D
Thanks for having us. Thank you.
Guests: Qasar Younis (CEO) & Peter Ludwig (CTO), Founders of Applied Intuition
Host: Latent.Space
Date: April 27, 2026
In this episode, Latent Space dives deep into the world of "physical AI"—AI that powers and controls real-world moving machines—with the founders of Applied Intuition, Qasar Younis and Peter Ludwig. The conversation covers the company's evolution, the technical challenges and solutions in deploying AI in safety-critical, physical environments, how simulation, operating systems, and foundational models intersect, and the unique requirements and culture of engineering for the physical world.
"We want to deploy software on physical machines… we used to joke when we started the company, 'We’re not the guys to build Instagram.'"
— Qasar (04:27)
"We’re a Sunnyvale company… we tend to hire fairly serious people who understand low-level systems..."
— Qasar (65:50)
"Our first core value is on-run safety. We can't underline enough that verifying and validating the systems we deploy are safe to us is probably as important as a regulator or customer saying so."
— Qasar (32:01)
"Deploying intelligence onto a lot of things that don't have screens. You know, they're physical machines… most of the value we provide is putting intelligence that is in safety critical environments."
— Qasar (01:37)
"The technology has changed so much. Our own technology stack has completely changed, I would say roughly every two years."
— Peter (05:41)
"Simulation's a very deep topic. We have a whole suite of products in that and we could talk for many hours about that specifically..."
— Peter (08:19)
"When we talk about operating systems for AI in vehicles, there’s many, many layers that go deep into the safety critical realm... you have to have a fail safe to all of that."
— Peter (14:47)
"There is a meniscus line where you cross where still doing real world testing is better… At some point, that doesn't make sense."
— Qasar (38:45)
"Physical AI is literally, it’s taking very large AI and now making it very small and very efficient."
— Peter (64:40)
"If you get curious enough about software, you ultimately end up in hardware."
— Peter (70:49)
This discussion with Applied Intuition’s founders offers a candid, highly technical, and practical look at what it takes to build, validate, and run AI safely in safety-critical, real-world environments. The episode is a must-listen for anyone interested in autonomy, robotics, embedded systems, or the future of applied AI. The team’s hybrid philosophy (great research + pragmatic engineering), evolving talent models, and focus on compounding advantages serve as a guidepost for AI engineers and founders alike.
For more resources and show notes, visit latent.space.