Transcript
A (0:05)
Nobody wants to do their dishes; nobody wants to do their laundry. People would love to spend more time with their family, with their loved ones. So what we believe is that if the robot is cheap, safe, and capable, everyone will want our robot. And we see a future where we have more than 1 billion of these robots in people's homes within decades.
B (0:29)
Thanks, Memo. Hi listeners, welcome back to No Priors. Today we're here with Tony Zhao and Cheng Chi, co-founders of Sunday and makers of Memo, the first general home robot. We'll talk about AI and robotics, data collection, building a full-stack robotics company, and a world beyond toil. Welcome, Cheng and Tony, thanks for being here.
A (0:53)
Thanks for having us. Yeah.
B (0:55)
Okay, first I want to ask: why are we here? Because classical robotics has not been an area of great optimism over time, or of massive velocity of work. And now people are talking about a foundation model for robotics, or a ChatGPT moment. Can you contextualize the state of AI robotics and why we should be excited?
A (1:15)
I would say we're somewhere in between the GPT moment and the ChatGPT moment. In the context of LLMs, what that means is that we seem to have a recipe that can be scaled, but we haven't scaled it up yet, and certainly not so much that we can get a great consumer product out of it. That's the distinction I'm drawing: GPT, which is a technology, versus ChatGPT, which is a product.
C (1:41)
Yeah. What we're seeing across academia is consensus around the method for manipulation, and everybody's talking about scaling up. We know there are signs of life for the algorithms people are picking, but people don't yet know what will happen with more data, like what happened going from GPT-2 to GPT-3. Still, we see a clear trend, and there's no reason to believe robotics won't follow the trajectory of other AI fields, where scaling up improves performance.
B (2:11)
Maybe take a step back: what was the process for deploying a robot into the world 10 years ago, before this set of generalizable AI algorithms? Why was progress so slow as a field?
C (2:24)
Yeah. So previously, classical robotics took a modular sense-plan-act approach, where a human designed the interfaces between the modules, and those interfaces had to be designed for each specific task and each specific environment. In academia, that means every task is a paper: you design a task, design an environment, design the interfaces, and produce engineered work for that specific task. But once you move on to the next task, you throw away all your code, all your work, and start over. That's also roughly what happened in industry: for each application, people built a very specific software and hardware system around it, but it wasn't really generalizable, so it felt like we were running in loops. We'd build one system, then build the next one, with no synergy between them. As a result, progress has been somewhat slow.
