Transcript
B (0:05)
Welcome to reshaping workflows with Dell Pro Precision and Nvidia, where innovation meets real world impact in high performance computing.
A (0:19)
Hey, welcome. We are live at GTC 2026. We're actually outside in the GTC park, and it's beautiful out here today; it's not super hot at this time of day. And guess what? I've got the man, the myth, the robot legend, the man himself: Mitch from xpt. Mitch, tell everyone what you do and why you love robots so much.
C (0:40)
Hello, everybody. My name is Mitch Chiat. I have a film degree, and over the past eight months I have somehow found myself in the burgeoning humanoid robot industry. With my background in film and audio, I've found that the combination of microphones, cameras, LIDAR, mocap, VR, simulation, and physical computing is all rolled into one physical device: a robot.
I grew up playing guitar, then started building my own guitars, then started making my own music on a computer and building my own instruments, like MIDI controllers, which was a lot of fun. That got me into things like Arduino, where I could start to code and build my own musical hardware and software. Fast forward five or ten years, and I was blessed with two Unitree G1 humanoid robots and a Dell T2 workstation, which has a fantastic Nvidia graphics card and a suite of robotic simulation software. That has allowed me to take motion capture data of a dancer and of somebody doing karate, and I've just released a pack of G1 moves: 60 dance and karate moves trained from motion capture data all the way to a robot policy, which means the robot can actually do those moves.

The beauty of that is you can film somebody, or you can use an actual motion capture system, and that gives you data like you would use in a video game. The hard part is that the robot has to perform it in real life. So, much like a dancer learning new choreography, you take that data and, using some Nvidia tooling, you simulate the robot doing that dance move something like 100,000 times. Cumulatively, that's about 700 years of GPU time in which a robot gets up, tries to do the dance move, and falls over. Every one of those falls helps train a machine learning model called a policy, and that policy ends up becoming something the robot can play back while it's balancing, so it can actually perform the movement.
So in the case of the dance moves we're seeing here at GTC, those are all policies trained from motion capture data that then become a little file you can load onto the robot to have it do that dance move in real life. That's how I've taken my background in pure creative work all the way to playing with the best software at the forefront of humanoid robotics, from my garage. So thanks to Dell and Nvidia for all the gear to make that happen.
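[Editor's note: the try-fail-improve loop Mitch describes (attempt the move in simulation, fall, use the failure to improve the policy, repeat) can be sketched in miniature. The real pipeline uses deep reinforcement learning in Nvidia's simulation tooling; the toy below is a hypothetical stand-in that uses simple random-search hill climbing to make a "policy" track a made-up mocap reference, purely to illustrate the shape of the loop.]

```python
import random

# Made-up mocap reference: one joint angle per frame of the dance move.
REFERENCE = [0.0, 0.5, 1.0, 0.5, 0.0]

def tracking_error(policy, reference):
    """How badly the policy's motion deviates from the mocap reference
    (sum of squared per-frame errors). A big error is a 'fall'."""
    return sum((p - r) ** 2 for p, r in zip(policy, reference))

def train(reference, attempts=5000, noise=0.1, seed=0):
    """Toy stand-in for simulation training: perturb the policy over
    many simulated attempts and keep every change that tracks the
    reference more closely. Real pipelines use deep RL instead."""
    rng = random.Random(seed)
    policy = [0.0] * len(reference)      # starts knowing nothing
    best_err = tracking_error(policy, reference)
    for _ in range(attempts):            # each attempt = one simulated try
        candidate = [p + rng.gauss(0, noise) for p in policy]
        err = tracking_error(candidate, reference)
        if err < best_err:               # this "fall" taught us something
            policy, best_err = candidate, err
    return policy, best_err

if __name__ == "__main__":
    trained_policy, err = train(REFERENCE)
    print(f"final tracking error: {err:.4f}")
```

At the end, `trained_policy` plays the role of the little file Mitch loads onto the robot: a learned mapping from the training process that reproduces the mocap motion, here reduced to a list of joint targets.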
