Transcript
A (0:05)
Thank you so much.
B (0:07)
Hey everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host swyx, founder of Smol AI.
C (0:16)
Hey. And today we have a returning guest as well as a new friend. Welcome, Michelle and Josh.
A (0:21)
Hey there.
C (0:23)
Both of you work on the... I guess, Michelle, I think I used to introduce you as a manager on the API team. It seems like you've changed your role since we last talked on the podcast.
A (0:33)
Yeah, now I lead a team on the research side, specifically in post training.
C (0:38)
Yeah. And Josh, you are also on post training.
D (0:40)
Yep. I'm a researcher on Michelle's team.
C (0:43)
Yeah. And I just found an interesting commonality you guys have. You're also both from Waterloo, continuing the tradition of extremely cracked engineers.
A (0:50)
Oh yeah, we talked about that last time. That's right.
C (0:54)
Okay, so we're gathering to talk about GPT-4.1. You launched it. I mean, we got a little preview and it was a little bit rumored, right? It was pre-released, I guess, on OpenRouter as Quasar Alpha, and then there was also an Optimus version. And I think people are trying to figure out why we're going back from 4.5 to 4.1. There's a whole bunch of other things, but what are the headline facts you guys want to emphasize about 4.1?
A (1:20)
Yeah, I'll just say we released three new models today: GPT-4.1, GPT-4.1 Mini, and GPT-4.1 Nano. The real focus of these was just making models that are great for developers. So we improved instruction following and coding, and shipped our first 1 million context models.
C (1:38)
Josh, anything to add? I don't know if there's anything else people should know that's sort of in the fine print.
D (1:45)
No, I think the only thing I would touch on again is that there's actually a new model in the lineup, Nano, which is even faster for developers who are making low-latency applications.
