Transcript
A (0:00)
Welcome to the podcast. I'm your host, Jayden Schafer. Today on the show, Anthropic has just announced Opus 4.7, their latest model. There's a lot of craziness around this one. It's officially live and I've been testing it all day. After that, there's a startup called Antioch that just raised $8.5 million to do for physical AI what Cursor did for software development. It's basically a simulation-to-real platform for robotics, and I think the framing they're using is super smart. So I want to get into all of that. And then, and this one I think is also really interesting, there's a company called Upscale AI that's reportedly in talks to raise a round at a $2 billion valuation. It's only seven months old and there's no product yet. What that tells us about AI infrastructure investment right now is really important to understand. And then of course, Google Cloud and Avid just announced a multi-year partnership to bring Gemini AI directly into the tools that professional media and video editors are using every day, so we're going to get into what that actually means. Google also dropped its annual ads safety report, in which they showed they blocked 8.3 billion ads last year using AI-powered enforcement, and the way they did it, I think, is a big shift in how this kind of automated enforcement works. We'll go over the numbers. If you're trying to keep up with everything happening in AI right now, one of the most practical things you can do is get access to more than one model. The problem is, if you're subscribing to everything separately, ChatGPT, Claude, Gemini, whatever it is, you're looking at $20 for basically each of them, and that stacks up very fast. AI Box is my own software platform, and it's how I solve this. It's one subscription at $8.99 a month, and you get access to over 80 AI models in one place.
So Gemini, Claude, Grok, ChatGPT, ElevenLabs for audio, all of the top image models, Veo 3 for video, Sora for video, all of the top models in one place. On top of all of that, we built an automation builder. If you describe a tool or a workflow that you want in plain English, you don't have to be a developer, I am not a developer, it will build it all for you. This has saved me so much time, and for a lot of people running any kind of team, or even a solo operation, this is one of the best ways to automate the tasks you do and give yourself more time. There's a link in the description to AIBox.ai. I hope you love it as much as I do. All right, the first thing I want to talk about is this Avid and Google Cloud story. They announced today that they're entering a multi-year strategic partnership. They're going to embed Google's Gemini models and Vertex AI into Avid's core tools. If you're not deep in the media world, Avid is basically the backbone of how a lot of professional video and audio productions function today. Film studios, news organizations, post-production houses, there's a huge chunk of the industry that basically runs on Avid. So this isn't a niche software company. When Avid does something, the professional media world basically comes along with it. The vision here is AI-assisted production workflows: intelligent content search across massive media libraries, automated tagging, transcription, scene analysis. A lot of the time-consuming work that currently takes human editors hours to do manually. What's interesting to me with this story is less the specific technology and more who the customers are, because they're basically targeting professional creatives.
So editors, producers, cinematographers, all of these people have historically been, I would say, pretty resistant to automation. And maybe with good reason, right? Craft is part of their job. So watching how this actually lands with the people using the tool is going to be really interesting. A Google Cloud partnership is easy to announce, but actually getting seasoned film editors to trust AI, when they've been doing this their entire lives without AI, is going to be a different conversation. I'm going to be watching what the feedback looks like after they make a big announcement at NAB soon. Okay, the next thing I want to talk about is Antioch. This is a startup that raised $8.5 million today. It's their seed round, at a $60 million valuation, led by Category Ventures, with MaC Venture Capital and Box Group participating. Basically, they're saying they're going to do for robotics what Cursor did for software development. The problem they're solving is called the sim-to-real gap, which is a major technical bottleneck in physical AI right now. The core problem is that if you're building a robot, or basically any autonomous system, you want to train and test it in a virtual environment before you put it into the physical world, right? Say you have a humanoid robot. You don't want to throw it into someone's house to clean, to train it, when it's around people and things it could break or harm. You want to be able to test that robot in a simulated environment first. And a humanoid robot in someone's home sounds like a crazy example, but I actually don't think it's too far away.
But even more importantly, you can imagine an Amazon warehouse with these robots around millions and millions of dollars of inventory. You can't have them running around smashing stuff. So these things have to work quite well, and it's really hard to simulate the environment. The problem is that virtual environments are often not very realistic, right? You can imagine when Mark Zuckerberg announced the metaverse and his avatar didn't have legs. Those are the kinds of virtual environments we're working with for robotics training. Performance inside a simulation can completely fall apart when it hits actual real-world conditions: lighting variations, sensor noise, unexpected surfaces. The real world is way messier than any simulation. I was actually just watching a YouTube video with a pretty good example of this. Mark Rober built a robot goalie for a soccer net, and Cristiano Ronaldo was going to try to beat it. They had it all working in California, then flew out to, I think Portugal, to shoot the video with him. And while they were there, all of a sudden the robot wasn't working, and they couldn't figure out why. They were troubleshooting all day long. Finally they realized it was, I think, the lawnmower at the field they were at. The vibration of the mower was messing up the sensors and basically causing the whole thing to break. Things like this actually happen to these robots in the real world.
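One common way practitioners try to narrow this sim-to-real gap is domain randomization: deliberately perturbing simulated sensor readings (noise, gain changes, vibration-like offsets) so a policy trained in simulation learns to tolerate real-world messiness. Here's a minimal hypothetical sketch of that idea; the function name and the specific noise parameters are my own illustration, not Antioch's actual approach:

```python
import random

def randomize_reading(true_value, rng,
                      noise_std=0.02,        # Gaussian sensor noise (assumed scale)
                      gain_range=(0.8, 1.2), # lighting/gain variation (assumed)
                      vibration_amp=0.05):   # lawnmower-style offset (assumed)
    """Perturb one simulated sensor reading so training data looks
    more like the messy real world (domain randomization sketch)."""
    gain = rng.uniform(*gain_range)                 # scene brightness drift
    vibration = vibration_amp * rng.uniform(-1, 1)  # mechanical interference
    noise = rng.gauss(0.0, noise_std)               # electronic sensor noise
    return true_value * gain + vibration + noise

# Five "identical" ground-truth readings come out slightly different,
# which is exactly what a robot will see in the field.
rng = random.Random(0)
noisy = [randomize_reading(1.0, rng) for _ in range(5)]
```

The point is that a perception model trained only on the clean value 1.0 can break the first time a mower shakes the sensor, while one trained on the randomized stream has already seen that kind of variation.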
What Antioch is trying to do is let robot builders spin up digital instances of their hardware and connect them to simulated sensors that actually replicate the data the robot is going to experience in reality. Right now they're focusing on sensor and perception systems, which is where I think a lot of the complexity lives for autonomous cars and trucks. They're also going to look at drones, agricultural machinery, and a lot of construction equipment. One thing I think they're doing really well is that in all of their branding, they compare themselves to Cursor. The positioning is pretty good, because Cursor, as we all know, basically changed the software development experience for AI by tightening the feedback loop. If I'm being honest, I think Claude Code is probably pushing Cursor's usage down, but Cursor has a lot of corporate partnerships and deals built out. So: great positioning, a great brand, and they've raised $8.5 million, which isn't huge in today's environment. But this sim-to-real gap is a real-world problem, and if they can actually crack it, they're going to have a lot of leverage. The next thing is Upscale AI. They have a $2 billion valuation, they have no product, and they've been around for seven months. They're reportedly in talks to raise somewhere between $180 and $200 million. What they're building and focusing on is AI chips, and probably the key part of this is the infrastructure that lets those chips communicate with each other efficiently at scale. Basically, the thesis of this company is that the real bottleneck in AI compute isn't just building faster chips; it's building a better system for how chips talk to each other.
Now, we've seen companies do really well with this and crush it early on. Elon Musk's xAI was one of the first to get 200,000 GPUs from Nvidia all linked together, and by getting that system linked up and talking to itself, they were able to train a model much faster than a lot of competitors. Eventually, a lot of people figured out how to build those kinds of hyper-massive GPU clusters, but that was xAI's competitive advantage in the early days, and I think we'll see some version of that strategy playing out in a lot of other areas. Now, what Upscale AI is working on, chips and networking, isn't an apples-to-apples comparison, but the concept is something we'll see in the industry more and more. Basically, Upscale is betting on open standards; they're arguing that the industry is eventually going to move away from proprietary stacks. The fact that they're seven months old with no product and talking about a $2 billion valuation is absolutely incredible. I also think it's logical, given the environment. Conviction in the AI infrastructure layer is really big right now. Investors believe that whoever controls the next generation of AI compute infrastructure is going to capture just so much value. So a lot of VCs and investors are placing bets before there's a product, because they think the window to get in at a lower valuation is small and closing fast. And just to validate that idea:
We've also got reports today that Anthropic is turning down investors who are begging to invest at an $800 billion valuation, which, by the way, is double their last valuation. So honestly, some of these companies are on an absolute tear right now, and people can see that a lot of the infrastructure plays are going to be in a similar place. Some of the companies growing super fast are just saying no to investors and not raising money right now, because maybe they think they can do better in the future. There's a whole lot of craziness going on, but because of that, people are ready to dump money into this company. Okay, the big deep-dive story I wanted to talk about today is Anthropic's release of Claude Opus 4.7, a model I've been testing today. The thing I want to clear up at the beginning is that this is not the big model they've been doom-talking about for a while, the Mythos model that's supposed to be really good at cybersecurity. This is a completely different model; they're still not releasing that one. This model in particular is really good at agentic coding, reasoning, scaling tools, and computer use, where it takes over control of your whole computer and accomplishes tasks for you, which is where I think they're seeing so much growth right now with Claude Cowork. And it's available on basically all of Anthropic's platforms, whether you're on their website, in the app, or using the API. You get it everywhere. What I will say is that Opus 4.7 sits below the Mythos model, like I mentioned, and I think they're doing that intentionally. Anthropic says they actually worked during training to, quote, differentially reduce Opus 4.7's cyber capabilities.
So basically, they specifically tried to make it less dangerous on security. From what I understand, they gave the Mythos model out to companies like Microsoft, Amazon, and Google and said, go fix all of your code and all of your security vulnerabilities. While that's happening, they're releasing a model that's better than 4.6 but with intentionally nerfed security capabilities. Now, pros and cons. The pro: if it were really good at finding security vulnerabilities, bad actors could use that, and this way they can't. The con: it's just less smart in that particular area; I'd guess they pulled back that training set. But also, if you wanted to fix the security on your own software stack and tools, companies that aren't included in that special Microsoft-Amazon-Google top-security club don't get access to something that could also help protect them. So, pros and cons. They're also building in safeguards that are going to automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. If you're a security professional who actually needs those capabilities for legitimate work, there's a formal verification program you can apply through, which is fascinating, right? They're becoming the police of how good your model can be. I might just have to get into cybersecurity and become an accredited professional so I can get access to the coolest models. There are a couple of things worth pulling apart here. The first is the model itself: Opus 4.7 is better at agentic coding and computer use, which is amazing for everyone using Claude Code and Claude Cowork, so personally I'm super thrilled. There are also a lot of capabilities they've added that actually matter for getting work done. It's apparently really good at taking multi-step actions inside of software.
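To picture what "multi-step actions inside of software" means, here's a hypothetical sketch of the basic agentic loop: plan the task, then execute each step with a tool, feeding results forward. Everything here is a stand-in for illustration; `call_model` and the tool registry are not Anthropic's actual API:

```python
def call_model(prompt):
    # Stand-in for an LLM call; returns a canned plan for illustration.
    return ["open_spreadsheet", "sum_column", "write_report"]

# Each "tool" takes the running state and returns an updated state,
# the way an agent's software actions build on each other.
TOOLS = {
    "open_spreadsheet": lambda state: {**state, "rows": [1, 2, 3]},
    "sum_column":       lambda state: {**state, "total": sum(state["rows"])},
    "write_report":     lambda state: {**state, "report": f"Total: {state['total']}"},
}

def run_agent(task):
    plan = call_model(f"Plan steps for: {task}")  # 1. break the task into steps
    state = {}
    for step in plan:                             # 2. execute step by step
        state = TOOLS[step](state)                # 3. each result feeds the next
    return state

result = run_agent("summarize the sales sheet")
```

The interesting part is step 3: each tool's output becomes context for the next action, which is the loop that separates an agent from a model that just answers one question.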
So it's not just answering questions; it's making a plan for how to accomplish your task, breaking it down, and doing it using the tools on your computer. Even more important, I think, is where Anthropic is as a company. They have this incredible capability, they're on an absolute tear, but they have this model, Mythos, that apparently is too dangerous to release. So they're shipping incremental updates, nerfing certain capabilities, and holding the big one back. For so long we've been pushing every model to its absolute limits to see how good it is, and right now they're pulling back on this model so it's not too powerful. I think that's a pretty big signal about where we are with this technology right now. Super interesting. All right everyone, that is the show for today. Thank you so much for tuning in. If you got something out of the episode, a quick review on Apple Podcasts or some stars over on Spotify on the About tab genuinely helps the show a ton. It helps the show reach more people in the algorithm, so I appreciate it more than you know. If you want access to over 80 AI models in one place and to start building automations without touching any code, head over to AIBox.ai. There's a link in the description. All right, I'll catch you in the next episode.
