Transcript
A (0:00)
Fiscally responsible financial geniuses, monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds, because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help when you need it, so your dollar goes a long way. Visit progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states or situations.
B (0:33)
My name is Mackenzie and I started a GoFundMe for the adoptive mother of a nonverbal autistic child. The mother had lost her job because she wasn't able to find adequate care for this autistic child. So she really needed some help with living expenses, paying some back bills. So I launched a GoFundMe to help support them during this crisis and we raised about $10,000 within just a couple of months. I think that the surprising thing was by telling a clear story and just like really being very clear about what we needed, we had some really generous donations from people who were really moved by the situation that this family was struggling with.
C (1:20)
GoFundMe is the world's number one fundraising platform, trusted by over 200 million people. Start your GoFundMe today at gofundme.com. That's gofundme.com. This podcast is supported by GoFundMe.
D (1:33)
Welcome to the podcast. I'm your host, Jaden Schaefer. Today on the podcast we have a number of interesting stories. Meta has just launched Manus for desktop amid this AI-agents-on-your-computer craze. Anthropic has now officially flipped a switch and is beating OpenAI in enterprise spending. A new startup called Memories AI is building a visual memory layer for robotics. OpenAI is expanding their government footprint with a brand new AWS contract. We're going to dive into all of those today. I think we're watching three different layers collide: infrastructure, enterprise spend, and agents. I'm also super excited to announce that AI Box, my own startup, has officially added video to our platform. You can now create AI tools with video, and you can chat with over eight of the top AI video generation models. We have ByteDance's Seedance, we have Google's Veo models, we have OpenAI's Sora models, and we have PixVerse models on there. This is super exciting for us. It's been a big push, and we hope to see what incredible tools you guys build with video on the platform. If you don't already have a subscription, it's $8.99 a month and you get access to over 70 of the top AI models all in one platform for less than 20 bucks a month. I hope this saves you a ton of money and you get access to all of these interesting new models to test out, try, and talk with. You can try out the platform with the link in the description or by typing in aibox.ai. All right, let's get into the episode today. The first thing I wanted to cover is a huge story from Meta's Manus. This is a company they recently acquired, and what's interesting is Manus was sort of going viral. It was a Chinese firm that kind of had to pull itself out of China before this acquisition, but they've just launched a desktop app that is basically bringing their AI agent directly onto your computer.
I think they've seen all of the hype around OpenClaw and realized that, you know, they probably had a lot of these AI capabilities already, but bringing it beyond just a website and into your computer, I think they've seen the value of that. It's a really big shift. These agents were living in the cloud before; now they're going to be able to access your files, run apps, organize data, even build software locally. And I think this is a lot closer to how people actually work, which is why OpenClaw went so viral. I think it's basically the beginning of AI agents becoming our operating system layer. I think we're going to see a huge shift here. This isn't just answering our questions; they're actually going to be doing the work inside of our machines. The trade-off is obvious: more power means more risk, security-wise. You can now give AI access to your local environment, which is going to raise a lot of privacy concerns as well. And for some people, right, Meta doesn't have the greatest track record on privacy and on your data, so they might be a little concerned about giving it access to all of their computer files. But at the end of the day, these are really powerful tools, so I'm interested to see what sort of uptake Manus has. This is already a product that's been doing quite well, and I think this makes it a lot more useful. There's also a new startup called NIV AI. They just raised funding to solve a problem that I think a lot of people are not talking about enough, which is power. AI data centers right now are using tons of electricity, and GPU workloads spike really unpredictably, which forces operators to throttle usage or overpay for backup capacity. So what NIV is doing is building a system to monitor and optimize power usage in real time.
Essentially, they're acting as a copilot for data center energy. I think the reason this matters is because AI isn't just a software problem right now, it's an energy problem. Companies that figure out how to squeeze more output from the same hardware and power constraints are going to have a massive advantage. Especially when you look at the state of the world today, with everything happening in Iran and the energy shock that we've seen over the last few weeks, I think energy is more important now than ever. A lot of people are talking about the fact that AI companies are going to be very severely negatively impacted if these energy shocks and high oil prices continue, because a lot of this was powering data centers. AI is basically a direct pipeline from energy to what we're all using; all of this stuff has to be run, and it takes insane amounts of energy. A lot of the data centers and AI training facilities that we're building, they're told they should basically be building power plants attached to them because they use so much energy, and a lot of people are seeing their local energy bills increase due to these kinds of data center projects. So I think this is a really fantastic startup and I'm excited to follow along with them. There's a new startup I've been looking into called Memories AI. They're building what might become kind of a foundational layer for physical AI, and that is visual memory. So instead of just remembering text like ChatGPT does, they're basically building a system that's going to help AI remember what it actually sees. That means wearable devices, robotics, and real-world AI systems that can recall visual experiences over time.
Right, because right now, if you have a conversation with ChatGPT and a month later I say, hey, for this project, can you help me write some sort of new document or file, it can go and look at my history and my context and remember everything about that specific project from two months ago. Now, it's a completely different situation when we have robots running around in the real world with cameras on them, learning and figuring out how to do things in warehouses and eventually in all of our homes, with something like the Optimus robots or the Figure robot. These things are going to cost like $20,000 or $30,000, and they'll be in our homes to do things. And there's a whole other conversation about whether people will want that or trust that, and I think inevitably they will once these things have improved, just like self-driving cars. But right now, most of these AI systems are living in a digital world. If AI is going to operate in the physical world, once we start moving from AI on our phone that we talk to, to AI in a robot that's walking around in our home or our warehouse, it needs to have memory the same way humans do. So I think this is very early, but it points to where things are going in the future: AI that doesn't just respond, but actually remembers, learns, and builds context from real-world experiences just like a human would. And something I would actually expect to see from an Optimus robot or a Figure robot is this kind of memory built in, so it's learning and understanding. And let's say your robot breaks. I would expect that you can transfer memory from one robot model to the next when you upgrade. So let's say you've had a robot in your family for, you know, 10 years that's been helping out and it understands how to do everything inside of your home.
I think you'll be able to transfer those memories to the next iteration of the robot, which is really fascinating, and I think a really strong moat against switching between robot companies in the future. Now, I know it sounds crazy, but I think these are the problems that people are starting to solve now, so I'm excited to see what Memories AI does with this. Okay. Anthropic is now capturing over 70% of new enterprise AI spend. This is according to Ramp data, and I actually love these types of reports; Mercury or Ramp or even a lot of different banks will put them out. Basically, they have access to what companies are actually spending because they have insights into their financials, so this is really solid data. Coming out of Ramp just a couple months ago, it was a really tight race between OpenAI and Anthropic, but now Anthropic is pulling way ahead, and they're moving ahead fast. If you look at the charts, it's phenomenal. Businesses are basically paying all this money for their AI coding tools, I think, primarily, but a lot of people are also just paying for Claude for regular chat tools. At the same time, OpenAI is reportedly rethinking their strategy, shifting more focus towards enterprise after heavily investing in consumer products. I think right now the AI race isn't just about who has the coolest demos; it's about who actually makes money. Enterprise adoption is the real scoreboard. And right now everyone's like, wow, OpenAI has this massive user base, which is true: almost a billion weekly active users. Last I checked, it was 900 million weekly active users.
I think Sam Altman, I mean, I know he was just throwing some shade, but he said something recently like there were more free ChatGPT users in Texas than all of Anthropic's users in the US combined, or something like that, which is sort of crazy. But if you look at the revenue numbers, they're not that far apart. OpenAI said that they're on pace to generate about $25 billion in revenue this year, and Anthropic is on pace to generate about $19 billion. So these companies are much closer than you'd think when it comes to revenue. Okay, there's a massive story unfolding with OpenAI right now. They're expanding their government footprint through a new deal with AWS. On the surface it just looks like another partnership, but I don't think that's actually the case. OpenAI has signed a deal to distribute AI products in the US government through AWS, and that includes access inside highly secure environments like GovCloud and even classified regions handling really sensitive workloads. Now, we've seen that Anthropic and the federal government, particularly the Pentagon, have had a huge falling out, and OpenAI kind of stepped in and took a lot of those contracts. So I think what is actually happening is that OpenAI is trying to insert themselves directly into the most important distribution channel for government AI, which is AWS, because AWS already has really deep relationships across all federal agencies. They are already compliant, and their infrastructure is already trusted. So by plugging directly into AWS in this new partnership, OpenAI is not just selling their model; they're becoming part of the default procurement pipeline for government AI. Where it gets really interesting for me is that AWS is already heavily tied to Anthropic. Right? Amazon has invested billions into them.
Many of the early investment rounds of Anthropic were led by Amazon and AWS. Famously, when AWS was trying to compete with ChatGPT in the early days, Amazon put in $4 billion, and since then they've done many multiples more than that. Claude is deeply ingrained into AWS's AI platform because of that. So this was supposed to be Anthropic's home turf, you know, but now OpenAI is stepping directly into that system. I don't think it's just a competition; it's really a platform-level battle happening inside that same infrastructure. Government adoption right now, I think, is acting as a signal to the entire market, because if your model is trusted for classified and sensitive workloads, then that credibility is going to spill over into a lot of enterprise deals specifically. Right? I don't think everyday users are like, oh my gosh, the government uses OpenAI, that should be my default model. But for enterprise, I think that's 100% the signal they see. It's like, if the government, the most classified, quote unquote, organization in the world is using this, it's probably something that's great for enterprise. So I think this really is expanding OpenAI's reach into federal agencies, and it's also strengthening their position with enterprise customers who care about security, compliance, and long-term stability. And let's be honest, I think a lot of people saw the Anthropic deal falling through and felt like the company perhaps was a little bit less stable. Now, is that true or not? I'm not speculating on that; I just think that's what a lot of enterprises were saying. At the same time, OpenAI is keeping control. They decide which models get deployed, they coordinate directly with customers, and they can enforce additional safeguards.
So this isn't them handing over the keys to AWS and saying, you're the distribution layer, let people do whatever they want with the product. I think right now the winners are not just the companies with the best models; it's the companies that control distribution, infrastructure, and trust at scale. And if OpenAI is making this deal with AWS, I think that's kind of the first step forward here. Guys, thank you so much for tuning into the podcast today. If you enjoyed the episode, make sure to leave a rating and review wherever you get your podcasts. And as always, make sure to go check out aibox.ai. We've just launched eight new video models on the platform, so if you want to try those all out for only $8.99 a month, you can check out all of the latest from OpenAI's Sora, Google's Veo, Seedance, and a ton of other incredible models all in one platform. There's a link in the description. I'll catch you guys all in the next episode.
