Google's Threat Intelligence team says they've caught the very first AI-built zero-day exploit out in the wild, and how they found it was some hallucinated CVSS scores. We'll get into all of that, but first we've got to talk about Satya Nadella, who just took the stand in the Elon Musk vs. OpenAI lawsuit, and we got some numbers about OpenAI that we've never seen before, which are fascinating. On top of all that, we also have the Robinhood co-founder who just raised $275 million to put data centers in orbit by 2028, in direct competition with what SpaceX is doing. And OpenAI is dropping $4 billion on a dedicated enterprise unit. There's so much to get into. Let's dive in.

First of all, the European Commission has just opened direct talks with OpenAI and Anthropic about how the AI Act applies to frontier models. Reuters and AI Chat Daily were both talking about this earlier today. Brussels is basically talking to the two US labs, and it's kind of interesting. There are obviously a ton of different AI companies, but if you look at where AI usage actually is today, you could throw Gemini in there, maybe Grok's got a little bit, but it's basically Anthropic and OpenAI, with Gemini kind of behind both of them. So if Brussels wants to talk to someone, they're going to talk to the two biggest firms, because that's where these conversations will have the biggest impact.

The craziest part to me, which is something a lot of anti-regulation people have been saying for a very long time, and it's a very good point, is that regulation is very slow. Just to put into context how old the EU's AI Act is: it was written before GPT-5 and Claude Opus 4.6 existed. I think most of the regulators know this. Henna Virkkunen, the EU's tech chief, has actually been talking about this as well, so I think they're all very aware of it.
Basically, the problem is with a lot of these top-of-the-line frontier models, the rules need to change a little bit, otherwise they're going to be directed at the wrong things. The regulators actually need input from OpenAI and Anthropic when they're writing the spec for how these rules apply, because before, it felt a lot more like a bunch of bureaucrats in a room with some ideas. So the good spin on this is: look, they're actually talking to the industry, which I do think is important for understanding what needs to be regulated. What a lot of people are complaining about, though, people like Casey Newton, is that this is regulatory capture: the two biggest labs end up writing all of the rules, which makes it harder for the smaller labs and the open-source teams to actually follow them. So it'll be interesting to see how that shakes out.

The next thing I want to talk about is Cowboy Space. This is a new company, and it's kind of fun. Baiju Bhatt, the Robinhood co-founder, just closed a $275 million Series B for Cowboy Space at a $2 billion post-money valuation. Their plan is essentially to build and own rockets to put data centers in orbit, exactly what Elon Musk has been talking about doing with SpaceX. The comparison is crazy, though, because SpaceX is obviously very far along. SpaceX has been doing rockets for a very long time and has had years to perfect this. And yes, $275 million sounds like a lot of money, but this is insanely expensive infrastructure to build: building the rockets, building the data centers. Index Ventures led the round, and Breakthrough Energy, Construct, IVP, and SAIC were all in it. The numbers on all of this are pretty crazy.
Each satellite is supposed to mass 20,000 to 25,000 kilograms, deliver 1 megawatt of power, and carry just under 800 onboard GPUs, which is basically a small data center integrated into the rocket's second stage. The first launch they're targeting is the end of 2028. They've already hired former Blue Origin and SpaceX leads, and they're building their own engine, which is basically the hardest thing you can attempt in this business. And to be fair, Blue Origin is kind of the Jeff Bezos counterpart to SpaceX. I think it's kind of cool. The more of these companies that come out, the more you have to grab employees who worked at the major companies, and those employees will move over to your company because you can offer them really great incentives, really great compensation, and beyond the compensation, equity in the business. People who maybe joined SpaceX later on and didn't get a lot of equity can move over to one of these new startups, build something similar, use a lot of what they learned over at SpaceX, but now with a bigger chunk of equity. This is how these markets mature. I'm actually really excited for this, and I think it's a great thing.

The bet being made right now is that there will not be enough commercial launch capacity to scale orbital compute any other way by 2028, so they need these new companies to come out. That's what Bhatt is kind of saying here. Starship, of course, is going to be tied up with Starlink, Blue Origin's New Glenn just failed a payload in April, and ULA is backed up.
So if you think the compute demand curve is going to be real, then a failed payload is basically a big deal. And if you've been watching the new orbital rocket programs for the last five years, and the billions of dollars spent on them, you know that a lot of people are also skeptical of this. Either way, I'm really excited to have more competition in the industry, which is how these industries grow and how we get some incredible products.

Okay, the next thing I want to talk about is OpenAI's $4 billion enterprise unit that they've just put together. It's entirely focused on corporate AI deployment. It's basically its own group, and it's going to handle enterprise sales, integrations, professional services, all of the deployment engineering that big customers actually need. I think this is OpenAI saying, look, ChatGPT Enterprise isn't good enough as just a side thing; we need an actual product organization for it. And what's interesting here is we've already seen some other really big multi-billion-dollar deals they've been doing with PE firms to do something very similar, where the PE firms are essentially the sales org: they have this big org chart they can go deploy all of OpenAI's products into. And I heard a lot of people saying, hey, they need to do this because they don't have a huge sales team, they don't have this big enterprise sales structure, so if they partner with the PE firms, they can skip that and get the products in directly. And yes, I think that distribution is important, but it doesn't negate the fact that they also need to build their own sales team and their own structure for actually deploying this. So that's what I think this is. This is where all of the money lives.
And the thing that's interesting is the enterprise contracts they'll be able to get through this organization. They're much stickier, they last longer, and they're higher margin than the kind of consumer subscriptions that you or I might be paying 20 bucks a month for. Microsoft, Salesforce, and now Google have all been fighting for the same market share as well, so it'll be interesting to see how they compete there. Microsoft, obviously, with Azure, has been fighting for a lot of these same clients. So OpenAI feels like they're getting out of just being the tool that other people sell; now they're going to go directly to the customer and sell it themselves, which will be interesting. Anthropic has already been winning a lot of those deals off their safety positioning and other things for the last couple of years. I think OpenAI is trying to build up their war chest and go heavy into this area. There's also a lot of tension with Microsoft here: Nadella's Copilot stack runs on OpenAI models, and they sell to the exact same Fortune 500 buyers. Sam Altman just stood up a unit that is basically going to compete directly with that channel. Dylan Patel on X called it "the year Microsoft and OpenAI stop pretending the relationship is frictionless." Honestly, I think that's a great tweet.

Okay, let's get to the next story. Satya Nadella just took the witness stand in the Elon Musk versus Sam Altman, or really Elon Musk versus OpenAI, trial. This is week three of the trial, and I think this is something that will decide a lot of whether OpenAI keeps operating as a public benefit corporation or has to go back to being fully nonprofit, like it started.
Elon Musk is seeking up to $150 billion in damages and the removal of Sam Altman and Greg Brockman from the company, so obviously he's got some big beef with them there. Those are some big claims he's gunning for. Before Nadella came on the stand, all of the jurors watched a deposition from Michael Wetter, Microsoft's VP of corporate development, who put public numbers on some things about the Microsoft-OpenAI relationship we actually haven't really seen before. One of the big ones: Microsoft has recognized $9.5 billion of total revenue from OpenAI life-to-date, as of September 2025. That's the total amount of money they've pulled in, which is fantastic if you think about the $10 billion investment Microsoft made a couple of years ago; most of that has already been paid back in revenue, and from then till now there's probably even more. And there was about $13 billion in committed investments and Azure compute. We all saw the big $10 billion headline, but they had done a billion before that, they've done some after, and they've done a bunch of Azure compute. So all in all, they were $13 billion in and have pulled $9.5 billion back, which means Microsoft is almost paid off on that investment in not a crazy amount of time. We've never really had the clean ratio of exactly how much money was in all of that; it works out to roughly $0.73 of revenue on every dollar of commitment. And that's just where they're at today, because they still get royalties and other things out of OpenAI going into the future, so they should make good on that investment. The judge, Yvonne Gonzalez Rogers, told the Elon Musk team this morning that, quote, "we're in the mud" on the legal theory tying the 2025 OpenAI recapitalization to a breach of Musk's original donation.
That's what this lawsuit is all about: he's saying, look, they breached my original donation. I think that's not the language Elon Musk was hoping to hear in week three. It'll be interesting to see the verdict on this case, but in my opinion, so much information was revealed in discovery and in all the court documents that I'm happy the legal battle happened either way, because we got a lot of good insights into what's happening at the number one AI firm in the country.

Okay, if you're already paying for ChatGPT or Claude or Gemini or Grok or ElevenLabs for audio, I would love for you to check out my software startup, AIBox.ai. It's something I have personally built, it's what I recommend to my friends, and it essentially gives you access to over 80 of the top AI models all in one place. You get all of the top text models, you get Sora, you get Veo, you get ElevenLabs, basically everything for creating images, video, audio, even music, all of it. You can compare the prompts side by side to see what they generate. And you get it all in one place for $8.99 a month, with 20% off if you get the annual plan. Either way, you get all of the top models in one place: consolidate your subscriptions, your platforms, your logins, everything. It's amazing. So go check it out. I'll leave a link in the description to AIBox.ai.

Okay, the big story we've got to get into today: the Google Threat Intelligence Group disclosed today that they have intercepted what they believe is the first AI-built zero-day exploit in the wild, with the help of their LLM, Gemini, of course. Everything they do is a plug for how amazing Gemini is. It's funny, because we had Claude Mythos, which found all of these exploits and was so dangerous they had to brief the White House on it, and then OpenAI is like, hey, we've got one of those too, and it's super dangerous.
And now, apparently, Gemini is using the same playbook: look, we're good at finding zero-day exploits as well. This one in particular was interesting. It was a Python script targeting two-factor authentication on an unnamed open-source, web-based system administration tool. So they didn't tell us exactly what it was; it wasn't like, hey, we found this crazy exploit in Mozilla Firefox or something. They didn't tell us what it was, but they told us how they found it. GTIG, the Google Threat Intelligence Group, is describing the attackers as prominent cybercrime threat actors who were preparing what Google called, quote, a "mass exploitation event." So there was a huge data heist about to happen. I mean, they had basically hacked a system administration tool handling two-factor authentication, something everyone uses. People think their websites, their crypto, their bank accounts are all safe because they've got two-factor authentication. Well, if the two-factor authentication platforms get hacked, we're all doomed. And Google just stopped that with their latest model. They said they disrupted the campaign before it happened, which is probably less exciting, right? People think I'm ridiculous for saying this, but it gets less news coverage when it's, hey, something really bad was going to happen and we stopped it; everyone's like, oh yeah, whatever. Meanwhile, if something really bad does happen, like a huge exploit, everyone talks about it, and whoever fixes it is the hero. I think we should definitely give Google some hero flowers for stopping this thing before it ever happened, though. The way they found it is actually kind of funny to me: apparently, AI wrote a lot of the code for the exploit.
And the Google researchers flagged a, quote, "hallucinated CVSS score" inside the script, an invented severity rating, kind of like the wrong citations that chatbots make all the time. It was sitting alongside what they called structured, textbook formatting. So the malware was written the way ChatGPT writes: comments in the right places, variable names that look like they came from a tutorial, and a fake reference to something that doesn't exist. Once you've spent any amount of time reading what ChatGPT actually spits out, you know what that looks like. We all kind of get it when someone sends us a message that's been written by ChatGPT; there are all these aesthetic giveaways, and security researchers see those as well. Charles Carmakal at Mandiant said on X, quote, "it reads like a script kiddie copy-pasted Stack Overflow that hallucinated." Google said it, quote, "does not believe Gemini was used." So Google's like, look, we found this exploit, our AI tools are awesome for finding them, but we don't think it was generated with Gemini. Everyone's kind of pointing at ChatGPT for what actually built this. There was one line in particular I wanted to highlight from Google in all of this. They wrote, quote, "adversaries increasingly target the integrated components that grant AI systems their utility, such as autonomous skills and third party data connections." I think this is exactly what shipping really fast looks like in 2026. When we're talking about agentic skills and browser-using assistants and MCP servers and retrieval connectors, everything that I personally use every single day, every one of those is a new attack surface. I think we have to admit that. And they're getting mapped by attackers at the same time vendors are trying to put all of these things into production. Everyone wants to ship an MCP server.
I'm working on one for my startup, AIBox, so we're all trying to build them. Yuchen Jin posted this over on X: he said the same week we ship the connectors is the week somebody will publish the exploit for it, which is terrifying. What I will say, though: if you are vibe coding, if you're building tools with any of this, anything that touches LLMs in production, spend the weekend re-auditing. Go through every third-party connector your agents call, look at the trust boundaries, look at the prompt injection surfaces, and log everything that hits a tool call. I think this is going to become a category, not just a single incident; we will see this a lot. For any of us who want to survive here and keep our vibe-coded stuff up and safe, we need to treat connector security as a first-class concern. So this is a very important thing for us.

All right guys, that's everything for the show today. Thank you so much for tuning in. If you enjoyed the episode, it would mean the world to me if you could leave a rating and review wherever you get your podcasts. I hope you all have an incredible rest of your day, and make sure to go check out AIBox.ai if you want access to all of the top AI models in one place. Have a great day.
Latent Space AI
Episode: Google Stops AI Exploit, Nadella Testifies, OpenAI's New $4B Unit
Date: May 11, 2026
This episode of Latent Space AI dives into several groundbreaking developments in AI, cybersecurity, corporate alliances, and regulatory challenges. Major highlights include Google's interception of the first known AI-built zero-day exploit, Satya Nadella’s testimony in the high-stakes Elon Musk vs. OpenAI trial (with the release of previously unknown financial data), OpenAI's $4 billion commitment to a dedicated enterprise unit, updates on the European Union’s direct regulation talks with AI leaders, and a look at ambitious new ventures bringing orbital data centers to the fore.
This episode offers an incisive, up-to-the-minute look at the interplay between AI innovation, regulation, corporate competition, and cybersecurity. Essential listening for anyone following the front lines of the AI revolution.