
Google and SpaceX are both in talks right now to put AI data centers into orbit, and SpaceX is pitching it as a big core part of their $1.75 trillion valuation for their IPO planned later this year. This is wild when you consider that companies like Anthropic are at a $1 trillion valuation. SpaceX is almost double that now, to be fair: they have xAI, they have SpaceX, they have X, the, you know, formerly Twitter. They have a lot of assets inside of this company, but it feels like they're really pushing to get the biggest valuation possible and make this a blockbuster IPO. We're getting into all of this. We also get into Project Suncatcher, the xAI acquisition, the Anthropic Memphis deal, and some really crazy counter cases for all of this. Also, Anthropic just opened a direct attack on Harvey, who is valued at $11 billion, and Legora, who just closed a $600 million Series D. Both of them are raising more money, and Anthropic is putting out more AI tools for the legal space, which directly competes with them. Amazon employees are doing what people are calling token maxing. It's a funny term, sure, but basically what's happened is they have AI quotas that they have to hit for what percentage of their code is written by AI. And Amazon employees, rather than actually just doing that, are doing silly workarounds that make it look like they're generating a ton of AI code. This is something I hope we don't see as a trend in the rest of the industry, although Anthropic is probably thrilled, because they're making a lot of money off of this. We also have George Clooney, Tom Hanks and Meryl Streep backing something called the Human Consent Standard. This is for likeness and voice, and it's something that Hollywood and other places are very concerned about. And Anthropic is publicly disavowing eight different platforms selling unauthorized access to their stock.
As they get close to their $1 trillion round of funding, we'll kick this off with the Anthropic investment deal. Essentially, Anthropic is publicly disavowing, calling out, and naming names, by the way, of eight different investment platforms that are marketing access to their shares. Basically they're like, hey, you want access to Anthropic shares? We have some. Usually these are companies that have gone and purchased them on the secondary market or through other ways where they technically control the shares, but it was, like, an employee that sold it to them; it didn't really come from Anthropic. Anthropic doesn't like that, because they're like, hey, if you want access to our shares, come buy them through us. The notice they put out came out today. They named Open Door Partners, Unicorn Exchange, Patromana Capital, Lionheart Ventures, Hive, Forge Global, Sidecar and Upmarket. They actually put out a whole quote and said any sale or transfer of Anthropic stock, or any interest in Anthropic stock, offered by these firms is void and will not be recognized on our books and records. They are specifically banning SPVs as a vehicle. And obviously the timing of this is really important, as they have their IPO coming up and Anthropic is getting ready to close their $900 million valuation round. They don't want people selling shares for more or less than that and taking any appetite away from the investors they're trying to get. Forge Global specifically pushed back on this. They said that they don't, quote, intermediate trades a company hasn't blessed, and they're asking to be removed from the alert that Anthropic put out. So at least one of these eight has a legitimate complaint.
I think the bigger problem is what nobody is talking about, and that is there's a bunch of crypto exchanges, one of them being OKX, and they've basically listed pre-IPO perpetual futures. Basically, they're just an imaginary, fake stock that tracks Anthropic's valuation without ever touching the actual stock. And right now, while they're doing these private rounds of funding, they'll just peg it to whatever the most recent private round was. But in the future, once it's publicly traded, you can imagine it could just keep tracking the stock price. Those all sit outside of this kind of disavowal because they never actually need to transfer any shares, right? Someone could sell their secondary shares to someone else and Anthropic's like, we're not going to honor that. I don't really know how legal any of that is, although Anthropic seems to think they can just ignore those. But what they can't do is stop these quote-unquote synthetic traders, these crypto platforms that are just making synthetic, Anthropic-pegged prices, where you're just buying and selling at that price from other people that are interested in it. I mean, it's kind of a crazy concept, but there's not much they can do about it. So that's also an interesting story. Okay, the next thing I want to talk about is what is called the Human Consent Standard for AI. RSL Media launched this today, and a bunch of big celebrities got on board: George Clooney, Tom Hanks, Meryl Streep, Viola Davis, Kristen Stewart, a whole bunch of different creative artists and, you know, people in the movie industry and the music industry. This is a registry that is going to go live in June where you can verify your identity and publish what AI systems are allowed to do with your likeness, your voice, your character and your work.
So this is kind of a discovery mechanism. It's kind of like robots.txt, the same kind of file websites are already using to gate scrapers. So, like, you can put in the robots.txt on your website, hey Google, don't scrape my website, and even, AI models, don't scrape my website. And it will kind of stop them, I mean, assuming they actually listen to you. So this is a similar idea. RSL Media co-founder Eric Walter framed it and said, basically, quote, the human consent standard applies to the underlying work, identity, character or mark itself wherever it appears. So it's not gatekeeping a URL like a robots.txt file would do; it's basically gatekeeping the person. Their pitch is that it's going to work for the long tail too, not just famous people. So anyone, even if you're not famous, can kind of gatekeep their stuff, their likeness, from being used by AI. If I'm being honest about this concept, I'm trying to be a realist and maybe a little bit skeptical, but robots.txt really doesn't have any teeth. Any well-behaved web crawler like Google's is going to honor it, but bad ones don't, right? Every AI model out of China is going to scrape the entire Internet. They don't care if you have a do-not-index directive; they're just going to get their stuff. There are no consequences, so they're going to get all the data they can. I think the leverage comes when major model providers commit to checking the registry at training time, and zero have publicly committed to that yet, right? Like, if there's a million people that have put their likeness and image inside of this thing, make sure that none of your training data coincides with any of those people's likeness. And if Anthropic and OpenAI and ElevenLabs and Google are all like, okay, we're going to do this, that'd be fantastic.
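To make the robots.txt comparison concrete: it's just a plain-text file of per-crawler rules that well-behaved bots are supposed to check before fetching anything. Here's a minimal sketch using Python's standard-library parser; the rules, URL, and the non-AI crawler name are illustrative (GPTBot is one real example of an AI-crawler user agent).

```python
from urllib import robotparser

# A hypothetical robots.txt that blocks one AI crawler but allows everyone else.
RULES = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# A compliant AI crawler would see it is denied; a generic crawler is allowed.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The catch, as discussed above, is that nothing in this mechanism is enforced: the file only constrains crawlers that voluntarily call the equivalent of `can_fetch` before scraping.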
But right now, no one's really opted into that. It's just voluntary compliance; it's not legally mandated, I don't think. And if basically none of them do it, then it's just kind of a paper trail for the next round of lawsuits. So anyways, a lot of people are basically saying that this is a big nothing burger, that nothing will happen, but you have a lot of big celebrities backing it, so we'll see if it gains any traction. And perhaps just launching it, talking about it and putting it out there is the first step, and then you could try to go get the commitments from the big AI firms. But so far, none of them are signing up for it. Okay, let's talk about Amazon. Their engineers have basically been given a target to have more than 80% of developer code written by AI. Now, theoretically, this sounds awesome, but whenever you give people incentives, they're going to find a ridiculous way to hit those targets. Anyways, the way this is happening is that Amazon recently rolled out something called Meshclaw. It's basically a clone of OpenClaw that connects to Slack, email and code deployments. They gave this to all of their developers, and they also set up a dashboard that tracks how many tokens each engineer is burning. By the way, as a caveat, over at Nvidia, the CEO Jensen Huang said, hey, look, if I'm hiring a developer and I'm paying them $250,000 a year, they'd better be using at least $250,000 a year in tokens; otherwise, I don't know why I'm paying them. So we have this as a backdrop from the top people in tech. And so all of these developers are now incentivized to run up the biggest bills possible. So what's happening? According to the Financial Times, which got some source documents today, employees are now running Meshclaw, this new thing that Amazon rolled out, in loops to inflate their token counts. They're calling it token maxing.
One Amazon employee quoted in this report said, quote, some people are just using Meshclaw to maximize their token usage. Managers are looking at it when they track usage. It creates perverse incentives, and some people are very competitive about it. Now, Amazon says token stats aren't supposed to factor into performance reviews. I don't think anyone actually believes that, especially when you have comments like that coming out of Nvidia. Meta employees apparently are doing the exact same thing. I think the reason this is happening is that Amazon is expected to spend $200 billion in capex in 2026, most of it on AI infrastructure, and a lot of the executives need a number to point to when the board asks if the spend is producing change. So the lesson I'm getting from all of this, and this has been true for many, many years: when a measure becomes a target, it stops being a good measure. If you're at a company and AI usage is a mandate and you're measuring it, everyone's going to start using more AI. Are they going to use it for useful things, or are they going to run it in loops? I think token count is a pretty bad signal, much worse than shipped code, maybe, or closed tickets or resolved customer issues. But any metric you put out there can be gamed in some way, and people will game it, which is ridiculous. I think the companies getting really solid productivity right now are measuring outputs, and then you get the companies that are just measuring token maxing. So, probably not a metric I would encourage people to rely on. Okay, Anthropic has just opened a direct attack on Harvey and Legora, two players in the legal AI space.
And I actually have some friends high up at some big legal AI software firms, and they're honestly pretty rattled by a lot of the stuff that Anthropic is putting out. Just today, Anthropic dropped a big expansion of Claude for legal work, going straight for the two best-funded legal AI startups. Harvey raised $200 million in March at an $11 billion valuation. Legora closed a $600 million Series D last month, with a big, huge ad campaign fronted by Jude Law. That's about $1 billion in fresh capital just in this one particular industry. Anthropic's response to all of this, I mean, Anthropic and everyone else is seeing that so much money is going into this industry, so there's probably a lot of money in this industry. And so they are basically shipping new MCP connectors that plug Claude directly into DocuSign for document management, into Box for file search, and into Thomson Reuters Westlaw for case law research. All paying Claude tiers are going to get this, so it's not just for lawyers; the plugin suite covers commercial, privacy, corporate, employment, product and AI governance work. And I think the strategy here is really interesting, because Harvey and Legora are basically selling vertically integrated platforms built specifically for law firms, and Anthropic is selling Claude plus some connectors, so firms can basically keep their existing documents and their existing research stack. And this is interesting because it's kind of a model-versus-platform fight, and I don't think any startup really wants to be fighting with Claude or with OpenAI right now. But it's not a feature-versus-feature fight, right? A spokesperson over at Anthropic talked about the legal sector and said that it is, quote, facing mounting pressure to adopt AI, and the firms and in-house teams that move are pulling ahead fast.
I think the reason this legal market is so important for Anthropic right now is, number one, it's really text-heavy. There's the billable-hour work, which is exactly the kind of surface area that gen AI is really good at. It's also where hallucination has been the most costly; you can't have anything hallucinate. California recently fined a lawyer who used ChatGPT to draft an appeal that turned out to be full of fake quotes. So Anthropic's pitch right now is, look, we have our model, it's great, but it's also grounded in the Westlaw data set, and you can also use your own firm's documents, and between the two of those you should reduce your errors significantly. It's going to be interesting to see if they can get some strong usage out of a lot of the legal firms. From what I'm hearing from friends in this industry, Anthropic is probably going to mop the floor with this one. I think a lot of people love it. Now, is it going to kill every other piece of AI legal software? No, absolutely not. I think people are going to be able to go way deeper than Anthropic, because Anthropic is trying to hit 100 verticals. And if Anthropic focuses too heavily on this, it will probably distract them from other areas, like you saw with OpenAI getting distracted with Sora and 100 other little products, which let Anthropic run past them. So I think they should focus on the core product of making Claude better for everybody. But these side quests of going after all of the startups in the legal AI space are pretty interesting. If you're still paying for ChatGPT, Claude, Gemini, Grok, ElevenLabs or any other AI models, I would love for you to check out my startup, AI Box AI. I basically put 80 of the top AI models into one place, and you can chat with them all in the same chat thread.
So I use Claude because I think it has the best tone and really good reasoning for a lot of my text and knowledge-base work. But it doesn't have image generation, so sometimes I'll be creating an article and I need a cover image for it, and I've got to go over to ChatGPT, and then maybe I want to have the article read aloud and I've got to go over to ElevenLabs. You've got to switch between all these different tabs. I created AI Box to solve that problem: in one thread you can hit ElevenLabs for audio, you can hit OpenAI for images and dozens of other image models, dozens of other audio models. You have music, you have video in there from Google Veo 3, so a lot of cool stuff, all in one place. And it's only $8.99 a month, so I hope this saves you a ton of money. You can get hands-on with basically every AI model you should be using right now. And honestly, even if you do pay for, let's say, Claude and Gemini and Grok, I would say cancel all but your favorite one, right? Claude's kind of hard to cancel when you have it running in Claude Cowork and it can control your whole computer. Cancel the rest, but keep AI Box to access everything in one place, maybe alongside your favorite one. I'll leave a link in the description to AI Box AI if you want to go check that out. Okay, let's talk about what's going on with Google and SpaceX. This is probably one of the wildest stories today. The Wall Street Journal has reported that Google and SpaceX are in active talks to launch AI data centers into orbit. The timing of this is obviously trying to coincide with the IPO. SpaceX is prepping their $1.75 trillion IPO, which is going to happen later this year. And people have been talking about this IPO for many, many years, because SpaceX has just kept growing; it's basically been the most valuable private company. Stripe gave it a run for its money a while ago, and now they've made all these acquisitions.
They've got xAI and X in there and all these different assets and Starlink. So it's ballooned to close to $2 trillion. Okay. But a big part of this is the orbital compute piece. It feels like Anthropic just leased a bunch of compute from SpaceX over in Memphis, and in addition to all of the Memphis compute, they're also saying, hey look, maybe we'll get some of this orbital compute. And I think the big strategy here is that orbital compute is a big part of SpaceX's pitch. So if they're saying, look, Anthropic wants to buy it from us, Google wants to buy it from us, they can start lining up all of these different customers. Google has something called Project Suncatcher, a compute satellite program that Google announced late last year. The prototype satellites are slated to fly next year, in 2027, and the SpaceX talks would give Suncatcher basically dedicated launch capacity at scale. So this is interesting. I think Google's also talking to some other launch providers to hedge this, but SpaceX is the big one because they've launched the most satellites into space and have, you know, the most advanced space program, yada yada. If you're building anything that relies on the cost of AI compute, then the thing we're all going to have to start thinking about pretty quick is the cost of launching satellites into space. That's going to be a variable you'll have to include when you're doing your capex planning. It's super crazy. But anyways, that's everything for the show today, guys. If you enjoyed it, it would really help the show out so much if you'd leave a rating or review wherever you get your podcasts. If you're on Apple, drop a comment; I read them all, and I really appreciate them.
If you're on Spotify, it's the About tab, and I think you have to have listened to, like, three episodes. So if you've listened to three episodes on Spotify, you hit the About tab and you can drop me some stars. I'd really appreciate it. And as always, make sure to go check out AI Box AI if you want access to all of the different AI models in one place for $8.99 a month. It's a killer deal, and I hope it helps you with your productivity in AI. Alright, I'll catch you all in the next episode.
Latent Space AI – Episode Summary
Amazon Devs "tokenmaxxing," SpaceX & Google Collab, Anthropic Legal Fight
May 12, 2026
Episode Overview
This episode of Latent Space AI explores a whirlwind of recent happenings in the artificial intelligence world. The host covers a high-stakes Anthropic legal battle and pre-IPO maneuvering, Amazon's "tokenmaxxing" phenomenon among developers, celebrity-driven moves to protect human likeness in AI, and a visionary collaboration between Google and SpaceX to launch AI data centers into orbit. The show offers critical analysis, industry insider perspectives, and sharp skepticism, highlighting both the opportunities and the pitfalls of the current AI gold rush.
Key Discussion Points & Insights
[00:45 – 08:22]
Anthropic’s Disavowal of Investment Platforms
Broader Implications
[08:23 – 13:38]
Celebrity-Backed Human Consent Registry
Skepticism About Effectiveness
[13:39 – 19:45]
Amazon’s AI Code Quotas
Industry Context
[19:46 – 24:52]
Anthropic’s Expansion Into Legal AI
Strategic Dynamics
Claims & Industry Impact
[24:53 – 29:25]
Google & SpaceX Orbital Data Center Talks
Strategic Implications
Tone and Style
The episode maintains a skeptical, insider's tone—balancing informed industry reporting with a clear-eyed critique of hype, legal skirmishes, and misaligned incentives. The host amplifies voices from inside tech and legal circles, offers trenchant commentary, and rounds out the analysis with where the AI winds may blow next.
Summary Takeaway
This episode underscores the current era of AI: one characterized by trillion-dollar valuations, fierce rivalry, and momentous decisions—often clouded by hype and regulatory ambiguity. The stakes are rising not just economically, but ethically and infrastructurally, as the space race for AI moves (literally) off-planet. Whether you’re tracking legal drama, technical arm-wrestling, or the next big leap for AI compute, this episode distills the latest twists with clarity and insight.