A
When you manage procurement for multiple facilities, every order matters. But when it's for a hospital system, they matter even more. Grainger gets it and knows there's no time for managing multiple suppliers and no room for shipping delays. That's why Grainger offers millions of products with fast, dependable delivery so you can keep your facility stocked, safe, and running smoothly. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
B
You're listening to a podcast right now. Driving, working out, walking the dog. If you're into podcasts, chances are you have something to say too. With RSS.com, starting your own is free and easy. Upload an episode and we distribute it to Apple Podcasts, Spotify, Amazon Music, and hundreds more. Track your listeners, see where they're from, and start earning from ads like this, even with just 10 listeners a month. If you've been thinking about starting a podcast, this is your sign. Start free at RSS.com.
C
If you want to get access to this episode and my next 30 episodes all ad-free, go check out my podcast AI Chat. You can search for that on Spotify or Apple; it's AI Chat. I'm going to post all of these news episodes there, and I'm also posting interviews. I just interviewed the CEO of Cohere; they've raised over a billion dollars for their AI model, and we talked about what they're going to be spending the money on and the direction of the AI industry. So if you want to check it out with no ads, for free, it's AI Chat. Google and SpaceX are both in talks right now to put AI data centers into orbit, and SpaceX is pitching it as a core part of the $1.75 trillion valuation for their IPO planned later this year. This is wild when you consider that companies like Anthropic are at a $1 trillion valuation; SpaceX is almost double that. To be fair, they have xAI, they have SpaceX, they have X (formerly Twitter), a lot of assets inside this company, but it feels like they're really pushing to get the biggest valuation possible and make this a blockbuster IPO. We're getting into all of this. We also get into Project Suncatcher, the xAI acquisition, the Anthropic Memphis deal, and some really crazy counter cases for all of this. Also, Anthropic just opened a direct attack on Harvey, who is valued at $11 billion, and Legora, who's valued at $600 million. Both of them are raising more money, and Anthropic is putting out more AI tools for the legal space, directly competing with them. Amazon employees are doing what people are calling token maxing. It's a funny term, sure, but basically what's happened is they have AI quotas they have to hit for what percentage of their code is written by AI. And Amazon employees, rather than actually just doing that, are doing silly workarounds that make it look like they're generating a ton of AI code.
This is something I hope we don't see as a trend in the rest of the industry, although Anthropic is probably thrilled because they're making a lot of money off of it. We also have George Clooney, Tom Hanks, and Meryl Streep backing something called the Human Consent Standard. This is for likeness and voice, and it's something that Hollywood and other industries are very concerned about. And Anthropic is publicly disavowing eight different platforms selling unauthorized access to their stock as they get close to their funding round at a $1 trillion valuation. We'll kick this off with the Anthropic investment deal. So essentially, Anthropic is publicly disavowing, calling out, and naming names of eight different investment platforms that are marketing access to their shares. Basically, these platforms are saying, hey, you want access to Anthropic shares? We have some. Usually these are companies that have gone and purchased them on the secondary market or through other channels where they technically control the shares, but it was, say, an employee that sold to them; it wasn't really from Anthropic. Anthropic doesn't like that because they're saying, hey, if you want access to our shares, come buy them through us. The notice they put out came out today. They named Open Door Partners, Unicorn Exchange, Pachamana Capital, Lionheart Ventures, Hive, Forge Global, Sidecar, and Upmarket. They actually put out a whole quote: "Any sale or transfer of Anthropic stock or any interest in Anthropic stock offered by these firms is void and will not be recognized on our books and records." They are specifically banning SPVs as a vehicle.
And obviously the timing of this is really important as they have their IPO coming up. Anthropic is getting ready to close their $900 million round. They don't want people selling shares for more or less than that and taking any appetite away from the investors they're trying to get. Forge Global specifically pushed back on this. They said that they don't, quote, intermediate trades a company hasn't blessed, and they're asking to be removed from the alert Anthropic put out. So at least one of these eight has a legitimate complaint. I think the bigger problem is what nobody is talking about, and that is that there are a bunch of crypto exchanges, one of them being OKX, that have been listing pre-IPO perpetual futures. Basically, it's an imaginary, synthetic stock that tracks Anthropic's valuation without ever touching the actual shares. Right now, while these are private rounds of funding, they'll just peg it to whatever the most recent private round was. But in the future, once the stock is publicly traded, you can imagine it just keeps tracking the public price. Those all sit outside of this disavowal because they never need to transfer any shares. Someone could sell their secondary shares to someone else, and Anthropic says, we're not going to honor that. I don't really know how legal any of that is, although Anthropic seems to think they can just ignore those trades. But what they can't do is stop these quote-unquote synthetic traders, these crypto platforms and other platforms that are just making synthetic Anthropic-pegged prices, where you're buying and selling at that price with other people who are interested. It's kind of a crazy concept, but there's not much they can do about it. So that's also an interesting story.
Okay, the next thing I want to talk about is what's called the Human Consent Standard for AI. RSL Media launched this today, and a bunch of big celebrities have gotten on board: George Clooney, Tom Hanks, Meryl Streep, Viola Davis, Kristen Stewart, and a whole bunch of other creative artists and people in the movie and music industries. This is a registry going live in June where you can verify your identity and publish what AI systems are allowed to do with your likeness, your voice, your character, and your work. So it's kind of a discovery mechanism. It's kind of like robots.txt, the file websites already use to gate scrapers. In your site's robots.txt, you can say, hey Google, don't scrape my website, and even tell AI models not to scrape it, and it will kind of stop them, assuming they actually listen to you. This is a similar idea. RSL Media co-founder Eric Walter framed it this way, quote: "The human consent standard applies to the underlying work, identity, character or mark itself wherever it appears." So it's not gatekeeping a URL the way a robots.txt file would; it's basically gatekeeping the person. Their pitch is that it's going to work for the long tail too, not just famous people. So anyone, even if you're not famous, can gate their likeness from being used by AI. If I'm being honest about this concept, I'm trying to be a realist and maybe a little skeptical: robots.txt really doesn't have any teeth. Any well-behaved web crawler like Google is going to honor it, but bad ones don't, right? Every AI model in China is going to scrape the entire Internet. They don't care if you have a do-not-index; they're just going to get their stuff. There are no consequences.
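To make the robots.txt analogy concrete, here is a minimal sketch of the kind of file sites already publish to opt out of AI crawlers. GPTBot and Google-Extended are real crawler user-agent tokens; whether any given bot actually honors the file is exactly the voluntary-compliance gap being discussed here.

```text
# robots.txt — ask specific AI crawlers not to use this site.
# Compliance is voluntary: well-behaved crawlers honor it, bad ones ignore it.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else may crawl normally.
User-agent: *
Allow: /
```

The Human Consent Standard aims to do something similar, but attached to a person's identity rather than to a URL path.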
They're going to get all the data they can. I think the leverage comes when major model providers commit to checking the registry at training time, and zero have publicly committed to that yet. Say a million people have put their likeness and image into this thing; the providers would have to make sure none of their training data coincides with any of those people's likenesses. If Anthropic, OpenAI, ElevenLabs, and Google all said, okay, we're going to do this, that'd be fantastic. But right now no one has really opted in. It's voluntary compliance; it's not legally mandated, I don't think. If basically none of them do it, then it's just a paper trail for the next round of lawsuits. Anyways, a lot of people are saying this is a big nothing burger, that nothing will happen, but you have a lot of big celebrities backing it, so we'll see if it gains any traction. Perhaps just launching it, talking about it, and putting it out there is the first step, and then you can try to get commitments from the big AI firms, but so far none of them are signing up. Okay, let's talk about Amazon. Their engineers have basically been given a target to have more than 80% of developer code written by AI. Now, theoretically this sounds awesome, but whenever you give people incentives, they're going to find a ridiculous way to hit those targets. The way this is happening right now is that Amazon recently rolled out something called Meshclaw. It's basically a clone of OpenClaw. It connects to Slack, email, and code deployments. They give this to all of their developers, and they also set up a dashboard that tracks how many tokens each engineer is burning.
By the way, as a caveat, over at Nvidia, CEO Jensen Huang said, hey, look, if I'm hiring a developer and paying them $250,000 a year, they better be using at least $250,000 a year in tokens; otherwise I don't know why I'm paying them. So we have that as a backdrop from the top people in tech, and all of these developers are now incentivized to run up the biggest bills possible. According to the Financial Times, which got some source documents today, employees are now running Meshclaw in loops to inflate their token counts. They're calling it token maxing. One Amazon employee quoted in the report said, quote: "Some people are just using Meshclaw to maximize their token usage. Managers are looking at it. When they track usage, it creates perverse incentives. And some people are very competitive about it." Amazon says token stats aren't supposed to factor into performance reviews. I don't think anyone actually believes that, especially with comments like that coming out of Nvidia. Meta employees are apparently doing the exact same thing. I think the reason this is happening is that Amazon is expected to spend $200 billion in capex in 2026, most of it on AI infrastructure, and a lot of executives need a number to point to when the board asks whether the spend is producing change. So the lesson I'm taking from all of this, and it's something that has been true for many, many years: when you measure something, it becomes a target and stops being a good measure. If you're at a company where AI usage is a mandate and you're measuring it, everyone's going to start using more AI. Are they going to use it for useful things? Or are they going to run it in loops?
I think token count is a pretty bad signal, much worse than shipped code, or closed tickets, or resolved customer issues. But any metric you put out there can be gamed in some way, and people will game it, which is ridiculous. The companies getting really solid productivity right now are measuring outputs; then you get the companies measuring tokens and getting token maxing. So probably not a metric I would encourage people to rely on. Okay, Anthropic has just opened a direct attack on Harvey and Legora, two players in the legal AI space. I actually have some friends high up at some big legal AI software firms, and they're honestly pretty rattled by a lot of the stuff Anthropic is putting out. Just today, Anthropic dropped a big expansion of Claude for Legal. They're going straight for the two best-funded legal AI startups. Harvey raised $200 million in March at an $11 billion valuation. Legora closed a $600 million Series D last month and ran a big ad campaign fronted by Jude Law. That's about $1 billion in fresh capital in this one industry. Anthropic's response? Anthropic and everyone else is saying, look, so much money is going into this industry; there's probably a lot of money to be made here. So they're shipping a new set of MCP connectors that plug Claude directly into DocuSign for document management, into Box for file search, and into Thomson Reuters Westlaw for case law research. All paying Claude tiers are going to get this, so it's not just for lawyers; the plugin suite covers commercial, privacy, corporate, employment, product, and AI governance work.
And I think the strategy here is really interesting, because Harvey and Legora are basically selling vertically integrated platforms built specifically for law firms, while Anthropic is selling Claude plus some connectors, so firms can keep their existing documents and their existing research stack. It's a model-versus-platform fight, and I don't think any startup wants to be fighting with Claude or with OpenAI right now. But it's not a feature-versus-feature fight, right? A spokesperson over at Anthropic talked about the legal sector and said it is, quote, "facing mounting pressure to adopt AI and the firms and in-house teams that move are pulling ahead fast." I think the reason this legal market is so important for Anthropic right now is, number one, it's really text-heavy. It's built on billable-hour work, which is exactly the kind of surface area generative AI is good at. It's also where hallucination has been the most costly; you can't have anything hallucinate. California recently fined a lawyer who used ChatGPT to draft an appeal that turned out to be full of fake quotes. So Anthropic's pitch is: look, we have our model, it's great, but it's also grounded in the Westlaw data set, and you can use your own firm's documents too, and between the two you should reduce your errors significantly. It's going to be interesting to see whether they get strong usage out of the legal firms. From what I'm hearing from friends in this industry, Anthropic is probably going to mop the floor with this one. I think a lot of people will love it. Now, is it going to kill every other piece of AI legal software? No, absolutely not.
I think startups will be able to go way deeper than Anthropic, because Anthropic is trying to hit a hundred verticals. And if Anthropic focuses too heavily on this, it will probably distract them from other areas, like you saw with OpenAI getting distracted with Sora and a hundred other little products, which let Anthropic run past them. So I think they should focus on the core product of making Claude better for everybody. But these side quests of going and trying to smash all the startups in the legal AI space are pretty interesting. If you're still paying for ChatGPT, Claude, Gemini, Grok, ElevenLabs, or any other AI models, I would love for you to check out my startup, AI Box. I've basically put 80 of the top AI models into one place, and you can chat with them all in the same chat thread. I use Claude because I think it has the best tone and really good reasoning for a lot of my text and knowledge-base work, but it doesn't have image generation. So sometimes I'll be creating an article and I need a cover image for it, and I have to go over to ChatGPT; then maybe I want the article read aloud, and I have to go over to ElevenLabs, and you're switching between all these different tabs. So I created AI Box to solve that problem. In one thread you can hit ElevenLabs for audio, you can hit OpenAI for images along with dozens of other image and audio models, and you have music and video in there from Google Veo 3. A lot of cool stuff, all in one place, and it's only $8.99 a month. So I hope this saves you a ton of money; you can get hands-on with basically every AI model you should be using right now. And honestly, even if you do pay for, say, Claude and Gemini and Grok, I would say cancel all but your favorite one, right? Claude's kind of hard to cancel when you have it running on Claude Cowork and it can control your whole computer. Or cancel them all.
But keep AI Box so you can access everything in one place, maybe alongside your favorite one. I'll leave a link in the description to AI Box if you want to check it out. Okay, let's talk about what's going on with Google and SpaceX. This is probably one of the wildest stories today. The Wall Street Journal reports that Google and SpaceX are in active talks to launch AI data centers into orbit. The timing obviously is meant to coincide with the IPO; SpaceX is prepping their $1.75 trillion IPO, which is going to happen later this year. People have been talking about this IPO for many, many years, because SpaceX has just kept growing. It's been basically the most valuable private company; Stripe was giving it a run for its money a while ago, and now they've made all these acquisitions. They've got xAI and X in there, all these different assets, and Starlink, so it's ballooned to close to $2 trillion. A big part of this is the orbital compute piece. Anthropic just leased a bunch of compute from SpaceX over in Memphis, and in addition to all of the Memphis compute, they're also saying, hey, maybe we'll get some of this orbital compute. The big strategy here is that orbital compute is a big part of SpaceX's pitch: if they can say Anthropic wants to buy it from us, Google wants to buy it from us, they can start lining up all these different customers. Google has something called Project Suncatcher, a compute satellite program Google announced late last year, with prototype satellites slated to fly next year, in 2027. The SpaceX talks would give Suncatcher dedicated launch capacity at scale. So this is interesting. I think Google's also talking to some other launch providers to hedge this.
But SpaceX is the big one because they've launched the most satellites and run the most advanced space program. If you're building anything that relies on the cost of compute for AI, then the thing we're all going to have to start thinking about pretty quickly is the cost of launching satellites into space. That's going to be a variable that has to be factored into your capex planning. It's super crazy. But anyways, that's everything for the show today, guys. If you enjoyed it, it would really help the show out so much if you'd leave a rating or review wherever you get your podcasts. If you're on Apple, drop a comment; I read them all and really appreciate them. If you're on Spotify, it's the About tab, and I think you have to have listened to at least three episodes first. So if you've listened to three episodes on Spotify, hit the About tab and drop me some stars; I'd really appreciate it. And as always, make sure to check out AI Box if you want access to all the different AI models in one place for $8.99 a month. It's a killer deal, and I hope it helps your productivity with AI. All right, I'll catch you guys in the next episode.
The AI Podcast – Episode Summary
Amazon Devs "tokenmaxxing", SpaceX & Google Collab, Anthropic Legal Fight
May 12, 2026
This episode of The AI Podcast explores several of the biggest news stories shaking up the AI industry right now. The host dives into Amazon's "tokenmaxxing" phenomenon among developers, the brewing legal and investment battles around Anthropic’s sky-high valuation, Google and SpaceX’s audacious plans for orbital AI data centers, as well as the industry’s response to AI’s usage of personal likenesses with a new “Human Consent Standard.” The episode blends frontline news, critical analysis, and candid observations on the incentives, opportunities, and pitfalls facing major tech players.
[03:06 – 08:15]
Disavowing Unofficial Share Sales
Mixed Battlefront: Real vs. Synthetic Ownership
The host points out pre-IPO perpetual futures are being traded on crypto exchanges (e.g., OKX). These synthetic stocks simply track Anthropic’s notional valuation, completely outside Anthropic’s control or legal reach.
Host commentary: “They actually never need to transfer any sort of shares. It’s kind of a crazy concept, but there’s not much they can do about it.” ([04:54])
Strategic Timing
[05:36 – 08:15]
RSL Media, with high-profile backing (George Clooney, Tom Hanks, Meryl Streep, Viola Davis, Kristen Stewart, and others), is launching a voluntary “Human Consent Standard.”
It's a digital registry (launching June) for creators to assert if and how their voice, likeness, character, or work can be used in AI systems.
Host analogy: “It’s kind of like robots.txt… Same as a file websites use to stop web scrapers, but for people’s identities.” ([06:19])
Critique: The host is skeptical of its effectiveness: “robots.txt really doesn’t have any teeth... every AI model in China is going to scrape the entire internet. They don’t care if you have a do not index.” ([06:54])
Opportunity: If major model providers commit to referencing the registry during training, it “would be fantastic.” As of now, none have.
Host’s bottom line: “A lot of people are basically saying this is a big nothing burger, that nothing will happen, but you have a lot of big celebrities backing it. So we’ll see if this gains traction.” ([07:41])
[08:15 – 12:10]
AI Quotas, Perverse Incentives, and “Meshclaw”
Amazon set a target: over 80% of developer code should be written by AI.
To track this, Amazon rolled out a tool called “Meshclaw” (a clone of OpenClaw) that logs and tracks AI token usage across Slack, email, and code deployments.
Host explanation: “All of these developers are now incentivized to run up the biggest bills possible.” ([09:45])
Employees are exploiting this system—running Meshclaw in loops to inflate token usage (“token maxing”).
Amazon dev (quoting Financial Times): “Some people are just using Meshclaw to maximize their token usage. Managers are looking at it. When they track usage, it creates perverse incentives. And some people are very competitive about it.” ([10:27])
Industry Context
Analysis & Critique
[12:10 – 15:15]
Cloud for Legal: Expansion and Integration
Anthropic is rolling out an expanded legal AI offering ("Claude for Legal"), directly challenging Harvey (valued at $11B) and Legora (valued at $600M).
Integrations: Direct connectors to DocuSign, Box, and Thomson Reuters Westlaw unlock document management, file search, and case law research for paying Claude users.
Host’s industry take: “I actually have some friends high up at some big AI software legal firms, and they're... pretty rattled.” ([12:29])
Anthropic’s positioning: Model + connectors, letting legal firms use existing tools rather than starting from scratch with a new platform.
Market Importance & Model vs. Platform Fight
The legal sector is highly tech-forward and billable-hour heavy—perfect for AI, but high risk if models hallucinate.
Host’s take: “Anthropic and everyone else is saying, look, so much money is going into this industry. There’s probably a lot of money in this industry.” ([13:19])
Anthropic’s connectors vs. fully integrated vertical platforms signal a broader model vs. platform strategy battle.
Risks: While this move could allow Anthropic to “mop the floor” with competitors, a focus on too many verticals could distract from core product excellence.
[15:41 – 17:57]
The Next Compute Frontier
The Wall Street Journal reports advanced talks between Google and SpaceX to send AI data centers into orbit.
This move is intertwined with SpaceX’s upcoming $1.75T IPO—orbital compute is pitched as a differentiator.
Host explanation: “If you’re building anything that relies on the cost of compute for AI, then…the cost of launching satellites into space…that’s going to be a variable.” ([17:28])
Project Suncatcher: Google’s prototype compute satellite initiative, with launches coming in 2027. SpaceX to provide dedicated lift.
Industry dynamics: SpaceX uses Starlink and xAI assets to justify a valuation close to $2T. Partnerships with Anthropic (Memphis region) and Google highlight demand for cutting-edge compute.
Notable Quotes
On Anthropic stock disavowals:
“Any sale or transfer of Anthropic stock or any interest in Anthropic stock offered by these firms is void and will not be recognized on our books and records.” — Host quoting Anthropic’s statement ([03:52])
On tokenmaxxing:
“Some people are just using Meshclaw to maximize their token usage. Managers are looking at it... creates perverse incentives. And some people are very competitive about it.” — Reported Amazon employee ([10:27])
Jensen Huang’s philosophy:
“If I’m hiring a developer and I’m paying them $250,000 a year, they better be using at least $250,000 a year in tokens.” — Jensen Huang, Nvidia CEO ([09:35])
On AI productivity incentives:
“When you measure something, it becomes a target and it stops becoming a measure.” — Host ([11:23])
Cynicism on digital likeness registries:
“robots.txt really doesn’t have any teeth... every AI model in China is going to scrape the entire internet. They don’t care if you have a do not index.” — Host ([06:54])
Tone & Style
Direct, analytical, and occasionally skeptical. The host blends breaking news with deeper analysis, personal experience, and pointed critique, especially about tech industry fads, incentive engineering mishaps, and the practical realities behind big announcements.
This episode provides a rich and dynamic snapshot of AI’s current business, legal, and technical battlegrounds, underscoring the outsized ambitions and risks facing the space. Listeners walk away with a nuanced understanding of why news like “tokenmaxxing,” orbital data centers, and celebrity-backed standards matter, and how real-world incentives continue to shape AI’s evolution in surprising—and sometimes absurd—ways.