Transcript
A (0:00)
Today on the AI Daily Brief, OpenAI proposes a new deal. Meanwhile, in the headlines, Anthropic's revenue has surged yet again to $30 billion annualized. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Blitzy, AssemblyAI, and Zencoder. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. Lastly, two other quick announcements before we move on. As I mentioned yesterday, Cohort 2 of our Enterprise Claw program is now open. You can find out about that at enterpriseclaw.ai. And the latest AI Pulse survey is out. This is all about how you used AI in March. This will now be the third month that we are doing this, and we're starting to get really good longitudinal results from it. You can find the link at aidailybrief.ai; it's a big blinking banner right under the menu items. This will be open for a few days, and I would so appreciate it if you would go tell us how you used AI. And of course, the people who contribute to the survey will get access to the results first. Now, with that out of the way, let's talk some turkey. We kick off today with a big update in the competition between the labs, as Anthropic has announced that they've now reached $30 billion in ARR. It was actually tucked into a blog post about their new deal with Google and Broadcom, which we'll cover in just a minute, but that is a 3x increase since the end of last year and up 58% since the end of February. Now, according to the latest numbers that we have from OpenAI, that suggests that Anthropic has flipped them to have a higher annualized run rate, although we've also heard in the past that the two companies don't calculate things exactly the same way. 
And you better believe that if they haven't actually gone ahead of OpenAI in revenue, we will hear from OpenAI about it very soon. Now, this all comes as the financials for both of these companies come under much greater scrutiny as they head towards an eventual IPO at the end of this year or the beginning of next. On Monday, the Wall Street Journal published a deep dive into OpenAI and Anthropic's numbers, sourced from financial disclosures around each company's recent fundraising. The key focus was on training costs, which are sky high for both companies. OpenAI expects to spend around $30 billion on model training this year, which is triple what they spent last year. Anthropic's projected training costs are relatively more modest, but still set to almost triple, reaching $28 billion by 2028. Now, while both training budgets are massive, it's notable that OpenAI is forecasting costs to go up on a completely different level than Anthropic. Because training costs are so high, both companies are providing an alternate accounting of profitability that excludes them. Without training costs, both OpenAI and Anthropic are on track to eke out a small profit this year, with that profit accelerating moving forward. Not everyone loves this financial engineering. Ram Ahluwalia sums up the feeling of many investors when he writes: OpenAI and Anthropic are incredibly profitable if you just strip out the training and inference costs. This business model is equivalent to running a passenger airline, except you need to replace your jets every six months. Bizarre to have another definition of earnings simply because we don't like the costs. Now, in terms of top-line revenue, both firms expect to double revenue this year and are forecasting further doublings over the next few years. Notably, Anthropic's revenue is almost entirely from enterprise customers, and they forecast that to continue effectively indefinitely. 
OpenAI's revenue is more balanced than it used to be, but still skews towards the consumer. They do expect enterprise and consumer revenue to balance out over time, however. For the moment, this means OpenAI is spending money on inference for a ton of free users that Anthropic doesn't have to carry. OpenAI expects it to take until 2030 for them to turn cash-flow positive, while Anthropic is forecasting a profit of the old, well-understood variety by 2028. Now, the Wall Street Journal's analysis here is not particularly novel; we've had the rough contours of these financials from other sources already. What's more notable is that Wall Street is starting to analyze these companies as public market behemoths rather than growth-stage startups. The Journal had a very clear spin on the analysis, summed up by this closing line: both OpenAI and Anthropic will burn through a giant amount of cash in the coming years and are counting on their IPO investors to help buoy their businesses. TLDR: that is going to be the default narrative these companies fight against during their IPO and over the next few years. Still, for many people, the big story here is this massive new Anthropic number. Fleeting Bits points out: Anthropic is growing at an annualized 9,700%. This is the fastest revenue growth at this scale in history. I don't know how to communicate the significance of Anthropic's growth rate at this scale without sounding hyperbolic. I asked Claude, and the best comparison that I could find was Nvidia, which grew at a 1,240% annualized rate during its best individual quarter of growth ever, which was Q2 of fiscal year 2024. As John Arnold puts it: hard to believe that just 18 months ago, Anthropic was broadly considered the odd man out of the AI race, with an ambiguous business plan and no clear funding model. Not so anymore. Now, on the back of soaring usage, Anthropic has signed a massive new compute partnership with Google and Broadcom. 
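For listeners who want to sanity-check that annualized figure, here is a quick back-of-the-envelope. This sketch is mine, not Fleeting Bits'; the compounding window is an assumption (the announcement implies the tripling happened over roughly a quarter), and as the output shows, small changes to that window swing the annualized rate enormously, which is worth remembering whenever you see a number like 9,700%.

```python
# Back-of-the-envelope: annualizing a revenue growth multiple.
# Assumption (mine): Anthropic's reported 3x revenue jump happened over
# roughly three months; the exact window is not stated, so we try a few.

def annualized_growth_pct(multiple: float, months: float) -> float:
    """Compound a growth multiple observed over `months` out to a full
    12 months, returned as a percentage growth rate."""
    return (multiple ** (12 / months) - 1) * 100

for months in (2.75, 3.0, 3.25):
    rate = annualized_growth_pct(3.0, months)
    print(f"3x over {months} months -> ~{rate:,.0f}% annualized")
```

A 3x jump over exactly one quarter annualizes to 3^4, i.e. 81x, or about 8,000% growth; shave a week or two off the window and you land in the neighborhood of the quoted 9,700%. The point is not the precise figure but how sensitive annualization is to the measurement window.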
Anthropic announced on Monday that they've expanded their existing partnership to add multiple gigawatts of capacity set to come online from 2027. The Wall Street Journal added that the precise number is 3.5 gigawatts, alongside the reveal that revenue had tripled to a $30 billion run rate. Anthropic also noted that enterprise spend specifically is skyrocketing. During their fundraising announcement in February, Anthropic boasted that 500 enterprise customers had annual spends above a million dollars. Less than two months later, that figure has doubled to 1,000 customers. Regarding the compute plans, Anthropic will build the majority of their new data centers in the U.S. The deal will expand Anthropic's commitment to deploying Google's TPUs, which are manufactured by Broadcom. Anthropic already began deploying TPUs in the fall and uses them exclusively for inference. Their training clusters are exclusively developed and operated by AWS, and that partnership remains ongoing. For Anthropic, the deal is obviously necessary. Their capacity constraints have become a huge problem this year, so they need pretty much every chip they can get their hands on. Yet for Google and Broadcom, this is arguably even more important. Google set out to build a new business around external TPU sales. Last year, many argued that they didn't have the sales or support staff to compete with Nvidia or AMD and would face a hard slog to set up a new business line. Yet now, in a single deal, Google has built a multi-billion dollar chip business around a solo customer. Broadcom, meanwhile, has guaranteed demand as long as Anthropic keeps growing. Mohammed Hasan sums it up: the AI arms race just turned into a full-on power plant competition. Speaking of Google, after releasing their new open-source small model Gemma 4 last week, the company has wasted no time in productizing it. On Monday, they released an AI dictation app called Google AI Edge Eloquent. 
The product competes with things like Wispr Flow, allowing users to do live AI-assisted dictation on their phone. Edge Eloquent can filter out filler words, clean up phrasing to convey the intended message, and store custom jargon and keywords, much like the other AI dictation apps. The big twist is that everything is run completely locally on device. Users download the app with a packaged small language model and then can operate everything without an Internet connection. Now, although Edge Eloquent probably isn't all that exciting, it does demonstrate a few interesting things about Google's Gemma 4 family. First, unlike previous small models, this doesn't seem to be a research project. It is a commercially viable model for certain use cases, and Google seems intent on building products around it. In addition, this could be the kind of local model Apple has been looking for to drive Siri, which is expected to use Gemini family models when it relaunches in the summer. Gemma 4 doesn't seem to be quite there yet for driving a full offline version of Siri, but you can see where Google is heading. Now, aside from commercial applications, Gemma 4 has seen a hugely positive response from the developer community. Gemma 4 was downloaded 2 million times in its first week. In contrast, Gemma 3 received 6.7 million downloads over the past year, while Alibaba's Qwen 3.5 achieved 27 million downloads since its release in mid-February. Something that went a little under the radar is that the entire family of models, right down to the 2B version, has strong agentic performance that could push the frontier for mobile agents. Philipp Schmid, a developer experience liaison at DeepMind, showed the model can query Wikipedia using agent skills while running on an iPhone. Obviously still very early innings, but it feels like Gemma 4 could lead to a breakout moment for local models, especially once the OpenClaw folks start tinkering with it. 
Over in Meta-land, the company is preparing to release their new model and plans to offer an open-source version in the future. Axios published new information about the model release on Monday, citing sources familiar with the views of Meta AI chief Alexandr Wang. They wrote that Meta wants to keep some part of the model proprietary during the initial release to ensure it doesn't introduce new levels of safety risk. And this reporting contradicts prior speculation that Meta would abandon their commitment to open-source models. As part of this new release, Axios added that an open-source model aligns with how Wang sees Meta's position in the AI race. Wang reportedly views Meta as a democratizing force that can ensure there is a US-trained option for open-source developers. Sources suggest that Wang believes that OpenAI and Anthropic are increasingly focused on developing AI systems for governments and the enterprise, while Meta is focused on the consumer, writes Axios. Meta wants its models distributed as widely and as broadly as possible around the world. Now, this is the first news we've had on the forthcoming model, codenamed Avocado, in several weeks. In early March, the New York Times reported that Avocado had been delayed and couldn't match Gemini 3 on benchmarks. Talk of safety concerns could imply that model performance has improved with another month of post-training. Still, sources say that Meta knows that its models won't be competitive across the board, but believes they will have certain strengths that drive consumer appeal. Meanwhile, while Meta's own model is getting close to release, Meta engineers are still using Claude. A whole lot of Claude. The Information reports that Meta employees have set up an internal leaderboard to see who is churning through the most tokens. The leaderboard is dubbed Clawdonomics and aggregates the top 250 token users among Meta's 85,000 employees. 
Top-ranking token users can earn the rank of Session Immortal or Token Legend. The Information argues this is a new type of conspicuous consumption in Silicon Valley known as token maxing. The thought is that token consumption is a good proxy for AI-enhanced productivity, so engineers want to climb the leaderboard. Now, the flaw in this thinking is immediately obvious, with the Information also reporting that some at Meta are running large numbers of agents in parallel with the goal to rip through as many tokens as possible, not necessarily to be as productive as possible. And while token maxing could be the new version of judging engineers by counting how many lines of code they write, the culture is being driven from the top. Last month, Nvidia CEO Jensen Huang said he would be deeply alarmed if an engineer on a $500,000 salary wasn't using $250,000 worth of tokens annually. That's also the view at Meta, with CTO Andrew Bosworth boasting in February that one of his top engineers is spending the equivalent of his salary on tokens to generate a 10x efficiency boost. Bosworth commented: this is easy money. Keep doing it, no limit. Now, there is a lot of chatter on this, with many feeling like Joe Weisenthal, who writes: how does measuring productivity by total token consumption make any sense at all? He compares it to Chairman Mao requiring peasants to smelt steel in their backyards during the Great Leap Forward, which of course led to tons of useless, low-grade steel. Joe continues: real backyard steel furnaces vibe, in my opinion. Interestingly, Metacritic Capital on Twitter makes a different China comparison to argue why this actually makes sense. He wrote: very early in its development, the Party would set GDP growth goals for each province in China. As you know, you can do lots of silly things to boost GDP growth, and Party officials certainly did, but the opportunities for China's development were so vast that simply putting a GDP growth target was enough. 
It took decades for Goodhart's Law to catch up with them. Same goes for tokens. Meta is spending 90 million tokens per developer per day. At Opus 4.6 rates, Meta would be spending in the zip code of 4 to 5 billion dollars per year. I think all but five corporations on earth can't spend that much on AI. It's a massive feat of engineering from wall to wall to be capable of spending that many tokens. TLDR: the cost of token maxing is small because token maxing is extremely hard. You can safely expect that over the next 18 months, 98% of corporations would be better off token maxing. Interesting thoughts there, but for now, that is going to do it for today's headlines. Next up, the main episode. All right, folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise: how work gets done, how teams collaborate, how decisions move. Not as a tech initiative, but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. You've tried in-IDE copilots. They're fast, but they only see local silos of your code. Leverage these tools across a large enterprise codebase and they quickly become less effective. The fundamental constraint? Context. Blitzy solves this with infinite code context, understanding your codebase down to the line-level dependency across millions of lines of code. While copilots help developers write code faster, Blitzy orchestrates thousands of agents that reason across your full codebase. 
Allow Blitzy to do the heavy lifting, delivering over 80% of every sprint autonomously with rigorously validated code. Blitzy provides a granular list of the remaining work for humans to complete with their copilots. Tackle feature additions, large-scale refactors, legacy modernization, and greenfield initiatives, all 5x faster. See the Blitzy difference at blitzy.com. That's B-L-I-T-Z-Y dot com. One of the trends that I follow most closely when it comes to AI is around voice. Today's episode is brought to you by AssemblyAI, the best way to build voice AI apps. The company has been moving with extreme velocity lately, shipping major improvements to their speech-to-text models that go way beyond just better transcription. Specifically, they are getting to an accuracy level that can reliably capture the type of things that used to break every other speech-to-text model. Think credit card numbers read aloud, email addresses spelled out, complex medical terminology, financial figures. All of these things, in other words, that it really matters to get right. So for anyone who's building in fintech, healthcare, sales intelligence, or customer support, getting those things wrong isn't just annoying, it's a liability. Their speech understanding models are also really good at things like identifying speakers, surfacing key moments, and uncovering insights from voice data. And all of that happens in a single API call. The proof is in the pudding, and AssemblyAI powers some of the top voice AI products in the market today, like Granola, Dovetail, and Ashby. Getting started is free. Head to assemblyai.com/brief to test it live and get $50 in free credits. No contract, no upfront commitments. That's assemblyai.com/brief. If you're using AI to code, ask yourself: are you building software, or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt. Enter Zenflow. 
Zenflow takes you from vibe coding to AI-first engineering. It's the first AI orchestration layer that brings discipline to the chaos. It transforms freeform prompting into spec-driven workflows and multi-agent verification, where agents actually cross-check each other to prevent drift. You can even command a fleet of parallel agents to implement features and fix bugs simultaneously. We've seen teams accelerate delivery 2x to 10x. Stop gambling with prompts. Start orchestrating your AI. Turn raw speed into reliable, production-grade output with Zenflow, free. Welcome back to the AI Daily Brief. Today we are looking at a policy document from OpenAI, and it comes at the convergence of two moments in and around the industry. The first moment is what we were discussing on yesterday's show: this growing indication from the labs that the next jump, the one that we are on the verge of with the next set of models, represents a really big one. Remember, at the end of March we got the leak about Anthropic's Mythos model, which it said represented a step change, their words, in capabilities. In fact, what we got with the leak was a blog post saying that the model was so powerful that they were going to slow-roll it a little bit, rather than a full announcement and a release of the model as we've gotten in the past. On the OpenAI side, the company has been heavily teasing their new Spud model, actually doing more to hype it up than to tamp down expectations, reversing the trend that they've had all the way since back when GPT-5 underperformed. So on the one side, we have this moment of precipice where the next set of models could represent a very big jump. Then on the other side, we have the continued and frankly increasing reality of dreary American sentiment when it comes to AI. A new poll from Quinnipiac suggests that sentiment is going from bad to worse. 55% of Americans now believe that AI will do more harm than good in their day-to-day lives. 
That's up 11 percentage points from a year ago and tips into the majority for the first time. 70% believe that AI will reduce job opportunities, which is up 14 percentage points. A mere 7% of respondents believe that AI will increase job opportunities. In other words, Americans believe by a 10-to-1 ratio that AI will reduce rather than increase jobs. 30% said that they were either very or somewhat concerned about AI making their job obsolete. And yet, this is all despite adoption rocketing forward. The majority of people are now using AI to research topics they're curious about, rising from 37% to 51% over the past year. Analyzing data and creating images each increased significantly as use cases as well, both rising from around 16% to around 25%. The number of Americans who said they had never used AI was down from 33% last year to 27% this year. Tamilla Triantoro, an associate professor at the Quinnipiac School of Business, noted: younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions. This is also not just one poll. We're seeing AI being blamed for increasing electricity prices, opposition to data centers growing, and, in one dramatic example of just how negative the perception around AI is, it has worse PR right now than the extremely controversial ICE. Into that environment, OpenAI released the new document, Industrial Policy for the Intelligence Age. The document is framed not as some complete policy statement or comprehensive anything, but instead as a way to try to nudge the conversation around important policy topics forward. They divide their policy discussions into two areas: first, building an open economy, and second, building a resilient society. And I think that the document needs to be judged in two different ways. 
One is from a PR lens and what it does for OpenAI and the AI industry in general when it comes to public perception. And second, in terms of what one might think about the policies themselves. Now, to be fair to OpenAI on the first way of judging this, as a PR document, it obviously isn't intended to be that primarily. It feels like it's much more designed for perhaps a Washington insider audience, and if it were a document for general public consumption, maybe it would look a little bit different. At the same time, the reason I'm not interested in giving OpenAI a pass on that front is that at this point, with where they sit in the industry, and especially when they pair this with big premier interviews with the founders of media companies like the one Sam Altman did with Axios, they clearly recognize that everything they say is, whether they would like it to be or not, a public relations statement, as well as whatever else it is supposed to be. To be completely transparent, I very, very, very much dislike this document. It exists in this strange uncanny valley where it is so technocratic, down to the narcolepsy-inducing name Industrial Policy for the Intelligence Age, that it is inevitably going to fail in any sort of PR goal, but at the same time not robust enough from a policy perspective that it feels likely to do a particularly good job at advancing any of these policies either. It is a document, in other words, without a clear home or purpose, or one where its home and purpose are so confused that it makes it, at least in this current form, not all that useful to anyone. Now, we are going to go through the policy proposals, because there are some interesting and important discussions that are started there, and I want to take this idea of being a conversation starter in good faith. But I do have to say a couple more things about the PR impact right now. 
I don't know that I've ever seen an industry that is so fundamentally unwilling to spend any time at all articulating why it deserves to exist as the AI industry. Every single document like this, every single statement that comes out of Dario or Sam's mouths, is so focused on affirming the negative and validating people's concerns that literally no time is spent actually explaining how this is going to make the world better. Every discussion is this incredibly quick pass-through where a bunch of theoretical benefits in the future are listed in short order, without actually articulating how we get there or what the impact of those changes will be on people's lives along the way. On the way to getting to what seems to be the core point, which again is validating all the bad things, we get these hand-wavy statements like this one: we strongly believe that AI's benefits will far outweigh its challenges. Only to have the next three lines be all about how clear-eyed about the risks they are. This does not come off as being reasonable. It does not come off as being sober or thoughtful. What it does is make people ask: why the hell are we doing this in the first place? You know how when you see an ad for some new miracle drug on TV, the last 10 or 15 seconds of the 60-second spot is always them disclosing all the risks and side effects? The way the AI industry communicates, it's as if they flipped that ratio around and spent three-quarters of the ad talking about all the side effects and negatives and only a tiny little bit on why the thing should actually exist in the first place. And what all of these risk descriptions, these sober, thoughtful risk descriptions, fail to engage with is the thing that seems incredibly obvious to most average people, which is that AI doesn't have some mandate from heaven to exist. 
When OpenAI or Anthropic or anyone else in the AI industry talks about mitigating these serious risks, many of which sound absolutely horrible, the response of many normal people is to say, well, then why are we doing this in the first place? When those companies' answer is, well, it's happening one way or another, and they don't respond when people say, wait, but why?, people are left to assume that the answer is: because it's going to make some people rich. That is the default understanding in the absence of a better answer. And of course, that default understanding just makes people angrier. If the answer is, because China is going to do it if we don't, maybe for some that's a little bit more understandable, but it remains incredibly abstract. The only possible satisfying and only possible viable answer must be that the benefits of AI are higher than the costs. And just saying, in this hand-wavy way, we think the benefits are higher than the costs, no longer cuts it. It never cut it, but it really doesn't anymore, right now, with where things are. Every single time any leader or senior official from any major lab speaks, they are either contributing to the strong sentiment that we see in all of these polls, that AI is likely to do more harm than good, or they are doing work to reverse that sentiment. I think that we in the AI industry should be judging every communication on whether it reinforces that negative sentiment or whether it actually combats it. So, as I said, giving credit to the people who wrote this, I do not believe they were thinking about it first and foremost as a PR document. But unfortunately, in the world that we live in, and in the world that OpenAI and all these companies operate in, it is that, whether they want it to be or not. Now, as you might imagine, I am far from the only person who has some negative feelings on that side. 
Daniel Jeffries writes: please, please, please, I'm on my knees begging every AI exec on the planet, just stop with this stuff. Just give us models. Let the collective distributed intelligence of people figure things out in real time like we always do. Let people adapt. It's what we do. We are not giving birth to magic super miracle machines that suddenly invalidate every single pattern of the entirety of human history and technological development. We're really not. AI is amazing, it's wonderful, but it's not magic. Can we please just let AI be cool and useful in pragmatic and realistic ways instead of all this crazy talk? Meanwhile, others point out that there is something discordant about where AI actually is and all of this talk of world-changing superintelligence. And by the way, this is not just the Gary Marcuses of the world who are desperate to convince you that AI isn't all that powerful. These are people who are totally bought in. Cheyenne Zhao, whose literal handle is GenAI Is Real, posted the companion Altman interview and said the replies are more insightful than the interview. Someone pointing out that GPT 5.4 has been spinning in circles on a webhook for four hours while Sam talks about superintelligence captures everything wrong with how AI is being discussed right now. The models are genuinely impressive and improving fast, but calling this superintelligence devalues the word and makes it harder to have serious policy conversations when we actually need them. We're in the extremely capable tool era, not the new social contract era. Buccocapital Bloke put it a little bit more bluntly last week, speaking in general, not about this specific document. He writes: you must understand that every tech executive has AI psychosis. They're puking out Claude-generated markdown files full of hallucinations, asking if this means they can fire 500 people. Aaron Levie from Box actually responded and said: the worst thing you can do is just dabble with AI a little bit. 
That's the spot where you see its capability but overgeneralize on the use cases and how easy the automation is. You almost have to use it too much, develop psychosis, then get to the other side and realize how much care and feeding and management of the agentic workflows is required. On the other end, you realize you actually need to probably hire more or new people to then do all the new things agents can do. But let's talk about some of the policy proposals. I'm going to spend a lot more time on section one, the open economy, than I am on the second part, resilient society. The first thing they discuss is the importance of including worker perspectives in the AI transition. They write: give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights. This is something that I do think is extremely important, but it also reveals one of the biggest challenges with this document overall, which is the thing identified by Will Manidis in his response essay, No New Deal for OpenAI: that basically this document is absolutely chock-full of pretty sentiments that, at least in the way they are described right now, seem to wholly ignore the political reality and the political history that they operate within. We've discussed this worker-management thing numerous times in the past on this show, and what is happening and will happen is a wholesale shift in the relationship between employees and management in lots of different ways. On the one hand, managers have much more power because they feel like they can do things with fewer people. On the other hand, the end worker who is actually using the AI kind of negates the need for a lot of layers of middle management. But then there are also issues like the fact that in many cases, workers are training their own replacements. 
The point being that what's happening here, what will happen, and what needs to happen is not some policy that can be enacted. It's going to be a totally new labor movement. OpenAI doesn't use the word union here, which is one of Will's biggest beefs, with Will pointing out that the New Deal was not some benevolent meeting between the capital class and the labor class facilitated by FDR, but the byproduct of decades of political violence and a labor movement that was willing to fight and literally die for change, not to mention leadership that had an actual mandate, the likes of which no one in American politics has had for a very long time. Still, to the extent that we are talking about conversation starters, yes, we do need to have the conversation about this shift in the relationship between employees and management. Next up, we have AI-first entrepreneurs. Now, the critique of this one is that telling a displaced customer service agent to go start some small business that competes with their former employers feels, at best, tone deaf. But of course, that's not the actual point of pro-entrepreneur policy. In other words, the point is not that every worker who is displaced by AI is going to all of a sudden go be an entrepreneur now. It's to ask what sort of policy interventions and support structures could increase the successful small business entrepreneurship rate by 50% or even 100% from where it is today. There is not going to be one single policy silver bullet for the amount of change that's going to happen. Pro-entrepreneurial policy is one part of a much larger toolkit, and in that, I'm completely supportive. Now, I'm not totally sure what the right policy interventions are, or what the right type of entrepreneurial support is, but I do think that this is going to be a part of the solution, because in a vastly adapting future, for many, the only secure future will be the one they secure for themselves. Next up, we have the right to AI. 
And this is something that OpenAI has talked about before. They write: we need to treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy or to make sure that electricity and the Internet reach remote parts of the globe. And what I would say here, which, to be fair, they give at least mention to, is that access to AI is going to be meaningless without the agency to actually use it. What I mean by that is that we can't just give everyone a free ChatGPT account and hope it works. The amount that companies are spending on AI infrastructure right now is, based on studies that we've found, more than 12 times bigger than the amount that they're spending investing in people's capability to use these tools. And that's within the companies who have a direct financial incentive to have their people use these tools well. We need a mass-scale infrastructure mobilization to help people figure out how to use the new tools of the new economy. Call it whatever you want, a Marshall Plan for education. We need to be thinking in those big, massive terms, because without it, any right to AI is just a pretty notion on a piece of paper. Next up, OpenAI calls on us to modernize the tax base. And this is actually an area where I think we are inevitably going to see some of the biggest shifts. And frankly, I think that we are going to see some breakdown of traditional conservative and liberal lines when it comes to tax policy. The logic is that if the balance of the economy shifts from labor to capital, there just literally has to be some commensurate change when it comes to taxation. Now, doing that well is going to be massively challenging, but I think based on the trajectory of both the economy and the larger political conversation, some version of this is inevitable.
Maybe it's policies that have a lot of support in liberal circles already, like higher taxes on capital gains; maybe it's new types of taxes on automation. But basically, I think something has to give here, and I think you will likely find some very strange bedfellows when it comes to figuring out how to do it well. Now, luckily, when it comes to an inside-the-AI-industry perspective, this sort of shift in how we think about taxation likely has the benefit of being extremely good politics. The next idea from OpenAI, which is getting a lot of coverage, is a public wealth fund. They write: while tax reforms help ensure governments can continue to fund essential programs, a public wealth fund is designed to ensure that people directly share in the upside of that growth. Policymakers and AI companies should work together to determine how to best seed the fund, which could invest in diversified long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the fund can be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth regardless of their starting wealth or access to capital. I seem to be a little bit more skeptical of the ultimate importance of this than others out there. I certainly don't think it's bad. I think it would be good to have people rooting for the success of these companies. But I think I have a little bit more skepticism than many others around things where everyone gets a little share of them. And again, that's not because they're bad, but because I think maybe the central challenge of American politics is that people don't want the average of what people have. They want, and feel like they deserve, the exceptional.
We live in a world where it feels like we are constantly confronted with people who have more than us, whether that's in Instagram posts, whether they're real or not, or having to walk through first class to get to our section of the plane. Now, it's not necessarily AI's job to deal with that. In fact, it may not be a policy remediation at all. But my concern about a public wealth fund is that I think it could be a very window-dressing-y, exciting-to-write-about type of thing that doesn't really move the needle when it comes to core sentiment. On the other end of the spectrum, I'm much more enthusiastic about things like OpenAI's discussion of accelerating grid expansion, except I would take it farther and not just think about how to accelerate grid expansion in ways that don't cost individual people money, but actually have the benefits accrue to those people first. Basically, rather than these pretty pledges to ensure that the data center buildout doesn't increase people's electricity prices, we should be actively making their lives cheaper, not just keeping them the same. I think that as an incredible amount of wealth accrues to the AI companies, we are going to need ways for that to flow back to the rest of the world. Private financing of public utilities may end up being part of that equation. Another area that's seeing lots of discussion is the incredibly poorly named and framed efficiency dividends, by which OpenAI is basically talking about reinvesting the realized value of AI back into regular people's lives. Now, again, to be fair to them, they are not planting their flag heavily in one or another policy, but they're coming back to ideas which have been floating around for a while now, like the 32-hour or four-day work week. This is something that Bernie was putting in his AI policy back last summer, before he decided to go full frontal assault on the data centers.
I tend to be a little bit more skeptical of things like the 32-hour work week, because I think people view them as a panacea when really a lot of people are just going to work more anyway. But there are plenty of other ideas that have the same principle of reinvesting AI's realized value back into people that I think could be a really important thing. And this is both on the individual level, i.e. things like retirement matches or covering a larger share of healthcare costs, but it also could be on that more global, societal level. Later on in the document they talk about portable benefits, i.e. things like healthcare, retirement savings, and skills training that aren't solely connected to a single private employer. And the efficiency dividends could go to pay for that. They also talk about pathways into human-centered work. And to the extent that there need to be things like free training programs and better support infrastructure around some of these industries that are historically strapped for resources, like, for example, elder care, again, those efficiency dividends could go to pay for that. To not dance around it, there is going to be some redistribution of AI-generated wealth. And I think some of these types of programs could be more politically palatable than just handing people money directly. One idea that is very technocratic but also interesting, and I think worthy of a lot more conversation, is some of the ideas of adaptive safety nets that OpenAI is proposing. One of the things that they're suggesting is investing in much better, more direct measurement of how AI is impacting things like work, wages, and job quality, and then using those things to inform automated and dynamic social safety net programs.
And honestly, holding aside the AI context, what they're basically saying is that the tools we have at our disposal allow us to potentially make much more targeted, narrow, and specific interventions, rather than having these big, cumbersome programs which can buckle under their own weight over time. So again, as you can see, although I have a lot of specific thoughts around each of these areas, I do think there's a lot of good fodder for discussion here. I'm just not sure that this type of document is the right way to actually start those discussions. And I think in the context into which it is arriving, it might actually in some ways be counterproductive. The biggest applied critique that I've seen is that one of the things that is noticeably absent from the document is any sort of even hint of a commitment from OpenAI to programs or initiatives or policies that would cost them anything. As Will Manidis writes: The document proposes that policymakers might consider higher taxes on capital. OpenAI could commit to paying them. The document proposes a public wealth fund. OpenAI could seed it. The document proposes that data centers pay their own energy costs. OpenAI could accept voluntary rate separation today in every jurisdiction where it operates. The document proposes that frontier AI companies adopt public benefit governance. OpenAI could reinstate the profit caps it dismantled six months ago. None of these things are in the document. The only things in the document are a workshop, fellowships paid in the company's own product, and an email address that routes to no one. Alexander McCoy puts this sentiment a little more cynically, writing: Good ideas, Sam. I know some members of Congress who can get right to work on writing the legislation. Some quick questions. 1. How much equity in OpenAI should we plan on you contributing? Will it be your own equity, a dilution of existing shares?
Or is your idea that the federal government will buy shares using taxpayer dollars once you IPO? 2. How many tens of millions of dollars of your own money are you pledging to commit to pass these policies you say are necessary? How are you going to counter the hundred million dollars of Leading the Future AI political spending which opposes these policies, which is funded by your own investors and fellow executives? 3. How are you directing OpenAI chief of policy Chris Lehane to redirect OpenAI's massive lobbyist and public affairs resources to support this agenda, which they currently actively oppose? Now, this is coming from someone who in their Twitter bio says that they are fighting the power of big artificial intelligence corporations, so you need to view it through that lens. But I think that that would be a more prominent and common sentiment than you might think. Effectively, where I agree with OpenAI wholeheartedly is that we need to have these conversations. But what seems to go unrecognized is that in the context of both the changes that they say are coming and the grave state of public opinion on AI in America, 13-page policy PDFs with no actual commitment or direction ain't it. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace.
