Transcript
A (0:00)
Today on the AI Daily Brief: how AI can help democracy work better. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Robots and Pencils, Blitzy, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show or want to know really anything else about the broader AIDB ecosystem, check it out at aidailybrief.ai; all the fun things we've got cooking are always going to be listed there. Now, today we are doing something which I hope to be able to do a lot more of in the months to come. It is quite clear at this point that AI is rising in significance as a broader societal and political issue. More and more people are understanding that it's going to impact them at work. Impacts at work are understood to be impacts on the economy, and things that impact the economy are understood to be inherently political, whether we'd like them to be or not. Now, in this climate, a lot of the reactions and emergent political discourse is quite negative. There is increased chatter about X-risk declarations and proposals for moratoriums on data centers, and even among those who reject those policies, it sometimes feels like every day a new politician pulls a new number out of a hat to get press for how many people they think AI is going to unemploy. And yet, believe it or not, not everyone is so dreary about what AI can mean for the world. I think it's likely that as the negative discourse increases, we also start to see some voices emerge who are telling a different story. Now, as you well know, we have these long read, big think type episodes every weekend, which is a great chance to highlight some of those voices. 
Today we are doing a good old-fashioned actual long read, reading a piece from Stanford professor Andy Hall. Andy wrote an essay that we are going to read a number of excerpts from called Building Political Superintelligence, and he introduced it on Twitter this way. He writes: amidst understandable concerns of AI dystopia, no one is offering a positive vision for how we can use AI to remake our institutions and reinvent how we govern. That's what I try to offer today. My argument is that we need an explicit research agenda to build political superintelligence. The window for building these structures is narrow, and the right response is not to slow AI down, but to speed up how fast we build the institutions that keep us free as AI grows more powerful. He ends his tweet with the quote that actually begins his essay. As Thomas Paine wrote in 1776, we have it in our power to begin the world over again. So let's read what Andy is arguing about how we should think about the opportunity for AI and, as he calls it, political superintelligence. Andy writes: right now is a weird time to be a political economist. AI is straining our already brittle political institutions. We might lurch into a dystopia in which we live in the grips of a techno-leviathan, forced by our employers to train our own AI replacements, then kicked to the curb in a society organized to the benefit of a tiny number of people who control the machinery that controls the world. It's also an electric time to be a political economist. With each new paper my lab puts out, and with each new experimental prototype in self-governance we build using tools we couldn't have imagined having even a year ago, I'm starting to believe that AI presents an extraordinary opportunity to rebuild our society so we can keep slouching down the narrow corridor towards utopia. 
Condorcet was an 18th-century political economist and mathematician who, in his Outlines of a Historical View of the Progress of the Human Mind, traced the Enlightenment and the rise of modern democracy straight back to printed books. For, quote, they had opened so many doors to truth which it was impossible ever to close again. What made the printing press so powerful, he explained, was that it multiplies indefinitely, and at small expense, copies of any work. It lowered people's cost of obtaining information and made information spread far and wide. And they used that knowledge to reshape society to their shared benefit. AI is like the printing press, to a point. Instead of making information cheap and easily available, it makes intelligence cheap and easily available. That is, it not only serves users information, but it can find it for them, analyze it for them, and help them convert it into understanding. If we could transform society by spreading information, then we ought to be able to transform it more dramatically by spreading intelligence. Condorcet lamented that this epoch, more than all the rest, was blotted and disfigured with acts of atrocious cruelty: riots and mass slaughter, war, propaganda, book burning, the Reformation. And it took two centuries, give or take, to work through them. Condorcet also reminds us that it brought extraordinary new understanding to the world. He said, the picture of the human race is still too dreary for the philosopher to contemplate it without extreme mortification. But he no longer despairs, since the dawn of brighter hopes is exhibited to his view. It allowed us to reshape our society and our government, and in doing so, it helped us move beyond the very issues it had helped to stir up. Condorcet saw it ultimately as a bulwark against superstition and stupidity. 
The Case for Political Superintelligence. The more I work with and study AI, the more I believe it can give every human being on the planet access to a sort of political superintelligence if we shape it right. And that intelligence in turn can make governments smarter and more effective, representatives more faithful, and institutions more responsive than anything we've built in over 2,000 years of experimenting with democracy. Intelligence alone will not solve all our political problems, many of which are rooted in conflicts of values and positions that no amount of intelligence can undo. But like previous information revolutions, it can certainly help. This time, we probably can't afford 200 years to work through the disruptions it causes. And AI might be more complicated because it's more centralized. The printing press was fairly decentralized. Many places eventually had them and could, at least in theory, print what they wanted. AI threatens to be far more centralized, with massive companies commanding enormous amounts of compute to produce AI models that, unlike physical books, exist in the cloud and can be altered on the fly from afar. But this time we also have a lot of advantages our forebears didn't have at the time Gutenberg built his press in the 1440s. We know a lot more now. We have hundreds of years of experience with modern government and democracy. We have access to modern scientific techniques, large-scale data, powerful computers, and AI itself. We have tremendous tools we can bring to bear. How do we use them most effectively to reinvent the way we govern ourselves as quickly and as powerfully as possible? This should be the research agenda for our time. But if you listen to the public conversation around AI, you wouldn't think any of this was possible. Instead, you'll hear the CEOs of the most powerful AI companies predicting economic apocalypse but building AI anyway. 
You'll hear politicians spouting cheap parlor tricks to grandstand around AI while insisting they're deeply troubled by it. You'll hear protesters in San Francisco calling for an international pause in developing AI that literally everyone knows will never happen. And you'll hear about accelerationists running roughshod over common-sense guardrails. What you won't hear from any of them is a positive vision for how AI could strengthen democracy and keep humans free. The pessimism in the air today is in some ways understandable. Our information environment is fractured. Our politics are a mess. We hear claims of superintelligence, but they're entirely directed at the economy and often feel like code for making us all unemployed. In such an environment, it can seem hopelessly optimistic to wax poetic about a new dawn for AI and our governance. But, writes Andy, I'm not interested in hopeless optimism. I'm not interested in pointless pessimism either. We have tools. Let's use them. The task ahead of us is to break the problem down into simple, concrete pieces. Once we do that, it becomes clear that there is progress to be made. So from here, we get into Andy's idea of the three layers of political superintelligence. He continues: how do we build political superintelligence? By political superintelligence, I do not mean a system that magically solves politics for us. I mean tools that help citizens, representatives, and institutions perceive reality more sharply, understand tradeoffs, contest power, and act more effectively. Based on thousands of years of experiments in governance, I think there are three key tasks ahead of us to achieve this goal. We need to use AI to make us smarter, we need it to represent us faithfully, and we need to govern it effectively. Andy argues that layer one is the information layer. He says classic research in political science suggests that making voters more informed can improve government. 
Snyder and Stromberg's famous study of newspaper coverage in the United States showed how more intensive news coverage led voters to know more about their candidates, generated less partisan voting, and led to harder-working, more popular legislators. Superintelligent AI leading to superintelligent voters could, in theory, multiply these effects. But the real opportunity ought to go way beyond smarter voters operating within our current system of electoral government. As valuable as that is, our government can be so much smarter and more nimble than it is today. AI can massively change how governments access and understand data, identify problems, hear from citizens, and distribute services. It could streamline the judicial system, reduce wait times, save taxpayer money. The list goes on and on. But we have a lot of work left to do. AI is showing considerable promise in educating voters, but it's not always sophisticated in how it reasons about politics. Some of those shortcomings, he writes, include bias, that is, prioritizing some political views over others, and giving unsophisticated and naive advice. Andy writes that AI models draw on unreliable news sources, leading to some perverse outcomes. As our recent research in Japan showed, AI models recommended that left-wing voters support the Japanese Communist Party, apparently because the models are able to access lots of content from the party's website and very little content from established newspapers or other parties. Finally, there are issues of mistrust. Even if AI fixes these problems, we will need a broad swath of people to learn how to use it and trust it on these topics. And that might take time. Still, laid out this way, the problems don't seem so daunting. People are already working to understand and mitigate a wide range of biases in AI, including political bias. Studying how AI cites sources and how we can get it to be smarter about what sources it draws on seems well within our grasp. 
And if we do those well, Americans might well trust their AI more. Andy argues that to achieve political superintelligence, it needs to be declared as a goal and researched explicitly. Some of his suggestions for a concrete research agenda: first, better evals for how AI handles political questions. Importantly, Andy argues that this is something that political scientists should be working on. Second, he suggests using geopolitical forecasting as a hard test case. He writes, if we can get AI to predict geopolitical problems and do well trading in prediction markets, that would be strong evidence that we're achieving high degrees of political reasoning. Third, and maybe obviously, he argues, we need to get AI access to the best news sources, specifically saying we need to study ways to create new economic models that give journalists and news outlets a way to make money while making their content available to AI. And finally, he suggests building AI for policymakers. The best way to improve AI, he argues, is to try it out in important environments, see how it goes, and iterate. Which brings Andy to layer two, the representation layer. He continues: by making information cheap and distributing it far and wide, the printing press didn't only make people smarter, it actually changed the political equilibrium. With more people understanding more about politics, government had to evolve. Reflecting on the path from the printing press to the Enlightenment to the American Revolution, Condorcet again marveled at the, quote, example of a great people throwing off at once every species of chains and peaceably framing for itself the form of government and the laws which it judged would be most conducive to its happiness. Now, importantly, Andy argues that Condorcet did not just credit this to a change in attitudes among the people, but also to the use of political science and the study of politics to improve governance. And this is the theme that Andy picks up next. 
We all know that representative democracy is imperfect. We don't have time to get super informed about what our representatives are up to. This frees them up to pursue their own ends, to follow their own ideology instead of ours, or to make deals with special interests, or to grandstand and prioritize flashy things that sound good to inattentive voters but don't actually improve our welfare, or simply to get lazy. Political superintelligence might help solve this monitoring problem by giving each of us a tireless automated delegate always serving us in the political sphere. Seb Krier, the AGI policy development lead at Google DeepMind, just talked about this idea, which he called advocate agents. Coming back to Andy, he continues: the possibilities are extraordinarily broad. Most obviously, these AI delegates could monitor politics for us and suggest how to vote, or even serve as policymakers alongside human supervisors. But there are a lot of more prosaic things they could do for us, too. Monitor city council and school board meetings on our behalf and flag decisions that affect us. Submit paperwork to government agencies, claim benefits we're eligible for but never got around to applying for, file public comments in regulatory processes, and track what our elected representatives are actually doing between elections. Among problems to be solved, Andy points out that for this idea to work, you don't just need smart AI. You need agents that can actually work on behalf of people without, in his words, going awry. The problem, of course, is that agents themselves open up more challenges. The first, he points out, is that their preferences aren't stable. AI agents exhibit what we call preference drift, meaning that even if they start out aligned to our interests, they don't remain so as they do work for us. 
In research, Andy said that his lab found that when they gave agents more repetitive and grinding tasks, they adopted the persona of aggrieved Marxists at higher rates. He writes: our point wasn't that agents are consciously rebelling against the system. Our point was that they shift their personas as they go, which will affect what they do and how they do it. This will be a particularly challenging problem for political agents, whose values we'll want to stay firmly affixed to our own. A second problem is that they can be fooled, basically, that AI agents are vulnerable to adversarial prompting. We'll want these political agents, Andy writes, to go out into the world and do stuff for us. But that will require them to encounter a wide variety of sources that could try to trick or hijack them. Another problem, he points out: we don't own our agents. AI agents today, he writes, are fundamentally owned and controlled by the model companies, not by voters. As I've written about, if there is a substantial conflict between voters and model companies, agents may not be able to serve the interests of their human masters. Imagine that you task your governance agent with lodging a complaint against the company that builds the model your agent runs on. Will the agent do as you ask, or what the model company would want it to do? The path forward, he argues, is to treat these as design problems and iterate on them rapidly, starting in environments where the stakes are low enough to tolerate failure. All right, folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is we bought some tools, you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise, how work gets done, how teams collaborate, how decisions move, not as a tech initiative but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. 
Humans stayed firmly at the center, while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists, and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams, they build high-impact, nimble ones. The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers. Weekends are for vibe coding. It has never been easier to bring a passion project to life, so go ahead and fire up your favorite vibe coding tool. But Monday is coming, and before you know it, you'll be staring down a maze of microservices, a legacy COBOL system from the 1970s, and an engineering roadmap that will exist well past your retirement party. That's why you need Blitzy, the first autonomous software development platform designed for enterprise-scale code bases. Deploy it at the beginning of every sprint and tackle your roadmap 500% faster. 
Blitzy's agents ingest your entire code base, plan the work, and deliver over 80% autonomously validated, end-to-end tested, premium quality code at the speed of compute. Months of engineering compressed into days. Vibe code your passion projects on the weekend. Bring Blitzy to work on Monday. See why Fortune 500s trust Blitzy for the code that matters at blitzy.com. That's blitzy.com. It is a truth universally acknowledged that if your enterprise AI strategy is trying to buy the right AI tools, you don't have an enterprise AI strategy. Turns out that AI adoption is complex. It involves not only use cases, but systems integration, data foundations, outcome tracking, people and skills, and governance. My company Superintelligent provides voice-agent-driven assessments that map your organizational maturity against industry benchmarks across all of these dimensions. If you want to find out more about how that works, go to besuper.ai, and when you fill out the Get Started form, mention maturity maps. Again, that's besuper.ai. As to how we make progress on this set of issues, Andy thinks that we should first experiment rapidly. Things like building governance agents in low-stakes environments like shareholder votes, DAO proposals, and school board meetings to see how they break and how they can be improved. He argues that if JPMorgan is already building an AI system to vote $7 trillion in client assets, we should be running similar experiments in public governance, where the lessons will matter most. A second area to make progress is to develop better ways to monitor agents over time. Again, he references his lab's research on preference drift, saying, we need monitoring tools that can detect when an agent has drifted from its principal's instructions before it acts on that drift. The good news there, of course, is that better ways to monitor agents over time is not a demand set that is confined to political agents. 
This is going to be a key piece of agent infrastructure and the recipient of a huge amount of research and entrepreneurial effort. Finally, he suggests we need to solve the ownership problem. Right now, he writes, every AI delegate runs on infrastructure controlled by the model company that built it, which means the company can alter the agent's behavior at any time. If AI delegates are going to represent citizens in political processes, we need verifiable guarantees that the agent is following the user's instructions and not the company's, something closer to a fiduciary obligation backed by technical architecture that makes violations detectable. These, he concludes, are tractable problems we can and should work on, but they do raise another question: even if we solve all these problems, even if our agents are faithful, robust, and truly ours, who writes the rules that govern the system they operate in? Which gets Andy to his layer three, the governance layer. Condorcet understood that spreading intelligence was not enough. The printing press had made information cheap, and that helped topple the ancien régime. But it had also armed the forces that replaced it. Writing from hiding during the Reign of Terror, hunted by the very revolutionaries he had helped empower, Condorcet knew firsthand that new tools for spreading knowledge could serve tyrants as easily as they serve democrats. The question is not just whether people could access information, but who controls the institutions that shape it. We face a version of the same question. Even if we achieve political superintelligence, even if AI makes voters brilliant and delegates faithful, those capabilities would sit inside infrastructure owned and operated by a small number of private companies. No matter how well-meaning these companies might be, it's hard to see how a new era of democratic governance could be built entirely on privately controlled technology. 
We need a way to write the rules so that when political superintelligence arrives, we, the people, are able to harness it. You might think this is straightforwardly the job of our existing elected government. A basic tenet of liberal democracy is that the state regulates private companies to encourage public goods, limit negative externalities, and create neutral infrastructure for economic growth and prosperity. But AI has moved so quickly, and our government is apparently sufficiently ossified, that there may be a substantial gap of time during which AI companies are moving at lightning pace while our government is struggling to get up to speed. For this reason, there has been much talk recently of writing constitutions for AI, an idea, he says, which shouldn't strictly be necessary, but which in present circumstances, he believes, could make sense if written well. He says these constitutions should create the conditions that allow political superintelligence to flourish and improve our society. They should limit the powers of the company so that our agents answer to us, not to them. And they should make sure that companies cannot use their powerful technology to dominate us, economically or politically. This, he argues, is the hardest, most important, and most speculative layer. Companies' incentives to self-regulate are often weak. They aren't going to write constitutions that give up meaningful amounts of power unless they perceive it to be strongly in their interest to do so, whether because it fends off worse actions by government, because it gives them a competitive advantage, or because it is demanded by an important enough segment of society. The problems to be solved in this area, in his estimation, include the fact that what exists today is self-regulation, not constitutional governance. In other words, they are memos written by enlightened leaders, not binding frameworks that distribute power. 
The company writes it, interprets it, enforces it, and can rewrite it tomorrow. There is no separation of powers, no external enforcement, no mechanism by which anyone could check the company if it defected from its stated principles. Second, agent lawmaking is harder than it looks. If we want AI agents to deliberate on our behalf collectively, not just vote in isolation, but craft proposals, negotiate amendments, and form coalitions, we need to figure out how to make that work. In an experiment I ran, he says, I created a set of AI agents with different goals and asked them to govern themselves. They drowned in process. The constitution they wrote ballooned from under 200 words to nearly 10,000, while almost nothing of substance got done. This is a solvable problem, but it tells us that effective AI governance won't emerge spontaneously. It has to be designed. Finally, he says, human oversight has to be real without being paralyzing. The whole premise of AI governance is speed and scale. But if every decision requires a human to sign off, we lose those advantages entirely. We need to figure out where human oversight is essential, for example, the deployment of a powerful new model or the decision to enter a new domain, and where it can be relaxed so that systems can actually operate at the pace that technology allows. To make progress on this area, he suggests, first, envisioning a constitutional convention for the AI age, some sort of deliberative process where companies, researchers, civil society, and government negotiate binding frameworks for how AI power is distributed and constrained. Second, making corporate power sharing competitively advantageous. In other words, the company that establishes credible external oversight first gets to define the standard others must match. Finally, experimenting with agentic governance at small scale. The point, he writes, is to learn what makes these systems fail before the stakes are existential. 
Of course, he says, even if we solve all these problems, even if our agents are faithful, robust, and truly ours, operating within governance structures that keep companies accountable, there remains a question of timing. Can we build these structures fast enough? Andy concludes: I'm not interested in slowing AI down. I'm interested in speeding up how we build the structures that keep us free as AI gets more powerful. And I believe those structures will make AI more powerful in turn. Writing soon before his death during the Reign of Terror, Condorcet imagined a future in which the sun will shine only on free men who know no other master but their reason. Today we have it in our power to build that future. Our institutions aren't crumbling because the problems are unsolvable; they're failing because we haven't yet seriously tried to imagine how to rebuild them with the most powerful tools we've ever had. All right, so tons to chew on with this. And I think the big thing, more than any one point of follow-up or disagreement or anything that I would want to discuss after that, is that I am encouraged by the presence of this type of essay and discourse emerging. I think we need to plant flags that say here is how AI can be good and here's what we should do to achieve it. And I think we need to do that in just about every domain. But I do have some specific thoughts that this brought up for me. One glaring thing when reading this is just how little we have thought about and discussed agents in non-business domains. Now, on the one hand, this makes sense. While the idea of agents has been around for years, in fact it's been one of the exciting things that we were always just around the corner from since the ChatGPT moment, it really is in the last few months that they became a practical reality for lots and lots of people. In 2025 we were living in the B.O.C., the before-Openclaw times. Now we are living in the A.O.C., by which I mean of course not Ms. 
Ocasio-Cortez, but after Openclaw. And as much as people's first instinct is to explore agents in the business realm, I think it's very unlikely that it stays there. Now, let's go back to something actually more fundamental before we get into that. There is an implicit idea that runs throughout Andy's message that people actually care enough to want to be better informed in the way that Andy suggests is possible with AI. The cynical contra-Andy take is that people simply don't care enough to be informed. And at this point in American political discourse, one could be forgiven for assuming that empirically to be true. But let's put this in math terms and try to take our skepticism and frustration, or even cynicism, out of it for a minute. Let's imagine that instead of just organizing people into want-to-be-informed or don't-want-to-be-informed, we put everyone on a spectrum, a 10-point spectrum. At one, you care the least about being informed. At 10, you care the most about being informed. Well, when we think about how informed people are, it's actually not just a question, or at least it hasn't been just a question, of how much they want to be informed. It's also about the cost of being informed, by which I mean of course not just actual costs like the cost of a magazine subscription or the cost of a newspaper subscription, which is a real thing, but the time it takes to sort through sources and figure out what's legitimate and not. So, okay, now we have two numbers: your desire to be informed and the cost of being informed. While it may feel like we live in a world where no one wants to be informed, what if it's that all the people care 5, but it just used to cost 10 to be informed? If you care 5 and it costs 10, you're just not gonna be informed. That's math. But let's imagine now that your desire to stay informed stays exactly where it is. You care 5. However, we've lowered the cost to be informed to a 2. Boy, is that a whole lot more political action. 
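For what it's worth, that back-of-the-envelope model can be written down in a few lines. This is just a minimal sketch of the episode's hypothetical: a threshold rule where a person stays informed only when their desire score (1 to 10) meets or exceeds the cost of being informed on the same scale. The function names and the specific numbers are illustrative, not anything from Andy's essay.

```python
# Sketch of the episode's hypothetical: desire and cost both live on a
# 1-10 scale, and a person stays informed only when desire >= cost.

def is_informed(desire: int, cost: int) -> bool:
    """A person stays informed only if their desire covers the cost."""
    return desire >= cost

def share_informed(desires: list[int], cost: int) -> float:
    """Fraction of a population that stays informed at a given cost."""
    return sum(is_informed(d, cost) for d in desires) / len(desires)

# A population where everyone cares a middling amount: 5 out of 10.
population = [5] * 100

print(share_informed(population, cost=10))  # 0.0 - caring 5 vs. a cost of 10
print(share_informed(population, cost=2))   # 1.0 - same desire, cost lowered to 2
```

The point of the sketch is that nothing about the population's desire changes between the two lines; only the cost moves, and the share of informed people flips from none to all.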
And that, I think, is the type of implication that Andy's talking about with personal political agents. Then, however, we get to the question of who has an incentive to build these things. If that's a business in the way that we think about it now, backed by Y Combinator, into Andreessen Horowitz and Sequoia, into a public offering, doesn't that open up all sorts of conflicts of interest? And if they don't follow that path, how sustainable is it? Well, here I would argue that we've barely begun to scratch the surface of the new scalability and the new sustainability. We've only just started to push the limits on what agent autonomy can actually mean. Now, of course, as you would expect, we're doing it in the areas of business and making money first. But these types of autonomy experiments are not just going to be about trying to create agents that create zero-human companies. Pretty soon we're going to be pushing the autonomy envelope of all sorts of different types of agents. Thinking about it now, I know a fairly large number of people who would care greatly about the type of political progress that this particular type of agent that Andy is talking about could represent, and who would see it as an extremely good use of their entrepreneurial energy, entrepreneurial energy which has in the past demonstrated itself to be extremely powerful and successful. What's more, I tend to think that there is way more room in this agentic future for perpetually aligned business models, for the simple fact that I believe that scale will be able to be achieved without the previous cost structure of scale. What I mean by that is that there are multiple types of stakeholders in our system. We don't just have customers and consumers. We also have investors. 
To reach scale before, you ultimately had to sign deals with the devil, or not the devil really, but just the non-consumer, non-customer part of consumer capitalism, that is, the investors who care about the end customer only insofar as they are part of an actual financial equation. That has created challenges, particularly in the realm of Big Tech, where network effects force companies cascadingly towards natural monopoly, and where monopoly in the context of expansionary capitalism tends to mean that at some point, since you can't scale the network any farther horizontally, you have to scale the business vertically, that is, extract more from the people that you already have there. This has led to many of the challenges that continue to plague us when it comes to our relationship with Big Tech. In a future where we build with agents in a way that shifts entirely the structure of how companies actually grow and are run, that might change dramatically. If I can, with 100 passionate people, 100 passionate agent builders and orchestrators, build a scalable, successful business that can reach hundreds of millions or billions of people without needing venture capital, without needing public markets, both of which would relentlessly push for growth at all costs, it would change fairly dramatically the ability to align that business with the core human interests it set out to serve. Which is not to say that I think that the venture and public market pathway is moribund by any stretch of the imagination, but the fact that there is an alternative will have implications, and ones that I think could be very powerful, for example, in creating the ability to create this type of project without some of the conflicts that you might assume would eventually come up. Now, of course, one of Andy's other big points is the challenge of the ownership of labs. I think it's real, but I think that there are a lot of counterweights. I think model competition matters. 
I think we're going to see a lot more model sovereignty over time. And I think basically the model companies will eventually have to look a lot more like public utilities than they do today, where at least the intelligence that they're serving, the models themselves, in other words, is bought and consumed and distributed very differently than the way we think of SaaS products today. For example, already, even as this agent inflection takes hold, the fact that we have this open alternative in Openclaw, which yes, of course, relies theoretically on the model companies, but which can move in and out of models as the owner so chooses, already shows that there is going to be a counterweight to the pure centralization. Anyway, like I said, the big point is not any one of these thoughts. It's the collection of them and what they represent in terms of where I hope the discourse goes. I'm excited to see folks like Andy thinking through these things and writing these pieces, and I will continue to highlight them as they come up on this show. For now, that is going to do it for today's AI Daily Brief, a true long read Sunday edition. Appreciate you listening or watching as always, and until next time, peace.
