Transcript
A (0:00)
From Google's new Flash model, to Amazon's moves, to more OpenAI fundraising, to Bernie Sanders' proposed moratorium on data center construction, we are talking about the most important stories in AI this week. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Superintelligent, Robots and Pencils, and Blitzy. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at sponsors@aidailybrief.ai. And lastly, if you want to learn more about our forthcoming original research and benchmarking efforts, go check out aidbintel.com; you'll be hearing a lot more about that in 2026. However, for now, we have got to dig into so many stories that we've missed. It's been a weird week. On the one hand, we are prepping for the end of the year, which means I'm producing a ton of shows for the next couple of weeks as everyone goes off for the holidays. But there have also been a couple of big releases which consumed entire shows. For example, we had ChatGPT Images 1.5, which sort of shoved things out of the way. So what we're doing today is kind of the inverse. Instead of a full episode on a single topic, we are going to be talking about a ton of topics. It's sort of an extended headlines episode, except that many of these stories could easily be mains in their own right. And we are going to kick off with another model release. Now, this one is really interesting because on the one hand it is not the premier model release, and yet on the other hand, according to the company behind it, it kind of seems like it is. Around the middle of the week, Google staffers started cryptically, or not so cryptically, posting lightning bolts all over Twitter.
Slash X. Now, given the pattern of how we got Gemini 2.5 Pro followed by 2.5 Flash, everyone assumed that we were getting Gemini 3 Flash and that that's what the lightning bolts referred to. And indeed, that is what we got. What was unexpected is just how good Gemini 3 Flash appears to be. Sundar Pichai tweets: We're back in a flash. Gemini 3 Flash is our latest model with frontier intelligence. Built for lightning speed and pushing the Pareto frontier of performance and efficiency, it outperforms 2.5 Pro while being three times faster at a fraction of the cost. And they really just kept coming back to the speed. Sundar tweeted later: Watch below as 3 Flash generates complex graphics, 3D models and a web app before the previous generation even finishes processing. Google AI Studio product lead Logan Kilpatrick writes: Gemini 3 Flash punches way above its weight class, surpassing 2.5 Pro on many benchmarks while being much cheaper, faster and more token efficient. And if you're wondering why all the comparisons are to 2.5 Pro instead of 3 Pro, this seems to be a deliberate choice from inside Google. Google chief scientist Jeff Dean writes: One of the things we strive to do with each new Gemini release is to make the new Flash model as good as or better than the previous generation's Pro model. So this is explicitly a thing that they were attempting. Gemini 3 Flash exceeds 2.5 Pro on nearly every metric, often by very large margins, and almost matches Gemini 3 Pro on most benchmarks. And honestly, as much as the marketing comparison is to 2.5 Pro, it's very clear that even relative to 3 Pro, a lot of people inside Google have this as their favorite workhorse, daily-driver type model. Noam Shazeer writes: We've packed Gemini 3's Pro-grade reasoning into a leaner model with Flash-level latency, efficiency and cost. It's my favorite model to use. The latency feels like a real conversation, with the deep intelligence intact.
Demis Hassabis calls it the best pound-for-pound model out there, and I actually wonder if this idea of pound-for-pound models is going to become something that we think about more as we move into more complex workloads, where companies actually are shifting between the state of the art and these more token-efficient models. On the benchmarks, Logan Kilpatrick points out the massive jump from the middle-bottom of the pack on Artificial Analysis for 2.5 Flash all the way up to the very top, behind only Gemini 3 Pro and GPT-5.2. 3 Flash also did really well on ARC. Simon Smith says that on ARC-AGI-1, it scores just behind GPT-5.2 extra high, but for about 5.5 times less cost per task. And it turns out that, at least on a first-impression basis, it's not just Google insiders who are really excited about this model. AI entrepreneur Bindu Reddy writes: Gemini 3 Flash: intelligence too cheap to meter. The Gemini Flash series has been one of the best small model lines ever. We have 100 times more usage on Flash compared to the Pro version. Flash 3 seems to be Google's best model yet. Deletebrow writes: I had early access to Gemini 3 Flash and it shocked my vibe test, as I walked in with 2.5-Pro-to-Flash expectations. Looking at the evals now, it all makes sense. The latent story here, from my point of view, is that Google has cannibalized a big chunk of 3.0 Pro use cases. The fact that Google pushed this out shortly after 3.0 makes me think they already know future 3.x Pro will have stellar performance. Flash is now the best agentic model, hands down, for its price point. The lower score on HLE and GPQA Diamond versus Pro means it is not as knowledgeable as Pro, which makes sense. The choice is clear: Flash 3.0 should be the de facto agentic model unless you are in a knowledge-heavy domain. But I suspect even there, with sufficient context management, you can get good value out of Flash 3.0.
Also giving a little sum-up to the year, he writes: Gemini LLMs have been a black swan for a big chunk of 2025. I doubt any outsider could have predicted total Pareto frontier domination by the Google franchise by end of year. Now, the one thing that people are pointing out, just for the sake of some comparison, is that the hallucination rate seems very high relative to other models. In the Artificial Analysis Omniscience hallucination rate test, which measures, quote, how often the model answers incorrectly when it should have refused or admitted to not knowing the answer, it was at the very top of the heap at 91%. So that is certainly something to keep an eye on. Still, overall, people are incredibly excited about this release. Google also announced that, in addition, they're expanding access to the Pro models and providing paid subscribers with higher limits. And one practical operator's note: you might have noticed that the model selector choices have changed from Fast and Pro to now Fast, Thinking and Pro. Google's Josh Woodward gives a key: Fast equals 3 Flash, Thinking equals 3 Flash but with thinking, and Pro equals 3 Pro with thinking. So if you want 3 Pro, you have to select Pro, not just Thinking. Anyways, I haven't had too much time to really dig in and test, but so far it seems like a great model and another good addition to our toolkit. Next up, we move to some big breaking investment stories from the last couple of days. Bombshell OpenAI fundraising news, as Amazon considers making a $10 billion investment. The Information reports that Amazon is in talks to invest 10 billion or more into OpenAI, citing three people familiar with the discussions. The valuation would be higher than 500 billion, which was the valuation struck for the tender offer completed in October.
Talks commenced in October, after OpenAI completed their corporate restructuring, which gave OpenAI the ability to sell normal common stock to investors as well as putting a final end to their exclusive compute arrangement with Microsoft Azure. The report highlighted several potential aspects of an OpenAI-Amazon megadeal. First, it would of course help OpenAI deal with immediate cost pressures. They've committed to spending 38 billion renting servers from AWS over the next seven years, so an equity-for-compute deal could be more cost effective. The Information's sources also said that OpenAI making use of Amazon's Trainium chips could be part of the deal. Amazon has been pushing hard to drive Trainium adoption, including making the use of Trainium a condition of their investment in Anthropic earlier this year. Now, crucially, Amazon won't be able to offer OpenAI models through their Bedrock platform; Microsoft still has the exclusive right to offer OpenAI models in cloud services. However, Amazon and OpenAI have apparently been discussing other partnership opportunities. One of the sources said that agentic e-commerce opportunities have been discussed, while OpenAI also wants to sell enterprise ChatGPT seats to Amazon staff. Now, it's worth reinforcing that 10 billion is a very big number that could go a long way to bridging the gap to OpenAI's anticipated IPO. OpenAI's 2025 fundraising total was 40 billion, and they're projecting a cash burn of 100 billion over the next four years. The deal could also shake up allegiances within the AI space. At that point, OpenAI would be partnered with all three of the major cloud providers, with everyone but Google also on the cap table. More complicated are OpenAI's chip deals.
They already have plans in motion to develop their own custom silicon with Broadcom, and Amazon's Trainium adds to their diversification away from Nvidia. But for the moment, Nvidia is still the only chip producer with the quality and capacity to fill OpenAI's new data centers. Nvidia is also an existing shareholder in OpenAI and announced a deal in September to invest up to 100 billion or more. It was later revealed that this fundraising was highly conditional and basically gave Nvidia the option to invest if they so choose. There is another whole dimension to this around whether it suggests that Amazon's partnership with Anthropic is drifting apart, or whether it just reflects the fact that in AI today, everyone is going to partner with everyone. Whatever the case, markets obviously liked it. Amazon was up 2.3% in overnight trading, and while some people immediately rushed to say how the news is bullish for some and bearish for others, I kind of think Amit Is Investing had it right here when he wrote: Bullish Amazon, not really bearish Nvidia, AMD or Broadcom in my opinion. Market may interpret it as bearish, but the chip ecosystem is massive and only growing, even with custom ASICs. The bigger question is how OpenAI continues to get every single important tech company to invest in them and tie part of their success to OpenAI's ability to scale. Now, that was not the only fundraising news from OpenAI this week. Later in the week, The Information again reported that OpenAI had even more fundraising in the works. Talks are reportedly underway to raise not $10 billion but tens of billions of dollars, with sources saying the final number could be as much as 100 billion. The valuation is said to be 750 billion. Now, sources noted that the talks are still early and nothing has been finalized. This would obviously be a 50% jump in valuation since OpenAI's tender offer in October, but it might not be as crazy as it seems.
For example, if they can actually source 100 billion in new investment, then that makes up a lot of the difference as a post-money valuation. Still, the new valuation wouldn't leave a lot of room for OpenAI's IPO, where they're reportedly targeting a trillion-dollar valuation. Now, there's no precedent for an IPO that size, and if they have to reach even further, the IPO could start to really stretch the limits of public market liquidity. Multitrillion-dollar companies exist, but they generally don't have a wave of venture fund managers who need to sell their stock all at the same time. The bankers working on next year's batch of IPOs from OpenAI, Anthropic and SpaceX are apparently thinking about this problem already. The Information reports that bankers are considering staggered lockups for existing investors. The standard arrangement is a 90- or 180-day cliff, but allowing so much selling all at once could easily overwhelm market liquidity. We are really getting to the point where any news about OpenAI fundraising or dealmaking is kind of a Rorschach test on what you think of OpenAI. Daniel Newman writes: The smartest and most sophisticated investors are all piling into OpenAI at eye-watering valuations while a wildly bearish narrative spreads about its demise. I'll bet on the smartest and most sophisticated. AI will be more than fine. Now, staying on the theme of OpenAI and Amazon, but shifting over to the Amazon side of that deal, Amazon also made further news with the announcement of a new AI-focused department. CEO Andy Jassy announced on Wednesday that veteran AWS executive Peter DeSantis would lead a new organization that pulls their disparate AI initiatives under one roof. The organization will bring together the AI Models team that's in charge of training Nova, as well as their AGI Labs team that's currently working on computer use agents.
Silicon development, including the Trainium chip, and Amazon's quantum compute project will also be folded into the new AI org. Jassy commented: I believe we are at an inflection point with several of our new technologies that will power a significant amount of future customer experiences. DeSantis is currently the Senior VP of AWS and has been with Amazon for almost three decades. Jassy wrote that DeSantis led some of the most transformative technologies in computing history. For example, DeSantis led the launch of EC2, which is the core AWS service, as well as managing their custom silicon team and a dozen AWS infrastructure and product innovations. Jassy concluded: The path ahead is full of opportunity. With the foundation that's been built, the traction we're seeing, and Peter's leadership bringing unified focus to these technologies, we're well positioned to lead and deliver meaningful capabilities for our customers. Now, alongside the reorg, there's also a changing of the guard for AI at Amazon. Jassy announced that Rohit Prasad, Amazon's head scientist of AGI, is leaving the company at the end of the year. Prasad had been with Amazon since 2013 and served as the head scientist for the Alexa project from its earliest days. While Jassy was magnanimous, most people are kind of assuming that Amazon's stagnant year in AI contributed to Prasad's departure. As another potentially important footnote, robotics specialist Pieter Abbeel has been placed in charge of the frontier models team. Abbeel joined Amazon in 2024 after an acqui-hire deal with his robotics startup Covariant, which was the first to launch a commercial foundation model for embodied AI, in March of last year. And first impressions are positive. Computer scientist and X curmudgeon Pedro Domingos writes: Unlike Meta, Microsoft or Apple, Amazon now has one of the best AI researchers in the world leading its AI efforts. Great move and Godspeed,
Pieter Abbeel. Now, on the one hand, we tend to take leadership shakeups as signs of trouble, but there is a very clear pattern here. Just like Google went through a hard period where they had to reorganize everything under a single roof and get everyone aligned in a common focus, Meta has been going through that all year, and it seems like Amazon is following suit now. My guess is that Amazon is poised for a much better '26 than the '25 they had because of all these moves, and so I'm excited to see what the new setup produces. Shifting now to a little bit of market news: some think that a busted debt deal for Oracle could be the pin that pops the AI bubble. The Financial Times reports that Blue Owl Capital has declined to fund Oracle's next big project, a $10 billion data center in Saline Township, Michigan. Blue Owl is a private equity group that has been one of Oracle's primary funding partners over the past year. Sources said that negotiations have stalled out and the agreement to fund the project will not go forward. The FT reported that Blackstone is in talks to step up as a backup, but they're yet to sign a deal, putting the project in jeopardy. Now, Blue Owl had been structuring these deals through special purpose vehicles, essentially new companies that build and own the physical assets and then lease the facility to Oracle once complete. Other large investors like pension funds, family offices and sovereign wealth funds then buy debt and equity issued through the special purpose vehicle. The data center and associated revenues are typically put up as collateral for the debt. Reportedly, investors pushed for stricter leasing and debt terms for this deal in light of shifting market sentiment and fears about Oracle's growing debt load. Now, Oracle, for their part, played down the issue, commenting: Our development partner, Related Digital, selected the best equity partner from a competitive group of options, which in this instance was not Blue Owl.
Final negotiations for the equity deal are moving forward on schedule and according to plan. A Related Digital spokesperson added: The notion that Blue Owl walked away is unequivocally false. This is an exceptional project that drew significant interest from equity partners. Still, the news has markets rattled, with Oracle stock plunging by 5.4% on Wednesday after the article was published. The stock is down 45% since its all-time high in September, which is of course when they first announced their $300 billion OpenAI deal; it is now trading below the levels from when that deal was announced. Andreas Steno Larsen writes: This is how bull markets end. If debt markets dry up, that is, the availability of money, then the AI trade is in trouble. This is the key question for 2026. Wherever we are, it's very clear that markets are on edge heading into the end of the year. This is the first indication we've had that private equity firms don't have an infinite appetite for data center funding, but it's not obviously the end of the trend. This could all blow over if Oracle finds a new partner, and it might even just be a seasonal issue, with few investors looking to make new allocations in the final weeks of the year. Still, the jitters are evident and becoming much more frequent as we close the book on 2025. Sure, there's hype about AI, but KPMG is turning AI potential into business value. They've embedded AI agents across their entire enterprise to boost efficiency, improve quality, and create better experiences for clients and employees. KPMG has done it themselves; now they can help you do the same. Discover how their journey can accelerate yours at www.kpmg.us/agents. That's www.kpmg.us/agents. Today's episode is brought to you by my company, Superintelligent.
Superintelligent is an AI planning platform, and right now, as we head into 2026, the big theme that we're seeing among the enterprises that we work with is a real determination to make 2026 a year of scaled AI deployments, not just more pilots and experiments. However, many of our partners are stuck on some AI plateau. It might be issues of governance, it might be issues of data readiness, it might be issues of process mapping. Whatever the case, we're launching a new type of assessment called Plateau Breaker that, as you can probably guess from the name, is about breaking through AI plateaus. We'll deploy voice agents to collect information and diagnose what the real bottlenecks are that are keeping you on that plateau. From there, we put together a blueprint and an action plan that helps you move right through that plateau into full-scale deployment and real ROI. If you're interested in learning more about Plateau Breaker, shoot us a note at contact@bsuper.ai with Plateau in the subject line. AI isn't a one-off project. It's a partnership that has to evolve as the technology does. Robots and Pencils works side by side with clients to bring practical AI into every phase: automation, personalization, decision support and optimization. They prove what works through applied experimentation and build systems that amplify human potential. As an AWS certified partner with global delivery centers, Robots and Pencils combines reach with high-touch service. Where others hand off, they stay engaged, because partnership isn't a project plan, it's a commitment. As AI advances, so will their solutions. That's long-term value. Progress starts with the right partner. Start with Robots and Pencils at robotsandpencils.com/aidailybrief. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale codebases with millions of lines of code. Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80%-plus of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice. To bring an AI-native SDLC into your org, visit blitzy.com and press Get a Demo to learn how Blitzy transforms your SDLC from AI-assisted to AI-native. Next up, another story that could easily be a main: ChatGPT is ramping up their third-party integrations with the release of their app store. There is now an app directory allowing users to browse available integrations, and OpenAI has also rebranded their Connectors feature, now calling them apps as well. According to a support page, chat connectors are now apps with chat, deep research connectors are now apps with deep research, and synced connectors are now apps with sync. ChatGPT is one step closer to becoming an everything app, or an AI operating system, rather than just a chatbot. Earlier in the week, OpenAI hired former Shopify VP of Product Glen Coates as their new Head of App Platform, and his announcement made it very clear what the plan is, writing: We're going to find out what happens if you architect an OS ground-up with a genius at its core that can use its apps just like you can. I can't think of anything more exciting to work on. And with the app store now live, we had a wave of third-party integration announcements. Salesforce CEO Marc Benioff wrote: Welcome, Agentforce Sales and ChatGPT, our Christmas gift.
The world's number one sales cloud is now alive inside the world's number one LLM. DoorDash CTO Andy Fang posted: DoorDash is now partnering with OpenAI to bring grocery shopping directly into ChatGPT. You can now turn a recipe idea instantly into a shoppable grocery list in the chat and seamlessly check out on DoorDash for on-demand delivery. In an announcement post by CEO of Applications Fiji Simo, a whole slew of other partners including Adobe, Airtable, Apple Music, Clay, Lovable, OpenTable and Replit are all now also available. I would say, in general, the attitude among observers is interested observation. Developer Nick Dobos writes: Intriguing but odd. Available on mobile but buried deep. Photoshop is nice combined with ImageGen. Nice to see some common business ones like Linear and Notion. Some are super weird. Fairly uninteresting first batch. Who is connecting Peloton or Zillow? Hoping we see more weird and creative stuff now that you can finally submit apps. I need to sit down and find the time to actually jam on some app ideas. Some commentators, on the other hand, are a little bit more skeptical of the whole idea of an app store. James H. asked Marc Benioff: Wouldn't it make more sense to have GPT inside Salesforce instead of the other way around? And this is kind of the thing to watch going into next year. Now, if you are interested in the whole ChatGPT app store conversation, on Sunday's Big Think episode I'm reviewing some big predictions that others have made, and I actually talk a little bit about a prediction that ChatGPT becomes the next great app platform. One last OpenAI update: we got more details on OpenAI's deal with Disney. Bloomberg reports that no cash changed hands as part of the deal, with Disney taking payment exclusively in stock and warrants. Disney received $1 billion in stock upfront and also received an option to buy an undisclosed additional stake in the company at a later date.
While Bloomberg's sources didn't get too specific about the terms of the warrants, they did say the deal was structured to align the companies' financial interests if Sora is a hit. Alongside the financial terms, CEO Bob Iger disclosed that the deal only has a one-year exclusivity period, after which Disney could sign similar deals with rival AI labs. While the structure means Disney is waiving a cash licensing fee, Iger believes in the upside potential, stating: We want to participate in what Sam is creating, what his team is creating. We think this is a good investment for the company. Iger also discussed the rationale for cutting a deal instead of fighting the AI industry in court. He said: No human generation has ever stood in the way of technological advance, and we don't intend to try. We've always felt that if it's going to happen, including disruption of our current business models, then we should get on board. Now, in our final section of this extended news episode, we shift our attention more to the government and public side of AI, starting with a new announcement from the Trump administration of a department focused on AI infrastructure that they're dubbing the US Tech Force. According to the government website, the Tech Force will recruit, in their words, an elite corps of engineers to build the next generation of government technology. The administration is aiming for around a thousand hires. Recruitment will focus on early-career technologists from traditional recruiting channels. Experienced engineering managers will also be seconded from private sector partners. These partners include AWS, Apple, Google, Dell, Microsoft, Nvidia, OpenAI, Oracle, Palantir and Salesforce. The government is looking for people with expertise in software engineering, AI, cybersecurity, data analytics and technical project management. Importantly, the Tech Force won't be a military division.
However, it's not clear where in the government hierarchy it actually will be located. Teams of workers will be allocated across all government departments, reporting directly to individual agency heads. The idea seems kind of like embedding forward-deployed engineering teams into government agencies to drive AI and tech projects. Still, the recruiting ad compared Tech Force to the storied US engineering corps of World War II. The website said engineering teams would tackle the most complex and large-scale civic and defense challenges of our era, from administering critical financial infrastructure at the Treasury Department to advancing cutting-edge programs at the Department of Defense, and everything in between. Essentially, the initiative is an all-of-government effort to completely rebuild software and hardware infrastructure. Scott Kupor tweeted: Your government needs you to transform the federal government through modern software development. If you're up for a huge challenge, join the country's best and brightest technologists in the inaugural class of US Tech Force. We're partnering with the top US technology companies to take on this challenge. You'll learn a ton, network across the most important government agencies and private sector companies, and ultimately create powerful career opportunities, whether you want to continue in the public sector or join the private sector. And indeed, the initiative appears to be in part a jobs program for younger tech workers, who've been hit particularly hard by current conditions in the labor market. Recruits will serve two-year stints in government and then will be eligible for preferential hiring at those partners. AJ Wald writes: I think Tech Force is needed, but anyone that has worked in tech knows it takes more than two years.
It should be five years and all student loans paid off if they finish. Staying on White House-related policy, Nvidia is considering ramping up production of H200s to meet Chinese demand, writes Reuters. Nvidia has told Chinese clients that it's evaluating adding production capacity for its powerful H200 AI chips after orders exceeded its current output level. Now, that would be a very sharp departure from recent reporting that suggested that Beijing would heavily ration H200s. ByteDance and Alibaba have reportedly already placed large orders, and while Reuters stated that Beijing is yet to greenlight any imports, the Chinese tech companies clearly want to get their orders in as soon as possible. Now, the news raises a few implications for US chip policy. First, it gives hope that the US strategy of flooding China with US chips isn't too late. Nori Chu, the investment director at White Oak Capital Partners, also confirmed that H200s far exceed anything being produced by Huawei. He told Reuters: Its compute performance is approximately two to three times that of the most advanced domestically produced accelerators. I'm already observing many cloud service providers and enterprise customers aggressively placing large orders and lobbying the government to relax restrictions on a conditional basis. The surge of orders also revisits concerns about Nvidia servicing Chinese demand over the needs of the domestic industry. TSMC is capacity constrained, so chip production is a bit of a zero-sum game at the moment. US lawmakers had proposed a bill last month to force chipmakers to give preferential treatment to domestic firms, but it was set aside after Jensen Huang visited Washington. Nvidia's decision could cause a backlash if we see a shortage of chips hold up data center construction. And indeed, it is with data center construction and the politics around it that we will conclude this episode.
I'm sure you have seen this, but Senator Bernie Sanders is calling for a moratorium on data center construction in order to slow down the AI race. In a new video announcing the policy proposal, he referenced multiple projected harms, including the isolation of children through chatbot use and worker displacement. Sanders said: There is a whole lot about AI and robotics that needs to be discussed and analyzed, but one thing is for sure, this process is moving very, very quickly, and we need to slow it down. I will be pushing for a moratorium on the construction of data centers that are powering the unregulated sprint to develop and deploy AI. That moratorium will give democracy a chance to catch up and ensure that the benefits of technology work for all of us, not just the wealthiest people on earth. Now, Sanders has, of course, been increasingly focused on AI labor disruption as a core policy discussion over recent months. In July, he attached the AI productivity boom to his calls for a four-day workweek, and more recently he wrote multiple op-eds fleshing out his viewpoint. His central thesis is that democratic input is required on a technology of such transformative potential, and that the conversation around policies to protect workers needs to start now, before the bulk of the damage is done. Now, even if you don't agree with Bernie's labor-centric and democratic socialist political views, a lot of folks agree that elevating this discussion is a pretty reasonable place to start. Previously, I even said that I liked that Bernie Sanders was focusing on the four-day workweek idea, because on the one hand it acknowledges the inevitability of AI while also trying to distribute the benefits more widely. In what will be unsurprising to many of you, I feel very differently about this type of moratorium, and lots of others seem to as well. AI media creator Matthew Berman wrote: I hate everything about this video.
Cherry-picking the doomer talking points to scare people is gross. A moratorium on data centers would hand the AI race to China. They are wishing we stopped building. I really don't want a 90-year-old to dictate tech policy. And by the way, if you're thinking that the narrative of an AI pause sounds pretty familiar, then you're not far off. Sanders has been consulting on policy with Geoffrey Hinton, who was central to advocating for an AI pause in 2023 as part of a coordinated effort led by the Future of Life Institute. That push, believe it or not, is still ongoing, with the Future of Life Institute now conducting polling in an attempt to establish popular support for their cause. It is very clear that AI development and data center policy are going to be key in next year's midterms. Sanders is now planting his flag at the extreme end of the political discussion, but he's far from alone. And this is not, right now, a left-right issue. Or at least, there are both members of the left and the right that are carving out this anti-AI political territory. Florida Governor Ron DeSantis hasn't gone so far as to lock arms with Bernie Sanders yet, but he did recently argue that data centers have zero economic benefit for locals. He said recently, once it's done, it employs like half a dozen people, and these tech companies will likely bring in foreigners on some visa. They're not going to hire from your local community. That's just not what they do. Now, it is way beyond the scope of this particular show to dig into how much I think the people building data centers have failed to recruit communities to their cause and to actually make themselves valuable to those communities. But I'm sure that's something we'll talk about a lot more in 2026.
When it comes to the moratorium specifically, many economically minded folks on X pointed out that this would have pretty much the exact opposite effect of Bernie's stated goal of ensuring that the benefits of technology work for all of us. Austin Allred writes, making it so no one else can build data centers literally locks it into the richest companies owning AI. Nick Dobos again writes, constrain AI compute and free tiers will vanish and only the rich will have the best AI. Imagine going to school and only the rich have AI to help learn. This will do the exact opposite thing you're aiming for. Dystopia Breaker writes, I agree with many of your concerns around risk and governance, but NIMBYing the data centers is going to make the problem worse, not better. If you halt compute buildout here, you're ensuring that only the relatively wealthy have access to this technology that you yourself describe as transformative, through the same price effects as NIMBYism in housing. Some took this as an example of the work that the AI and tech industry needs to do to actually align with people's perspectives. Nick Arner writes, it's increasingly important that the tech industry effectively communicate and diffuse the benefits of AI faster, or this view will become more popular and widespread. I'll close with a take from Parmita Mishra, a computational biologist. She writes, it is so sad truly to think that data centers are only helping Mark Zuckerberg and all the other billionaires when there are people waiting for a cure to their disease with five years to live. Discovery is the zeal of life. You want to slow down scientific discovery? You think only Mark Zuckerberg is affected by this? Come on. We all know the importance of scientific discovery. It's why we have the World Wide Web. Thank God they did not cut that down. You know who needs data centers? We need data centers. 
The countless small businesses, scientists, and engineers that are spending all their time on Earth trying to cure disease in lieu of optimized hardware. You need more data centers for cancer discovery, state of the art. It is a desperate need for many. Some of us are trying to use AI to fight diseases. Some can clown on us, but the smartest people in the world see the need for AI for biology as obvious. Some even see it as a moral imperative. Blanket deceleration is what you are proposing. The data center demands for biology are exploding, and there are consequences for not having a nuanced view. This is important for all American patients, all families, all people on Earth. Like I said, there is so much to discuss around this; it will be an unavoidable part of the conversation in 2026, but for now we will leave it there and close this episode. Hopefully this gives you a sense of all the other things that were happening this week. In any case, I appreciate you listening or watching, as always, and until next time, peace.
