A
Good morning, I'm Justin Hendricks, editor of Tech Policy Press. We publish news, analysis and perspectives on issues at the intersection of tech and democracy.
B
Artificial intelligence is the next frontier, but is it also the next bubble? Some experts think we could be heading towards something similar to the implosion of the dot com bubble, where technology stocks rose rapidly with the emergence of the Internet in the late 90s and then crashed in 2000.
A
The real question is: if it's only stock market investors, then those investors will of course suffer significant losses, and this includes sovereign wealth funds, pension funds and the like. But the bigger threat to society is if it's financed by borrowed money, which is increasingly the case, and especially if banks are lending the money.
B
But it's very interesting that the rest of the world's stock markets have done far better than America this year. Possibly the widest gap since 2009. So I think what's happening is that, yes, in America we are spending a lot on AI, but I think there is a quiet questioning going on as to whether all this investment is actually going to pay off.
A
If you read, watch or listen to financial news, you'll find there is a boom in discussion over whether the AI boom is a bubble and what the consequences might be if it bursts. Today's guest says if such a crash occurs, it will represent a significant policy opportunity, a potential point of intervention that could lead to meaningful reform of the tech sector. Let's jump right in.
B
I'm Asad Ramzan Ali. I'm the director of AI and tech policy at the Vanderbilt Policy Accelerator.
A
I'm pleased to have you join me today. We're going to talk about this report you've just put out from the Vanderbilt Policy Accelerator called After the AI Crash. And the title assumes there will be a point in time which we can refer to as after the AI crash. Why are you so convinced that that crash is coming?
B
Yeah, and appreciate you having me, Justin. I started this project with a question: what if there is an AI crash? There was a lot of discussion at the end of last year around are we in a bubble, are we not in a bubble? And my thought was, if we are, are we prepared for that as a policy community? What do we do about it? I wasn't convinced that there was one when I started writing. But when I got into the weeds of researching this paper, I became convinced that there is a pretty shaky basic financial situation going on. So I don't start from the premise that we definitely will have an economic crash, but I do start from the premise that it's plausible. So I try to lay out in the report here's how we might get there, here's how it might occur, and, if it does, the main part of the report is how we think about a policy response.
A
It seems like your fundamental argument for why a bubble exists is similar to what we've seen various other experts point to: that there's a mismatch between the capital expenditures on AI, particularly from the large firms, the billions and billions being invested, and the rate of returns. That's essentially it. There's a basic math problem.
B
There's a basic math problem. JP Morgan anticipates that we're going to invest $5 trillion between now and 2030. When you match that with the tens of billions that are coming in as revenue from these AI systems, there's a basic math problem. And at that fundamental level, that's the only thing I'm going off of. These huge projections that AI is going to change everything might be true for a lot of parts of society, but whether, in the next five years, revenues will catch up to the trillions of dollars of investment is a completely different question from how good the technology is, or what the good and bad societal implications of adopting it are. It's a math problem.
A
There are counterarguments to this point of view. Some of them hold that we're looking at this the wrong way, that this is more like electricity than the dot com bubble, or more like the steam engine, which is going to radically change a variety of sectors. I don't know. Do you feel like you've seen any counterargument to the bubble thesis that still gives you any doubt?
B
The place where I could be wrong is how quickly these technologies get adopted within the businesses that pay for them. I use a lot of JP Morgan's analysis on the finance side, but Bain & Company, the consultancy, also did an analysis. The way they look at this is: if we shifted all of society's technology spending, and most of that is enterprise spending, what businesses spend on backend systems, if all of that moves to AI, we still have a gap of hundreds of billions of dollars per year in revenue. So that's where my mind goes. You have to assume not just that all of it shifts in the next five years, but that it expands significantly. And so, again, I'm not of the view that I have to be right, that there is a crash. I'm of the view that it's totally plausible, and if there is one, we should be ready as a policy community for what that means. Because of the financial structure of how the money is being spent, this impacts all of us. It's all of our money, in some way or another, that's mixed into everything that's happening.
A
Well, let's get into that a bit then. As far as being ready, I mean, it's both partly to stave off the worst consequences. But you know, if we look back to the housing crisis, we look back to the 08 period, there was an opportunity then to reshape the economy that some would argue maybe was squandered. You know, that more could have come out of that moment to potentially, you know, fundamentally shift the way the American economy works. Is that the type of thing you're thinking about here? That there's a little bit of both here? It's a little bit of stave off the worst and then also see it as an opportunity if it occurs?
B
Yeah, that's right. I think about the 2007-2008 crisis, where a housing financial crisis caused an economy-wide downturn that impacted everybody, every industry, every person who was saving up for retirement. The way I think about that is that the responses were, one, we pumped a bunch of capital to bail out companies and banks, but not people. And that's where we should learn from it. There was a huge political reaction; you'll remember, on the right and left, the Occupy Wall Street movement, and so much of the political backlash held that out as a central tenet of what it was reacting to. Second, the types of things we did I look at in a couple of different buckets. Dodd-Frank set up a system that was really technocratic. There was an interagency group, the Financial Stability Oversight Council, that met a couple of times a year and figured out which banks and nonbanks would be systemically important enough to try to avoid another 2008-like scenario, to try to avoid the government bailing out banks. And yet not that long ago we had to bail out Silicon Valley Bank, a bank that was on nobody's list as systemically important at that level. So that's where I go back to: did that work? Do we feel good about that? And part of what I hear when I talk to the people who worked on that response is that when you're reacting to a crisis, you don't have the imaginative capacity to think about the big ideas. So this is an attempt to say: let's do that thinking now. Let's lay it out, let's have the conversation, let's have the debate about those issues now.
A
I mean you brought up the idea of bailout. It feels to me that the kind of politics around a potential bailout of tech firms are pretty poisonous. That would not be an attractive idea to pretty much anybody at this point.
B
I think that's right, if you conceive of a bailout as a check written by the government to a technology firm. But I don't think that's right for other types of things that might happen. The CFO of OpenAI mentioned loan guarantees, and that got a big backlash because it would be seen as a bailout. What I get worried about is that we're already leaving billions of dollars on the table in tax breaks at the state level; we're already doing that for data center build-out. Doesn't that start to look like a slow bailout? So that's the scenario I worry about: that we do it in other ways that look like tax subsidies, or long-term contracts where the government is overpaying for service. That's what I'm watching for. I'm not saying we're too far down that road now, although on tax subsidies we are pretty far down that road. But the check to a company, I don't think that's a good idea, for many reasons, including that when there's a bank run, there's a systemic problem; it's my money in that bank. There's no such thing as a bank run on an OpenAI or an Anthropic. Right. That analog for the rationale for a bailout doesn't even exist.
A
I want to get into some of the policies that you think Congress should consider if in fact this bubble does burst. But just for anybody listening abroad, or who may not be familiar with some of the things that came out of the '08 crash, we should cover the basics on Dodd-Frank, and maybe even Glass-Steagall. There are some parallels here that you bring up.
B
Yeah, so let's do the couple-minute version. In the 1920s and 30s, America experiences the Great Depression. The whole world experiences a Great Depression that is in large part a financial crisis. The two members of Congress who led the banking committees, Glass and Steagall, those were their last names, Senator Glass and Representative Steagall, came together on a number of banking reforms in the 1930s, including the 1933 Banking Act, which includes a few sections that we now call the Glass-Steagall Act. It's just four sections of that larger bill. The basic tenet is that a bank that takes in deposits, that takes in your money, can't also be investing those kinds of funds in the market. That basic tenet of separating those kinds of businesses, to keep the risk pools separate and to not make depositors bear the risk of a bank's commercial activity, actually goes back to the Bank of England's charter in the 1600s, the idea that banks shouldn't mix financing and commercial activity. So that idea is really old. In 1933 we codify it in Glass-Steagall, and we say that's a thing that shouldn't happen. Over time, Glass-Steagall gets weakened through regulations and court opinions, and it formally gets repealed in the 1990s through the GLBA, a separate law that's not worth getting into. Then 2008 happens, and there's a lot of discussion around whether we should make big structural shifts again, like we did after the Great Depression when we separated banking activities. Instead we took a more technocratic path. That's the setup I'm looking at here: we didn't do Glass-Steagall. We did Dodd-Frank, which was a number of things, one of which was a system to create a listing of which entities are systemically important. Now, the one caveat is that we did create the CFPB, the Consumer Financial Protection Bureau.
And that's one of the ideas I think about here. That happened in part because then-Professor Warren wrote a journal article about the need for a consumer protection agency before the crisis. But the big structural ideas, the ones you hear about like postal banking or other universal banking ideas that could create new models in the economy and in financial services, a lot of those discussions happened five years after the crisis. It's the kind of thing where, if you had them on the shelf at the time, they'd at least be part of the debate around how to respond.
A
Let's get into a few of these policies that you think Congress should consider right now. The first one: you want to stop financial engineering. There are enormous sums of cash, particularly around the hyperscalers, but also outside them in the broader data center investment, and a lot of that is more, you know, debt-oriented. You point to this other thing that I think people have been very curious about. At least on my social media feed, I've seen multiple depictions of this circular equity financing that seem to really capture people's attention or imagination for exactly what's going on here: all the complicated ownership schemes and debt and loan guarantees, et cetera. How do we get this piece of the house in order?
B
Yeah, so let's take equity, which is investing in and owning parts of a company, and debt as two separate things here. On the equity side of the house, companies investing in other companies is not new. Companies giving financing to their customers is not new. What is new, at this scale, is companies investing in their customers, where you have unprofitable customers that are getting money to buy from their vendor via an investment from that vendor. The problem is that it obscures the actual business incentives that could exist in the market. A company is basically saying: here's money to buy my product. That messes with prices, that messes with demand, it messes with all types of questions about what's going on. As far as I can tell, we've never had circular equity investing at this scale, ever. There are one-off examples of this happening in individual companies or in roll-ups of some minor industry, but nothing like this. So my view is, if we've never seen anything like this, and it's basically pumping up the markets we're worried about, we should put a halt to it. On the debt side of the house, the big issue is opacity. Many of the hyperscalers are big tech firms that really got their rise as asset-light companies that didn't have to take out massive loans all the time. That has changed. They're all now borrowing a ton of money in the corporate debt markets. The modern corporate bond market comes about in the 1870s during the railroad bubble. We're now better about that market: we have transparency rules, we understand how it works. But a lot of this has moved beyond the traditional corporate bond markets to a thing called private credit, which is way more opaque. The name's a little bit funny, though.
It shouldn't be called private credit, because a lot of its investors are your 401(k), your IRA, your life insurance plan, your parents' pension. That's where a lot of that money gets invested, and so it has a public impact. When you think about Facebook, we hear a lot about their big Louisiana data center, the Meta data center. That is a $27 billion facility that is not on Facebook's books. It is not a loan that they took out. It's not money that they're paying directly. It is a special purpose vehicle, an SPV, a distinct LLC that receives money from a private credit loan. So we don't know the details of any of this until it gets reported. We don't know how big the problem is. Someone the other day asked me how much of the data center build-out is in private credit. I don't know the answer, and I've been paying attention to this pretty closely. I have guesses, but that's not useful. We actually don't have a sense of the scale of the problem. So all of this needs to come into the spotlight. We need to know the details of how much of this is going on so that we can understand risk.
A
Well, you brought up the data centers piece of it. I've also been tracking this very closely, trying to collect news reporting from across the entire country on data center infrastructure development over the last couple of years. And it's been extraordinary to watch the reporting, the curiosity and the pushback evolve around what would appear, if you were standing on the moon, to be one of the largest infrastructure projects we've ever engaged in. But you talk about this idea of distortive government subsidies. You've just now talked about the extent to which the companies may be holding some of these things off balance sheet, making it harder to see them through that mechanism. But it's also really hard to understand the scale of even the public investment at this point. A lot of that's speculative too, right?
B
At the state level, we have a race to the bottom where, depending on how you count it, somewhere between 30 and 40 states have a tax subsidy, a tax break for the construction of data centers. Virginia famously has a pretty big one. And you hear folks at the county level, a different governmental entity within a state, get even more excited about data centers because of the tax revenues, at least until recently, when people started showing up to meetings to push back. But one of the things I try to think about is that the tax revenues coming into a county government are money going into one hand while we're losing it from the other. Loudoun County makes $1.4 billion a year in tax revenues from data centers. The state of Virginia leaves $2 billion on the table. Those are the same taxpayers: the people who live in Loudoun County also pay state taxes, and they also benefit from the state. So that's the thing I'm trying to watch. Loudoun County has figured out a mechanism to get benefits from data centers, and even there you have a lot of pushback. And I think all of the people pushing back have legitimate concerns in their lives when they tell their governments they don't want this. So there's a whole host of issues happening around data centers. The one I focus on here is that the tax breaks at the state level have gotten, in my view, out of control. In this race to the bottom, the state of Missouri says that if you invest $25 million and create 10 jobs, you are exempt from all state and local sales and use tax. That is not a lot of jobs. My parents own a small business, an ice cream store in San Antonio, Texas. They employ eight people. So they're inching up toward that eligibility threshold, which is crazy. That's not the kind of capital investment or economic development policy that we traditionally chase with 100% tax breaks.
A
Well, one of the other ideas you have here is what to do if in fact some of these data centers end up as stranded assets. I've even seen interesting ideas from some artists along the lines of: if you end up with data centers that are defunct or stranded, what else can we do with them? Turn them into community centers, or otherwise take advantage of them for housing, or whatever. But you're talking about a public cloud, maybe, as a more realistic near-term opportunity.
B
Yeah. The way I think about this is that these special purpose vehicles, these one-off LLCs set up just to manage a data center, and also the rising neoclouds, which I know you all have published on, companies that aren't the hyperscalers, are often extremely levered, extremely indebted relative to their assets, and were often previously crypto mining companies. They own a lot of the data centers that are going up today. Those are the types of entities I worry most will go under. And if they do, if they get into various stages of financial insolvency, I think it would be interesting for us to have public cloud infrastructure. There are hard things you'd need to do to get there, but we have mechanisms to manage this. The Department of Energy and the national lab system manage data centers; they have their own data centers. The National Science Foundation has NAIRR, which is a layer on top of the cloud that tries to allocate cloud resources to researchers who want to use AI for research, so they know how to allocate compute resources as well. That's where my mind goes: this could be a way for the government to get, on the cheap, a resource that would actually make for useful public infrastructure.
A
I guess going along with that is the idea of sustaining AI R&D for public purposes. You have a section on protecting workers, the extent to which, should there be a crash, we need to think about what it means for workers. There are lots of folks invested in the boom as well who also think there are things we need to do to prepare for the impact of AI on workers either way. And I wonder to some extent whether some of your ideas here almost make sense either way. You have this item, for instance, on ending workplace surveillance. What do we need to do to protect workers in the case of an AI crash?
B
Yeah. If there's a financial crisis caused by overinvestment in AI, job loss to me will be both an effect of and a scapegoat for those overinvestments. You could see job losses perpetuating because companies will want to continue to invest in AI, or use AI where it's profitable, and getting rid of labor will be seen as possible even in instances where it's not as effective. That's my scapegoat comment. You see companies today laying off people and saying it's because they think AI will do their work. That's not really saying AI took their jobs; it's saying AI sounds like a good reason to lay people off. So my view is that no matter how we get there, we might see job losses, and the three categories I think of are these. First, do the traditional things: expand unemployment insurance, get rid of work requirements for the social safety net, things we do during every financial crisis. We should do that again. Second, if the job losses are big enough, I think we need something like a digital Works Progress Administration. During the New Deal, we created the Works Progress Administration, which ended up putting 8 million people to work; this was the government hiring people and putting them to work for public purposes. We've known for years that state and local governments, and even the federal government, have a shortage of tech workers, and that applies beyond tech to knowledge work. If knowledge workers, and in particular coders, are the ones who lose their jobs, as a lot of the press suggests, we should put them to work for the public purposes we have. Third, for the remaining jobs, the thing I worry about is that companies will try to squeeze as much profit out of them as possible. We've seen this happen in trucking, where for 15 years we've been talking about automation leading to trucking jobs disappearing. That hasn't happened. But what did happen is that all of those truckers are now over-surveilled, and in surveys all of them talk about how bad it's become. So we should get rid of that mechanism for worker surveillance, to empower people again.
A
The other thing you're suggesting here is utility-style regulation, and maybe even a new digital regulator: that out of this crisis we should come away with something durable that allows us, perhaps, to contain the tech industry going forward.
B
The way to think about this is: do we have the state capacity to handle the regulatory interventions that are going to be necessary, that are already necessary? Some of the things I talk about in the rest of the report, like utility-style regulation for what are effectively digital utilities, markets with a high cost of entry where you're not going to see tons and tons of competitors, are means to ends. For that kind of administration you need an entity to manage it, and that's why you need a new regulator. And this is also where, in my view, in utility markets you should think carefully about which regulatory interventions are necessary; I don't propose throwing it all at the wall and doing every kind of utility regulation for every market. But the one market structure idea that, to return to our conversation on Glass-Steagall, I actually think would be really important right now is separating the hardware and the software. You have speculation in one market, which is AI models, leading to overinvestment in another, which is data centers. Separating those out would bring a little bit of discipline, because each side could then use market mechanisms to think about supply and demand, and about whether any one customer is overinvested or not. So if you own data centers, or chips, or chip fabs, or cloud computing, which is the necessary part of data centers, then you can't also be the one training AI models. You have to separate those businesses out. That's a quick view of some of the regulatory parts of the paper.
A
Let me just press you on that one. How would that work in practice? Like who would have to be taken apart in your view?
B
Yeah. For a number of years, going back to the conversations that the antitrust subcommittee in the House really kicked off in earnest in 2019 and 2020, there has been a discussion about this. What's interesting to me is the financial professionals who say: look at Amazon as two parts of a business. There's Amazon.com, a retailer that's online, and there's AWS, the web services arm that also owns data centers, has its own chip design business, is an energy wholesaler, et cetera. A lot of financial folks want to separate those out because of the multiples you'd get; the way you would invest in those is really as two very different industries, two very different businesses. So that's the easiest one to talk about, because it's an example we've known about, where separating the parts out could bring financial benefits to investors, and market discipline in a lot of other ways. Now, it does get complicated in some places, but it's not so complicated that you can't do it. Nvidia, the big chip designer, has a lot of investments in AI companies up and down the stack, everything from almost every independent model maker to the applications, to data companies, to cloud and neocloud companies. That's where things get complex. When we see all these almost artistic depictions of how complicated the AI stack has become, my favorite is the one depicting it as a plate of spaghetti, Nvidia's investments are a lot of what makes that stuff messy. But we can actually deal with those. We know how to undo an investment. We know how companies can become independent.
A
So there are several other ideas in here, but I want to make sure I call out one really novel one, something that seems almost impossible here in the United States: just prosecute fraud.
B
Almost every financial crash we've seen in our country's history has involved some degree of fraud: accounting gimmicks, or investment fraud, or banking fraud of some kind. And you'll recall that in 2008 it became a rallying cry that nobody went to prison, that there was no accountability. That feeling, that political backlash, is very real. It's not strictly true, one mid-level banker did go to prison, but the populace didn't feel like there was accountability for what happened and what everybody else paid for. Look, I'm not alleging that there's criminal fraud happening now. And those who do prosecute have to stick within the bounds of the prosecutors' handbooks they get, which are about being fair in how we enforce laws. All that to say: besides 2008, in every other financial crisis we've seen, people went to prison for that kind of fraud. Around the dot com bubble, the Enron and WorldCom CEOs went to jail. During the savings and loan crisis of the 1980s, hundreds of people went to jail. Even with the small business loans during COVID, where we put out a bunch of money for small businesses to get loans and grants, hundreds of people have gone to prison for that kind of fraud. So all I'm saying is that shouldn't be off the table as part of the reaction we have after a crash.
A
A lot of these ideas seem interesting, even attractive to me on some level, yet they also seem like wishful thinking in the current political environment. What are you thinking about the near-term possibility? Are you still imagining that perhaps a rational Congress might set out to try to solve some problems, as opposed to necessarily doing the bidding of the industry?
B
The theory behind this paper, why I wrote it and wanted to work on it, is that politics change drastically during a crisis. And so while, yes, I agree with you, Justin, it has been frustrating for me personally how slowly we've moved on tech policy issues. I have written many bills for members that went nowhere. That said, during a crisis we have the opportunity to do things that we've needed to do for a long time. My hope, my desire, is that we take advantage of that opportunity to do the things that are in the public interest, and not just what industry says is the right thing to do.
A
And do you have some interest from Capitol Hill, anybody paying attention to this report?
B
We have received a lot of interest from Capitol Hill. I've been taking a lot of calls from staffers in the House and Senate. And in just a little over a week from when this will air, we're going to be hosting Senator Elizabeth Warren for a discussion about this exact topic: we think there's a looming AI crisis. Look, I don't make a prediction on when this will happen, but I do say we need to be ready, given the things that are happening in those markets, and policymakers need to start debating these ideas now. So we're pleased that we'll be hosting Senator Warren to have that discussion. It'll be in D.C., but hosted by Vanderbilt.
A
Asad, thank you so much for speaking to me about this report. I look forward to seeing what happens to these ideas. I can't say I'm hoping for a crisis, but I certainly recognize what you say, which is that it would be a terrible thing to waste should it come about.
B
I also am not hoping for one, but we should be ready if it happens. Thank you for having me and thanks for taking the time.
A
That's it for this episode. I hope you'll send your feedback. You can write to me at justin@techpolicy.press. Thanks to my guest, thanks to my co-founder, Brian Jones, and thank you for listening. Tech Policy Press.
Date: April 12, 2026
Host: Justin Hendricks
Guest: Asad Ramzan Ali (Director of AI & Tech Policy, Vanderbilt Policy Accelerator)
This episode delves into the possibility of an AI-fueled economic bubble—and what policymakers should do if (or when) it bursts. Justin Hendricks speaks with Asad Ramzan Ali, who authored the report “After the AI Crash” at the Vanderbilt Policy Accelerator. The discussion covers warning signs of a bubble, lessons from previous financial crises, and a suite of policy interventions—from banning financial engineering tricks to utility-style regulation and new digital regulators. The tone is candid, deeply analytical, and urgent, with both concern for risks and hope for reform should the opportunity arise.
| Theme | Policy Ideas/Insights | Timestamps |
|------------------------------|--------------------------------------------------------|---------------|
| Is AI a bubble? | Investment/revenue mismatch; crash plausible | 01:42–04:29 |
| Lessons from past crises | Pre-plan reforms; use crisis as policy opportunity | 05:32–12:02 |
| Financial engineering | Ban circular equity; transparency in private credit | 12:47–15:43 |
| Data center subsidies | Re-evaluate tax breaks; stop “race to the bottom” | 15:43–18:27 |
| Stranded assets | Public cloud, re-purpose infrastructure | 18:27–20:07 |
| Worker protections | Strengthen safety nets; ban surveillance; WPA for tech | 20:07–22:48 |
| Utility-style regulation | New digital regulator, separate hardware/software | 22:48–26:17 |
| Fraud/accountability | Prosecute financial/fraud violations post-crash | 26:17–27:55 |
| Politics & reform moment | Crisis creates windows for bold action | 27:55–29:52 |
This episode serves as an urgent, clear-eyed roadmap for policymakers, industry observers, and the public, outlining not only the risks of an AI financial bubble, but also bold reforms to avoid repeating past mistakes. Ali argues for immediate groundwork—so that if a crash comes, society is ready to enact truly structural change for the tech sector, workers, and the broader public interest.
For listeners:
If you care about the intersection of technology, democracy, and economics, this episode summarizes both the risks ahead and the policies that could turn a future AI crash into an inflection point for meaningful reform.