A
How can you free your team from time-consuming office tasks? Amazon Business empowers leaders to not only streamline purchasing, but better support their teams. Smart business buying tools enable buyers to find and purchase items fast so they can focus on strategy and growth. It's time to free up your teams and focus on your future. Learn more about the technology, insights, and support available at AmazonBusiness.com.
B
When you own your own business, you own every decision. Now own the card that rewards you for it. Chase Sapphire Reserve for Business is a powerful card that elevates your travel experience and offers premium benefits that can take your business to the next level. Sapphire Reserve for Business offers 8x points on all purchases through Chase Travel, 3x points on social media and search engine advertising, airport lounge access, and more. With over $2,500 in annual value, it's the card that gives back all you put in. Learn more at chase.com/reservebusiness. Chase for Business: make more of what's yours. Accounts subject to credit approval; restrictions and limitations apply. Cards are issued by JPMorgan Chase Bank, N.A., Member FDIC.
A
Hiscox Small Business Insurance knows there is no business like your business. Across America, over 600,000 small businesses, from accountants and architects to photographers and yoga instructors, look to Hiscox Insurance for protection. Find flexible coverage that adapts to the needs of your small business with a fast, easy online quote at hiscox.com. That's h-i-s-c-o-x dot com. There's no business like small business. Hiscox Small Business Insurance.
Bloomberg Audio Studios: Podcasts, Radio, News. This is Bloomberg Businessweek with Carol Massar and Tim Stenovec on Bloomberg Radio.
C
My AI detector is better than my colleagues because they were fooled by a video a couple days ago and I was like, that's AI.
D
Okay, well, that's exactly where I want to go with our next guest. Anu Bradford is the Henry L. Moses Professor of Law and International Organization at Columbia Law School. She was last on with us just over two years ago, when the Will Smith eating spaghetti test looked not realistic at all — just to show how far things have come. She's the author of several books, including most recently Digital Empires: The Global Battle to Regulate Technology, published back in 2023, and that's exactly where I want to start with you, Professor Bradford. I got an invite today to try out Sora from a friend of mine. I think he's a paying subscriber of OpenAI's ChatGPT, and that's why he has an invitation. And I was watching some of the videos that he made, and I'm thinking to myself, we are so cooked. Are we?
E
So thanks so much for having me, Tim and Emily. We may well be cooked, but the question is, what are we most worried about? I think there are many exciting developments. I think we're certainly more entertained in many ways. But at the same time, I think those who warned us early on were right: this is not the kind of AI revolution that we should be just leaving for the tech companies to govern, to manage. We do need governments involved, we need some guardrails, we need some regulation to make sure that these fast advances that we are witnessing are moving in the direction that we're comfortable with.
D
It seems like it's too late, though.
E
I don't think it's too late. I think we certainly are not at the point where we can say that AI is done. I think we will continue to see massive developments in the coming years. And AI governance and regulation are already on the radar of many lawmakers. Obviously the Europeans have been most proactive, as they usually are, and they have the AI Act, a comprehensive piece of legislation that is already in force; now it's a matter of implementing it effectively. And then we also need to see what happens in the United States at the state level. China is definitely interested in governing AI. We have many other jurisdictions. So I think there is a lot that is happening and a lot more that needs to be done.
C
In your book, Digital Empires, you break down how different regimes across the globe are governing and regulating AI differently. So there's the U.S., there's Europe, there's China. In your research, have you found that one nation is doing that balancing act the best so far? They don't want to stifle innovation, but they also want to protect users of AI, consumers of AI, companies that are getting involved with AI. Who's doing the balancing act the best, in your view?
E
So I think any regulator really needs to think about this balancing, to make sure that we harness the tremendous benefits that are associated with AI but also really safeguard our citizens and societies from various risks. In many ways, there is a perception that the Europeans are erring on the side of preemptively protecting against these risks, and maybe then forgoing some of the innovation benefits, whereas the Americans would be erring on the side of being very techno-optimist and not thinking about all those potential downsides. In many ways I do like and endorse the European model, in the sense that in my view it best safeguards the public interest and really takes seriously the fundamental rights of individuals and the democratic structures of society. But I really reject this notion that this comes at the cost of innovation. There definitely is a gap where the Americans are doing much better in generating AI innovations compared to the Europeans. But the reason is not that the Europeans are so keen on regulating. I think there are many other reasons that explain why: there are just fundamental pillars of the tech ecosystem in the US that are much stronger, and the Europeans have fallen short in replicating that. So regulation as such, the protection of those rights, is not a choice that needs to come at the cost of making beneficial progress in this space.
D
So, you know, I keep going back to the conversation that we had with you two years ago, because the world has changed so much since then. Two years ago, Joe Biden was president. There were a lot of folks who didn't think that Donald Trump would win another term. Fast forward two years: Donald Trump is the president, David Sacks is the crypto and AI czar, and this administration thinks about this completely differently than, I think it's fair to say, the Biden administration did. What do you think the US needs to be doing right now to regulate this technology? What would you like to see David Sacks do?
E
Yeah, so you're so right. There has been a complete U-turn in many ways. Towards the end of the Biden administration, there was closer alignment between the traditional transatlantic allies, where the US was really moving closer to the European view that technology like AI needs guardrails. And there was a genuine attempt to join forces among the world's techno-democracies in order to halt the advances of Chinese digital-authoritarian views of governing technology. So I really saw this potential for the US and the EU to join forces to bring about a very beneficial change in this space. But now the US is doing, I think, two things. First of all, it's giving a lot more power to the tech companies, walking away from regulation, embracing this deregulatory zeal that really reflects a very strong form of techno-libertarian, techno-optimist worldview. But in many ways, the US is also playing Beijing's game and becoming very state-driven. We see massive state investment in some of these leading tech companies. We see export controls, investment restrictions, subsidies. So the US is, to me, undermining some of its own goals. And if you think about how that will also impact the US's adamant goal of being a leader in AI, what is happening in the space of immigration, I think, is really counterproductive, if you think about where all those AI innovations come from. So what would the US need to do? First, the US would need to regulate this space. We need to make sure that fundamental rights are protected. We need to make sure that those societal risks are under control. And we need to, at the same time, make sure that we will continue to invest in the development of AI by retaining the world's best talent, which often is immigrant talent, including Chinese data scientists who have been contributing to advances in this space in the US.
C
Is there any specific regulation that comes to mind that you would want to see in the US, that would prevent this kind of idea that Tim presented at the beginning of the segment, the spaghetti test? It sounds silly, the Will Smith spaghetti test, but it gets at the heart...
D
I've got to show you the videos. It's crazy.
C
Yeah, the concern that people have that suddenly the Internet is going to be, you know, filled with these videos and we're not going to...
E
And fake.
D
It's the end. I'm sorry.
C
Well, maybe we can have the professor help suggest...
D
She said it's not. She said it's not the end already, which I'm grateful for.
C
It's not the end. Is there a specific piece of regulation that comes to mind? Is it about, I don't know, digital privacy? People needing disclaimers on top of every video that you see on the Internet?
E
So I think there are many aspects, and there's no easy way to say that you just need to do one thing in order to address this multitude of different harms. But it does start from the protection of privacy and our agency: our ability to tell what is fiction and what is not, and our ability to engage in conversation based on real information that is not manipulated by AI. Disinformation obviously existed even without ChatGPT-type tools, but it is now fueled by this AI-driven ability to manipulate our sense of reality. So in many ways, I think it does need labeling; it does need the kind of transparency and accountability where we have a sense of how these AI systems are built and how we engage with them. But then there are also risks around protecting content creators. We need to take copyright seriously, and the question of how you actually train these models with data that has been generated by individual authors, by journalists — and that needs to be compensated well so that we still have the incentive to engage in that kind of content production. But privacy is obviously very high on my list. Disinformation is very high on my list. Then there are questions that are more about existential risks, more about systemic risks. And even if it's hard to sometimes know the probabilities of some of the most severe risks and how likely they are to materialize, we still need to be prepared, as a society, to confront that kind of reality when AI advances really fast and we reach the point where we find it even harder to govern the technology. So I think there are all these layers, and we are not even really having, at least at the federal level, a real conversation about how we go about regulating this space.
D
Professor Anu Bradford, the Henry L. Moses Professor of Law and International Organization at Columbia Law School. She's the author of several books, including her most recent, Digital Empires: The Global Battle to Regulate Technology, published in 2023, but as relevant right now as it was two years ago.
F
This podcast is brought to you by FedEx: The New Power Move. Hey, you know those people in your office who are always pulling old-school corporate power moves? Like the guy who weaponizes eye contact: he's confident, he's engaged, he's often creepy. It's an old-school power move. But this alpha-dog laser gaze won't keep your supply chain moving across borders. The real power move? Having a smart platform that keeps up with the changing trade landscape. That's why smart businesses partner with FedEx and use the power of digital intelligence to navigate around supply chain issues before they happen. Set your sights on something that will actually improve your business. FedEx: The New Power Move.
G
Hey, Ryan Reynolds here from Mint Mobile. Now I don't know if you've heard, but Mint's premium wireless is $15 a month. But I'd like to offer one other perk: we have no stores. That means no small talk. "Crazy weather we're having." No, it's not. It's just weather. It is an introvert's dream. Give it a try at mintmobile.com/switch. Upfront payment of $45 for a 3-month plan, $15 per month equivalent, required. New customer offer for first 3 months only, then full-price plan options available. Taxes and fees extra. See mintmobile.com.
H
Wishing the holidays could come early? If you own or manage your business, they can, with help from iHeartRadio. People are already shopping for their loved ones and hunting for deals wherever they can find them, including right here. They're listening to the radio. They're listening to podcasts. They could be listening to you. Don't wait for everyone else to kick off the holidays. Get your best season of the year up and running today. Call 844-844-iHeart or visit iheartadvertising.com.
Episode: The Future of Tech Governance Around the Globe
Date: October 20, 2025
Hosts: Carol Massar, Tim Stenovec
Key Guest: Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School; Author of "Digital Empires: The Global Battle to Regulate Technology"
This episode tackles the rapidly evolving world of artificial intelligence and global technology governance. The conversation with Professor Anu Bradford delves into how the U.S., Europe, and China navigate the delicate balance between fostering innovation and instituting regulatory guardrails. The episode grapples with questions of AI’s societal impact, the risks of unchecked development, the need for government intervention, and cross-national regulatory philosophies. A particular focus is given to policy responses, the current U.S. political shift, and concrete regulatory priorities like privacy, disinformation, and transparency.
"My AI detector is better than my colleagues' because they were fooled by a video a couple days ago and I was like, that's AI."
— Host
"There are many exciting developments... but we do need governments involved, we need some guardrails, we need some regulation to make sure that these fast advances that we are witnessing are moving in the direction that we're comfortable with."
— Anu Bradford
"I don't think it's too late. I think we certainly are not at the point where we can say that AI is done. We will continue to see massive developments... there is a lot that is happening and a lot more that needs to be done."
— Anu Bradford
"I do like and endorse the European model... that best safeguards the public interest and really takes seriously the fundamental rights of individuals and democratic structures of the society."
— Anu Bradford
"There has been a complete U turn in many ways. Towards the end of the Biden administration, there was closer alignment between the traditional transatlantic allies... But now the US is doing... two things. It's giving a lot more power to the tech companies and walking away from regulation, embracing this deregulatory zeal..."
— Anu Bradford
"It does start from the protection of privacy and our agency and our ability to be able to tell what is fiction and what is not and our ability to engage in conversation based on real information that is not manipulated by AI."
— Anu Bradford
"Disinformation is very high on my list. Then there are questions that are more about existential risks, more about systemic risks... we still need to be prepared to also, as a society, to confront that kind of reality when AI advances really fast..."
— Anu Bradford
"We are not even really having, at least at the federal level, a real conversation about how we go about regulating the space."
— Anu Bradford
This episode is essential listening for anyone interested in the future of technology and policy. The discussion offers a clear-eyed assessment of how the pace of AI progress is outstripping public policy—yet asserts it's not too late for governments to provide necessary safeguards. Anu Bradford’s analysis provides nuanced, global perspectives on how the world’s major powers approach tech regulation, rejecting the idea that regulation and innovation are inherently at odds. The episode highlights key priorities—privacy, disinformation, copyright, and transparency—and urges that the U.S. take urgent steps in AI governance, even as federal action lags. If you want to understand the current state and critical next steps for managing the risks and rewards of AI on a global scale, this episode is a must.