
Katherine Boyle speaks with Sarah Rogers, Under Secretary for Public Diplomacy, about the intersection of AI, free speech, and global information systems. They discuss how major technological shifts, from the printing press to the internet to AI, have reshaped communication and power, and why this moment may be even more consequential. Recorded at the a16z American Dynamism Summit, the conversation explores the role of public diplomacy in the digital age, the risks of censorship and overregulation, and how governments are approaching AI as both a national security priority and a platform for global influence. Rogers also highlights the importance of maintaining “AI with a Western soul,” and why preserving open systems and freedom of expression will shape the future of innovation.
A
AI is going to be more important, not less important. And so the proliferation of a Western AI stack should be a top priority for anyone who cares about freedom.
B
The rules around AI are changing fast. There's a lot of regulation abroad around digital safety and misinformation that's in some ways becoming the petri dish for lobbying groups and organizations in America to bring that here.
A
I think when we send signals in this policy domain, they should be signals that are consistent with free speech. The economist Tyler Cowen talks about AI with a Western soul. And I completely agree with him that that is the greatest soft power tool we can possess.
B
How can the US Government encourage the private sector to encourage free speech at their companies, even if they don't have the ability to buy a company like Elon does and put his fingerprint on it?
A
I think we need to.
C
Every major communications technology has produced the same instinct: control it before it controls you. The printing press brought fears of heresy. The Internet gave rise to a disinformation apparatus funded in part by the US Government itself. Sarah B. Rogers inherited that apparatus. As Under Secretary of State for Public Diplomacy, she took over an office that had been submitting content removal requests to platforms like Twitter and Meta and funding NGOs to determine what Americans were allowed to see online. She is now running the opposite operation. Her argument is also strategic: economist Tyler Cowen has written about AI with a Western soul, AI that reasons individualistically, prioritizes user consent, and operates on rules-based principles. Rogers believes the proliferation of that AI stack is the most important soft power tool the United States possesses. This conversation with Sarah B. Rogers, Under Secretary of State for Public Diplomacy, was recorded at the a16z American Dynamism Summit in Washington, D.C.
A
Please welcome to the stage Katherine Boyle and Sarah Rogers.
B
Under Secretary Rogers, it is an honor to have you here, as you've been, I would argue, one of the most vocal proponents of free speech and digital freedom in this administration. Free speech and anti-censorship are arguably, I would say, the most important American battles of our time. Now, a lot of people do not know how free speech and these battles are linked to public diplomacy in the State Department. So to start, what is public diplomacy?
A
So one of my favorite parts of getting this nomination was watching all my friends and family wait a polite beat to ask that question after congratulating me. So when we think of diplomacy, we're ordinarily thinking about the relationship between the American government and foreign governments. Two ambassadors shake hands, strike a minerals deal. That's diplomacy. Public diplomacy is different. It is my privilege and charge to lead the relationship between the American government and the foreign public. And so that includes things like educational and cultural exchanges, like the Fulbright program. It includes our fast-twitch media response assets in global public affairs. And more relevantly than ever, it includes our engagement with the information environment, the backdrop, the operating system on which these conversations run. Under the prior administration, that included things like the Global Engagement Center's censorship efforts that were the subject of the Murthy Supreme Court litigation, where the State Department, along with other organs of the executive branch, would contact Twitter or Meta and say, we think that these Charlie Kirk tweets are disinformation, you should take them down. That apparatus resided inside my part of the State Department. And in the State Department's reorganization, I acquired the Digital Freedom Office, which is, under my tenure, basically the opposite of that. So I'm pursuing transparency, truth, and reconciliation on prior censorship, and I am making freedom of expression a primary prong of our public diplomacy.
B
Yeah, and it's definitely been something to watch. I mean, we live in a world where the Internet mostly works. In America, you can text, you can post, you can send memes, you can criticize our government. But in a lot of countries, that is just not the case, as we're seeing in a number of surprising countries as well. When the US says it supports Internet freedom abroad, and you've been very vocal about this even in places like Europe, what does it mean in practice?
A
So this has gone through different permutations over time. If you think back to the era of the Arab Spring and Occupy Wall Street, I think the foreign policy establishment was really excited about Internet freedom because they saw, correctly, that it made conversation more disintermediated. It enabled a populist periphery to kind of challenge legacy authorities. And that was generally good for openness and freedom, and generally bad for entrenched, closed, controlling institutions. And then after the Arab Spring, I think people got nervous that there might be an American spring, and in 2016, Trump and Brexit brought a lot of those anxieties to the fore. And you saw efforts to transform a lot of government Internet freedom and digital diplomacy initiatives into kind of, quote unquote, disinformation curation, the suppression of what they called adverse narratives. And getting inside this apparatus and seeing it firsthand, what you really see is that a lot of people involved in these efforts had pro-social intentions. They wanted an ecosystem where people had more access to true information, where people were less likely to be misled by adversary information operations, which are a real thing. I mean, especially in the age of AI, we are seeing foreign adversaries intentionally introduce malicious, false propaganda into our information arteries. And to combat that, I think people just kind of went overboard. And we see this in every era where technology drastically changes and the structure of human communication changes. Like when the telegraph was invented, people were worried that it was gonna diminish everyone's attention span. When the printing press was invented: all these heretics are gonna be able to print their own Bibles, what are we gonna do about it? And we're living through a moment that's on par with or exceeds any of those communications revolutions. 
So naturally, there is an urge to kind of put the innovation back in the bottle, or at least figure out how to control it so that legacy institutions can decide who it's for and what it does. And I think it's not consistent with American values or American interests to fetter technology that way. I think we want to use it to make people free.
B
Yeah, no, I'd love to dive into that more because digital freedom is a national security issue, especially with our adversaries. You touched on it where it's never been easier for adversarial nations to create disinformation and to flood our channels. What was the old mandate for digital freedom under the previous administration, and how have you changed it? What are the priorities for you?
A
Right, so the State Department has undergone a reorganization, and I've actually acquired new authorities in the National Defense Authorization Act to promote Internet freedom that my office didn't even have before. And so we had a digital freedom team that did some good work, and they're still doing it: combating malware and spyware, foreign cyber attacks. We are still doing that work. I think our Digital Freedom Office was involved in information integrity initiatives with some international organizations that did a mix of good things and bad things. I'm all for promoting content provenance, helping people determine, if you're looking at a piece of content online, where it came from, whether it's AI-generated, and whether it's true. We just want to empower users to do that rather than institute these kind of opaque, tyrannical choke points far upstream of where the user sees the information. These NGOs that are funded by the government that make decisions about what arguments about pediatric transgender medicine you should be allowed to see: that was the kind of thing that was countenanced before, and it's not countenanced now. And we are much friendlier to initiatives like censorship circumvention, VPNs, and initiatives like X's Community Notes that put that power in the hands of the users or the crowd in a way that's transparent.
B
Yeah. And we've talked on this stage today about the Project Maven moment as the turning point for tech and American dynamism. When you think about the EU and other governments and their current attacks on free speech, which I know you've been spearheading the response to for the administration, is there a Project Maven moment that you would point to?
A
So I love the touchstone of Project Maven, because I feel like most people in tech and in this room kind of think of it in a double-edged way. Project Maven catalyzed the awareness that tech innovation and national security were now one interest. And Project Maven also sparked some very ideologically driven employee revolts at Google. And I think both of those things are on everyone's mind now with the proliferation of AI. All of the smart money knows, everyone in this room knows, that AI is the next thing, and all of the policymakers know it too. The economist Tyler Cowen has this great phrase: he talks about AI with a Western soul. And I completely agree with him that that is the greatest soft power tool we can possess. AI that reasons in an individualistic way, a rules-based way, that prioritizes user consent, for example. Those are all Western principles. And that is going to be the underlying reasoning model on which so much of the world's communication and commerce runs. And so the proliferation of a Western AI stack is really a top priority for our entire administration. It should be a top priority for anyone who cares about freedom. You asked about a similar catalyzing moment with the EU and, let's say, foreign allied tech regulation. And I think one question I get when I engage on these freedom of expression issues in places like the EU and the UK is, why are you being so hard on us? You know, Russia and China censor the Internet. And the answer is, we are much harder on Russia and China. We treat these countries as adversaries in several key arenas. But when Russia and China censor the Internet, they just firewall it off. They don't purport to levy fines on American companies for allowing Americans to engage in First Amendment-protected speech on American political issues. The EU did that. 
I think one key moment, in August 2024, was when a now former European Commission official, Thierry Breton, sent a letter to Elon Musk threatening him with regulatory penalties if he aired an upcoming interview on X with then presidential candidate, former president, and now current President Trump. The interview hadn't even happened yet. So it wasn't like President Trump had said something the EU wanted to censor. It was more like, if you allow Donald Trump to speak on your platform, you will likely face regulatory liability in the EU. Then, in the same letter, Breton makes reference to another ongoing investigation against X that ostensibly, they say, had nothing to do with speech. This is the investigation that recently culminated in a €120 million fine, the investigation about, are your blue checks assigned in an authentic, truthful way? Is your algorithm transparent enough? And in that letter, he basically says, if you let President Trump speak, that is going to increase your exposure to adverse findings of liability in this other regulatory crackdown. So this is an issue I litigated on in my private practice before the Supreme Court, before I came into the administration. The kind of viewpoint-skewed enforcement of ostensibly content-neutral regulations is both insidious and inevitable when you have something that's this politically pitched. So I think the idea that these kind of European censorship laws, laws that make it illegal to insult a politician, as it is in Germany, for example, or illegal to blaspheme Islam, as some prosecutors argued recently in the UK, when you transpose those to a transnational Internet and then tell American companies that they can face fines of up to 6% of global revenue if they transgress those laws, you force us to have this international conversation. And the conversation that I've had in some contexts, like on X, is kind of confrontational and feisty. 
But there are other conversations that happen in other diplomatic contexts that I think have been constructive. And we have to have the conversation now, because digital space is going to be more important, not less important, for international relations and commerce. AI is going to be more important, not less important. And we need rules of the road that preserve that spirit of liberty and creation that enabled all the founders in this room to build what they did and made America the engine of innovation that it is.
B
Absolutely. I want to get more into that, because, yes, you've been very public. You've led some of the legal sanctions on some of these bureaucrats who did try to harm American companies. But as you said, the State Department is also engaging: Secretary Rubio went over and gave, I would say, a triumphant speech about the relationship between the EU and the US and how it needs to be forged so that we can jointly make sure that we are the leaders of the free world and that our adversaries, particularly on things like AI, don't take control. So how is that relationship going, in your opinion? What are some of the things that have changed in the last several months in terms of these tactics working to make sure we have Internet freedom?
A
I think we really value these alliances. And as Secretary Rubio underscored in Munich, we engage with Europe on these issues because we want our European allies to be safe, strong, and prosperous like us, not just so that they can defend themselves vis-à-vis NATO, but because we comprise one civilization and we have a lot of shared interests. And if you engage with Europeans bilaterally or multilaterally, they will all affirm, and I think a lot of them believe, that free expression is one of our shared interests. So as a lot of these regulatory actions kind of reach their final stages and we get the opportunity to negotiate and go back and forth on specifics, it's really my hope that constructive progress can be made, not to the point where we have identical speech regimes in all of these countries, but to the point where there's not an insidious and sweeping censorship contagion, as the House Judiciary GOP's recent investigation suggested.
B
Now, I love that you brought up Tyler Cowen's brilliant quote on AI with a Western soul. I think that's a very important and kind of pithy way to put it. But a lot of people in this room are building with AI right now, and the rules around AI are changing fast. There's a lot of regulation abroad around digital safety and misinformation that's in some ways becoming the petri dish for lobbying groups and organizations in America to bring that here. Maybe tell us what the most troubling legislation you're seeing is in different pockets of the world, and what tech should do to combat these sorts of speech laws.
A
I love that question. So I'll talk about the AI regulatory landscape, and then what tech should do is a whole separate, juicy question that I'd love to get into. So I think with AI, copyright is a big issue. There are kind of bones of the Internet that we take for granted, that are built into its structure because the Internet grew up in America. One of those is CDA 230. That's why all these platforms exist: because you can publish third-party content without being exposed to the same range of legal liabilities that a newspaper necessarily would for airing the same content. CDA 230 is one of those structural features of the Internet that has made it what it is. It is actually an artifact of American law, which a lot of people don't appreciate. Another is the fair use doctrine. So we've seen a lot of good rulings from courts that training AI is fair use. If you showed a class of kindergarteners a library of books and they learned from it, teaching that class of kindergarteners would be fair use. So if you pay for the same books and then use them to train an LLM, a lot of courts are saying that's also fair use, which is great. I think we are seeing regulatory temptations in other countries, including perhaps the EU, to treat copyright in a different way, which would be very difficult, even devastating. And similarly, on the IP front, there's a lot of interest in transparency, which is understandable. But if you force companies to disclose aspects of the AI that let foreign adversaries, for example, reverse engineer the model weights, you're really compromising the American and Western competitive edge. I also think that it would be very dangerous for AI to be subject to some of the content regulations that Europe currently imposes on what they call very large online platforms and very large online search engines. 
There's one frustrating aspect, as an American lawyer dealing with foreign laws: you expect statutes, especially statutes that would impose potential criminal liability or serious civil liability, to be very specific. And these statutes just kind of say you have to do risk assessments for all of these things. Risk assessments for hate speech, risk assessments for speech that could adversely affect civil discourse or hurt someone's well-being. And what does that mean? Does it mean the AI is too good and people use it too much and it hurts their well-being? And especially with an LLM, anyone who develops these things knows that you can be very careful and you can impose a lot of safety architecture, but it will emit unpredictable responses sometimes. And I've seen draft legislation that imposes strict criminal liability if the LLM is even capable of generating certain kinds of content. And I don't mean child pornography, I mean content that might well be protected under the First Amendment. And that kind of strict liability regime, one that degrades the CDA 230 protection layer and creates adverse incentives against creative training of models, I don't like to see it.
B
I know you've been watching the Department of War's negotiations with various LLM companies, and you're a lawyer, as are several of the under secretaries at State who've been very vocal on it. Maybe talk us through your views on contracting with the Department of War, and maybe more broadly how to think about AI and free speech and alignment in a national security context.
A
Look, I think we have several great AI companies in the United States, and I defer to lawyers at the Department of War and elsewhere on which of them meet certain statutory thresholds. But I think what is essential from a national security, national defense perspective is that AI keeps its Western soul, and that these really important debates, about what kill shot the autonomous weapon should take, or what the scope of data synthesis should be, happen in the way that they've always happened under our Constitution, which is in courts and on statehouse steps and in these crucibles of democratic deliberation. They shouldn't be subject to the fiat of Silicon Valley executives or of tech workers. You mentioned woke tech workers in a prior era; we have seen tech workers make decisions, for example, that it should not be permissible on Twitter to call a convicted sex offender a male, which he is. I mean, that was a decision that tech workers made, and they're entitled to that opinion. But the reason that we have these democratic deliberative bodies and processes that have served us so well for 250 years is so that we can have these courts, which we've crafted to be deliberative and careful, think seriously about questions like unlawful search and seizure, and what is too invasive, and what kind of surveillance should be allowed, and then write down in a principled, consistent way what positions we should abide by. And so maybe it's because I'm a lawyer, but I think rule of law needs to be a touchstone. And I think you've seen that reflected in some of the administration's positions.
B
Absolutely. And going back to something you said: we've referred a lot in this conversation to Twitter, now X. I would argue that Elon buying Twitter, displacing the trust and safety team there, and changing, I would say, the nature of free speech for a lot of the country was the most consequential moment. But it's not a playbook that can be easily replicated. Elon can do it, but not many companies can. So I guess, if you have some advice for the people in this room or the companies in this room: how can the US Government encourage the private sector to encourage free speech at their companies, even if they don't have the ability to buy a company like Elon does and sort of put his fingerprint on it?
A
So I can think of several ways, and I'm pursuing all of them as best I can, for the government to encourage private companies to favor free speech. One is just not to create regulatory cudgels that can be wielded in a capricious, arbitrary way, like we've seen in Europe with this blue check investigation, for example, or like we've seen in some of these debanking cases. So I represented the NRA in a prominent case where the regulations being enforced were ostensibly viewpoint-neutral ones, but there was strong evidence that they were being enforced disproportionately against banks that allowed pro-gun groups to contract for financial services. And we've seen that before. So we should have a regulatory environment that's crisp and principled, where it's always clear what you have to do and what you can do to comply with the law. That's one thing we can do. Another thing we can do: to the extent that we regulate tech companies, and there's going to be some regulation, there already is, as with any emerging industry, we should have regulations that favor viewpoint neutrality. And since 2020, and especially since 2024, we've really seen the tech industry come to the side of free speech in ways that it hadn't before, catalyzed significantly by Elon's purchase of X. And that's great. And I think a lot of founders are not only patriotic, but they have that kind of gray tribe freedom impulse in them. And so I think it's natural. But to the extent that there are incentives in our law favoring one kind of content moderation or another, we need to favor viewpoint neutrality. I use the phrase viewpoint neutrality deliberately because that's a concept out of First Amendment law. But that doesn't mean that founders shouldn't try, or shouldn't offer users the tools, to curate and navigate the information environment in other ways. 
So spam content, pornographic content will behave differently than other content. People will have more negative engagements with it. And if you offer people the ability to see less of that in their feeds, or to see less content with foreign provenance, for example, that's not viewpoint-based suppression. That is not a viewpoint-based distinction. And I think our regulations should be kinder to that kind of content moderation. And obviously, as government officials, we need to stand up for our companies and our industry when their interests and American political freedoms are threatened. So, to go back to that Thierry Breton letter: if the US Government threatened Le Monde or Vivendi, French platforms, for hosting an interview with Emmanuel Macron, the French government wouldn't stand for it, and we shouldn't stand for it. And that's what these sanctions signaled. And obviously we have a lot of foreign policy priorities in the administration, and these really are critical allies with whom we share so much. But I think when we send signals in this policy domain, they should be signals that are consistent with free speech.
B
Absolutely. Well, as we always say, we invest in and support the Second Amendment so that we can enjoy the First. Under Secretary Rogers, thank you so much for being here.
A
Thank you so much for having me.
C
Thanks for listening to this episode of the a16z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
The a16z Show
Episode: Sarah Rogers: Free Speech, AI Diplomacy, and What America Owes Its Allies
Date: May 4, 2026
Host: Andreessen Horowitz (Katherine Boyle, interviewer)
Guest: Sarah B. Rogers, Under Secretary of State for Public Diplomacy
This episode features an in-depth conversation with Sarah B. Rogers, Under Secretary of State for Public Diplomacy, focusing on the intersection of free speech, AI and technology regulation, and the United States’ diplomatic responsibilities in supporting digital freedom at home and abroad. Rogers reflects on the evolving landscape of public diplomacy, the challenges posed by foreign and allied regulations, and her philosophy and policy approach to ensuring that American values—especially free expression—are defended in the digital era.
[02:00 – 03:48]
Definition & Scope: Rogers contrasts traditional diplomacy (government to government) with public diplomacy, which involves the relationship between the US government and foreign publics. This includes educational/cultural exchanges (e.g., Fulbright program), rapid media response, and shaping the global information environment.
From Censorship to Digital Freedom:
Under previous administrations, her office was involved in content moderation and censorship (notably, the Murthy Supreme Court litigation). Now, she has reoriented toward transparency, reconciliation on prior government censorship efforts, and making freedom of expression a "primary prong" of public diplomacy.
[03:48 – 07:38]
Changing Views on Internet Freedom:
The US government initially saw internet freedom as a way to promote openness and challenge entrenched regimes (Arab Spring, Occupy Wall Street) but became wary after populist movements like Brexit and Trump’s election.
Efforts shifted from promoting open dialogue to combating perceived "disinformation," which led to suppression—sometimes overzealously—of certain narratives.
Government’s New Mandate:
Rogers now focuses on supporting transparency, user empowerment, and avoiding "opaque, tyrannical choke points" in information control. Her team champions tools like censorship circumvention, VPNs, and user-driven fact-checking (e.g., Community Notes).
[07:38 – 12:44]
Regulatory Clashes: Rogers details the growing tension between American free speech values and European regulations, which increasingly fine American companies for allowing protected speech.
- Example: In August 2024, the EU's Thierry Breton sought to penalize X for hosting a Trump interview, even invoking an ostensibly unrelated regulatory investigation to pressure compliance.
Diplomacy with Allies:
While Rogers is tough on adversaries, she emphasizes constructively engaging with allies (EU, UK) to avoid what she terms an “insidious...censorship contagion.”
- Quote (Rogers, 12:44):
"We want our European allies to be safe, strong, and prosperous like us...because we comprise one civilization..."
[13:43 – 17:32]
Concept: Rogers frequently invokes economist Tyler Cowen’s phrase "AI with a Western soul" as both a guiding principle and a form of soft power. This means building AI frameworks rooted in individualism, rule of law, and consent—foundational Western values.
Threats in Global Legislation:
She critiques foreign regulatory proposals, especially from the EU, that threaten to undermine US-style protections like CDA 230 and fair use. She warns that overseas “strict liability” regimes and ambiguous requirements for “risk assessments” could stifle creative AI development, hurt the US competitive edge, or force American companies to adopt censorship models counter to domestic norms.
Quote (Rogers, 14:13):
“There are bones of the Internet ... built into its structure because the Internet grew up in America ... CDA 230 is one of those structural features ... Another is the fair use doctrine...”
Quote (Rogers, 15:30):
“If you force companies to disclose aspects of the AI that let foreign adversaries reverse engineer the model weights, you're really compromising the American and Western competitive edge.”
[17:12 – 19:17]
Alignment with US Values:
Rogers insists that debates about the use of AI, especially in military and defense contexts, belong in courts and democratic institutions—not among executives or activist tech workers.
Emphasis on Rule of Law:
The US must ensure that AI (even lethal autonomous systems) operates within the bounds of American constitutional and legal principles.
[19:17 – 22:53]
Unique Role of Private Companies:
The conversation touches on Elon Musk’s purchase of Twitter/X as a singular (and not easily replicated) move to reorient trust and safety towards more maximalist free speech.
Government Actions:
Rogers outlines ways the US government can support free speech in tech:
Avoid Regulatory Abuse:
Laws and regulations must be clearly defined and immune to arbitrary, disproportionate enforcement. The government should resist pressure to use regulations as a “cudgel” for content-based discrimination.
Promote Viewpoint Neutrality:
Regulations should explicitly favor viewpoint-neutral moderation. Exceptions (e.g., for spam or porn) must be based on user experience, not political viewpoint.
Defend US Companies:
The government must support American companies when foreign governments threaten their political freedoms. She gives a hypothetical: If the French government punished French companies for hosting interviews with French politicians, the French would defend them—the US should do the same.
Quote (Rogers, 21:00):
“We should have a regulatory environment that's crisp and principled, where it's always clear what you have to do and what you can do to comply with the law.”
Quote (Rogers, 22:12):
“If the US Government...threatened Vivendi for hosting an interview with Emmanuel Macron, the French government wouldn't stand for it, and we shouldn't stand for it.”
AI as Diplomacy’s Soft Power:
“AI with a Western soul...that is going to be the underlying reasoning model on which so much of the world’s communication and commerce runs.” (Rogers, 08:00 & 13:43)
On the Shift from Censorship to Freedom:
“I'm pursuing transparency, truth and reconciliation on prior censorship, and I am making freedom of expression a primary prong of our public diplomacy.” (Rogers, 03:32)
EU’s Preemptive Censorship Attempt
“…if you allow Donald Trump to speak ...you will face regulatory liability, likely in the EU.” (Rogers, 08:55)
Rogers articulates a robust, principled case for why defending freedom of expression—especially in the age of AI—is not just a matter of values for America, but also a strategic diplomatic imperative. She champions a vision where the US leads by example, empowers users, and forges alliances without compromising foundational rights, even while navigating rising regulatory headwinds from abroad.
For full context on nuanced legal and diplomatic positions, listeners should refer to timestamps above for specific quotes and argumentation.