
A
Hey there, agile adventurer, just a quick question.
B
What if, for the price of a.
A
Fancy coffee or half a pizza, you could unlock over 700 hours of the best agile content on the planet? That's audio, video, e-courses, books, presentations, everything you can think of. But you can also join live calls with world-class practitioners and hang out in a flame-war-free and AI-slop-free Slack with the sharpest minds in the game. Oh, and yes, you get direct access to me, Vasco, your Scrum Master Toolbox podcast host. No, this is not a drill. It's the Scrum Master Toolbox membership, and it's your unfair advantage in the agile world. So if you want to know more, go check out scrummastertoolbox.org/membership. That's scrummastertoolbox.org/membership. And check out all the goodies we have for you. Do it now. But if you're not doing it now, let's listen to the podcast.
B
Hello everybody. Welcome to this very special bonus episode. It's about a topic that at least I feel very strongly about; let's call it, simply, saving democracy. And we have with us to talk about this very important topic Anthony Vinci. Hey, Anthony, welcome to the show.
C
Hey, thanks so much for having me.
B
So let me tell you a little bit about Anthony. He's a former CTO and Associate Director at the National Geospatial-Intelligence Agency, that's NGA, in the USA. And he's also the author of a very interesting book called The Fourth Intelligence Revolution; you can check it out in your favorite bookshop. Anthony has been at the front lines of modernizing the intelligence community for the age of AI. And in his new book, he lays out a stark warning, something we're going to explore a little bit today: we're entering an era where machines don't just augment intelligence, they are transforming it. But the real battlefield isn't just digital. It's also cognitive, economic, and societal. And I think that you will all agree by the time we're done with this episode that this really is an existential battle to save democracy. So, Anthony, first of all, thank you very much for having written the book and for being here on the show to share this critical message. Before we dive into the topic, which is a very meaty one, I should say, tell us a little bit more about your background.
C
Yeah, well, I have sort of flip-flopped in and out between technology, the tech industry, and intelligence. When I started my career, it was the last dot-com boom, and I was at a startup in New York around 2000, so it was a while ago. Then I went into intelligence from there and became a case officer, which is somebody who goes out and recruits sources, and I went to Iraq and places like that. But then I left government and started another tech startup in New York. And that was fun; we did the whole thing. We raised venture capital and we ultimately exited the company. Then I got an opportunity to go back into intelligence, and this time it was at NGA. What I was asked to do was to modernize that agency. At that time there were very few people who had been both an intelligence officer and done tech startup stuff, and that was what made me a little bit different. The director realized they needed to bring more tech into the agency, and in particular AI. This was the era of ImageNet and computer vision really starting to work, and that's what they needed at this agency. That's what I went there to do, and we can dive more into that. But that role was about being the chief technology officer, being the associate director, and bringing those technologies in. And then I left intelligence again, and now I'm running a tech startup again.
B
So, high level, what was that, let's call it renovation or innovation, at the NGA all about?
C
Well, NGA is like a lot of what you might think of as services companies, right? NGA, and intelligence in general, historically had been that: there were people who had a job, and they used technology, but the work was primarily done by people. What NGA does is analyze satellite imagery and geospatial intelligence. So if you can imagine, there are thousands of people who come to work every day and have to look at these satellite photos and determine if there's a threat. I don't know if you saw that movie A House of Dynamite, which is about a surprise missile launch. Well, these are the people who look out for that.
B
Right.
C
They come to work every day and they want to make sure that we know, as a nation and as the world, if there's going to be one of those launches. So they're literally sitting there looking for, you know, a missile on a launch pad in a country like North Korea or something like that. But historically they were doing it manually. They literally used to print out and develop film and look at it on a light table with magnifying glasses. By the time I got to NGA it was on computers, and they were zooming into these images. And then, Google Maps.
B
Right.
C
Almost like Google Maps, but before Google Maps, or Google Earth, right. And by the way, a little tidbit of history: Google Earth was built off of Keyhole, which was a technology funded by In-Q-Tel and NGA. So it all came from there, actually; it's an interesting history you can look up. But that's what they were doing. And the problem is that there's just too much imagery to look at. Like a lot of things in the world these days, there's just too much data. So when I came along, I was watching ImageNet and some of these computer vision technologies start to work, using machine learning in particular, and realized, hey, we can start to implement this into the workflow, and that would change everything about what we're doing. Little did I know when I started how hard that would be, because it's not just a technology problem. It's a training problem, it's a recruiting problem, it's integrating into the IT stack, it's integrating into what your customers were using, and the customer there, by the way, is a president or a four-star general, so they're not the most forgiving customer if something doesn't work. It took years, but it's now starting to actually happen. A recent NGA director mentioned publicly that they are now producing reports where no human hands have touched the report. So, you know, nine years later, it's starting to work.
B
So it's interesting that the problem you started tackling was based on vision using AI, because right now we're in the middle of another problem which is not necessarily related to vision, but to the amount of data that is now available. Even individuals today are literally flooded with thousands of pieces of information that are very hard to make sense of. I just want to give the example of scams, which is already a major problem in Western Europe and in the US today, where people are being bombarded with so much information that they can't actually distinguish the scams from the real messages. For example, Amazon package delivery scams, where people will extract information from you and then use it against you, for example to steal money from you. And we have a much bigger problem, because this information can now be collected at scale and used at scale. How do you see that from an intelligence perspective? Because it's not just about us defending ourselves individually; this starts to be a nation-level problem.
C
Yeah, well, one thought on it is that I think we are surpassing the point at which people can keep up with the data, period, no matter how much you process it and analyze it. For a lot of years, the intelligence community and companies and everyone were moving into this big data analytics world, where companies like Palantir, for example, would integrate data and figure out ways to compress it, to visualize it, to present it to a human being who would then do the analysis, as it were. And that goes for everything from figuring out how to track down a terrorist, if you're in the intelligence community, to these kinds of scam issues and so forth. I believe the world we're entering now is one where the machine, the AI, has to do the analysis itself, period. It never comes to a person. It just produces a finished piece of intelligence, as we would call it in the intelligence world, and that integrates directly into something. It might integrate into an operation in the intelligence community, or it might integrate into something like you're describing in this scam world: just simply not responding to what looks to be a scam email. And, you know, like a lot of tech guys, I'm obsessed with these kinds of problems, and I've been addressing them for years. Now I'm doing that in my new startup, where I'm building a system; our company is called Veeqo, and we are trying to fully automate intelligence and produce finished intelligence. We're going to launch it early next year.
B
So this is actually very interesting. I myself work in the cybersecurity world and have been working there for about three decades. One of the problems we are facing is exactly the same one you describe: the amount of information that is out there, which we need to be able to automatically process and transform into a user-digestible piece of information, or a suggestion or recommendation for action. When I look at it from my perspective, I mean, we're dealing with viruses, ransomware attacks, scams; of course you're dealing with other things. But when I look at it broadly, I see very clear trends, and I want to give just one very concrete example. Anyone who follows the cyber news is familiar with the idea of ransomware attacks being used, and this is my speculation, I want to be very clear about it, potentially by Russian, Chinese, and North Korean actors to finance black ops from their military establishments. Let's not forget that the ransomware, let's call it market size, for lack of a better word, is already bigger than some countries' economies. And I'm worried, because I work in this industry: how do we protect ourselves against this? We can clearly see that there's a massive collection of information, then digestion, and then translation into attacks that follow a, let's call it, kill chain. And it's individual, not just organizational; it's not just organizations that are the targets. How do we protect ourselves against that? You talk about that a little bit in the book as well. What are your thoughts?
C
I think that all of this is going to become automated, on both the offense side and the defense side. There was a recent research report that Anthropic put out where the Chinese had used Claude to automate a cyber espionage campaign, and the Anthropic researchers believed the attackers had automated 80 to 90% of the attack using Claude. That's just the first taste. I believe that in the near future, close to 100%, if not 100%, of cyber espionage and cyber hacking will be done in an automated fashion. It will literally be: click the button, decide your target, and go do it. And the only way to keep up with that is to automate back, right, to automate the defensive side. There's one example of where this is already happening in the world, and that's in finance. If you look at quant funds, these hedge funds that trade on the microsecond scale, they're completely machine driven, because there's no way for a human being to keep up, process all the data, and trade at that speed. And it's a competitive industry: your hedge fund is trading in an automated way, and so is the other guy, and the only way to compete with the other guy is to be automated. I think the same thing is going to happen in cyber, and ultimately I think the same thing is going to happen in the physical world with drones, and then in all intelligence.
B
Now, a lot of people listening to us might find that this is, you know, too complex, not really applicable to the day-to-day world. But in fact, you and I know that's not the case. There is a, let's call it, medium currently used not to hack organizations but to hack people's minds. We call it social media, and we think of it as a benevolent technology, not necessarily always positive, but benevolent. That's not really the case, and I wanted to dig into that a little bit more. How do you see social media becoming a potentially existential threat for democracy itself?
C
Yeah, in many ways it already is an existential threat. The way I would think about it is this: when we see these hacks of our information, or when we see a report that TikTok, which is a Chinese-owned social media platform, is collecting information, that information is potentially being used to target an advertisement or some sort of information operation at people. This is what happened in the 2016 election, where the IRA, the Internet Research Agency, which was a Russian intelligence-backed company, used platforms like Facebook to target ads at voters in America to change the election, to disrupt it. Well, now technology has progressed, and TikTok doesn't have to place an ad. What it can do is just influence someone through the algorithm, by presenting them information or not presenting them information. And that can be used to change people's views. This has been demonstrated by researchers. There's one group out of Rutgers that showed not only was TikTok censoring information, say about the Uyghurs, an ethnic minority group that's persecuted in China, or about Tiananmen Square, but also that the longer a user was on TikTok, the more they used it, the more benevolent a view of human rights in China that user had. So it's actually working. And it's so subtle you can't even see it unless you do these big statistical studies. Those kinds of things really threaten democracy. And AI controlled by an authoritarian state, like DeepSeek, is going to be even worse.
B
Yeah. In the book you say something to the effect of: AI will hack our minds in the pursuit of our adversaries' geopolitical goals. Do you have a specific example? What was that Rutgers study about? Can you share a little bit more?
C
Well, that Rutgers study showed, through volumetric statistical analysis, how information on TikTok was different from US-owned social media sites like Instagram, and that it was biased towards Chinese Communist Party geopolitical needs. For example, they want to treat Taiwan as part of mainland China. And it was working, like I said. But when I talk about hacking minds, I mean it more literally. The difference between social media sites and AI is that AI is a dialogue. AI becomes this arbiter of information. It's not one way; it's not just presenting you information. You're in a dialogue, so it presents you some information, you interact, you ask it questions, and then it references that and comes back to you. And this is really, really different when it comes to information operations. It's more like what I used to do as a case officer, where I'm trying to convince you of something, right? This AI will try to convince you. And there's a new study, which I would highly recommend looking at, in Science, and another one in Nature; they just came out very recently, and they showed that pre-trained AI systems can be much, much more politically persuasive than traditional advertising. What's interesting is they're not doing this by being persuasive in the way that we would think. They're not using persuasive rhetoric. What they're doing is providing an overwhelming amount of quote-unquote facts to the user to change their political opinion on something. The problem is, those quote-unquote facts are not always facts; sometimes the system is making up information. Now imagine: in the 2026 election in the US, the 2028 presidential election in the US, or any election globally, these AI systems are going to be used to try to sway voters' opinions.
And there is no doubt in my mind that politicians and political operators are going to use this. That may be fair within the system; I bet both sides, or all sides in a parliament, are going to do this. But now imagine these systems were hacked in some way, or are simply owned by an authoritarian state like China, which owns DeepSeek. Imagine these systems are hacked and information is poisoned in some way, so that they are now not politically persuading on behalf of a political party; they're politically persuading because of the geopolitical national security imperatives of the Chinese Communist Party, or of the Russian government, or of the Iranian government, or of, you know, pick your authoritarian government. They could then be used to persuade people to vote in a certain way, to disrupt an election, to do whatever is wanted in an information operation. That is a real threat to any democracy globally, and we have very little to stop it. And one more quick point: if you think this is hard, there was another study by Anthropic not long ago where they showed that using only 250 documents, they could poison a very large language model. So you don't need millions of documents and pieces of data to poison these models; with just hundreds, you can do that.
B
I guess there are case officers out there right now figuring out how to get those LLMs poisoned. This is scary. To be honest, I always knew that democracy was going to be challenged, especially after what happened in 2022 in Ukraine, and is still happening in Ukraine at the time of recording. But what I wasn't aware of is how important AI would be in that picture. Now, in the book, you call this the fourth intelligence revolution. What kind of solutions do you propose in the book? What should we be looking for, and of course preparing ourselves for, in this context?
C
This fourth intelligence revolution is driven by China, and the competition with China, as well as some other actors like Russia, and by AI. And that competition with China is about more than just traditional political and military competition. It's about economic competition; it's about science and technology competition. So the first thing that we need to do is to compete in intelligence in those fields as well. The view of intelligence that many of us have, including myself, you know, from the Cold War and James Bond and so on, was all about politics and the military. Now it needs to become about economics, about science, about technology. And doing that requires not just case officers like me. It means intelligence needs to work with private companies, with the public, and so forth, in a kind of public-private partnership. So that's the first task: to expand what intelligence does, but also to work with the rest of society. I call that a whole-of-society approach. The second thing is that we need to acknowledge AI and use it to automate intelligence across the board, like what we were talking about; that's the only way to compete. The third thing, and this is probably the most radical change, is that in this world, as we were just alluding to, intelligence affects everyone. Everyone is having their information collected by foreign intelligence agencies and adversaries, and then that information is being used for information operations against all of us. The only way to become resilient against that is to recognize that you can't just rely on governments to protect everyone; it's just overwhelming. And by the way, we don't necessarily want our intelligence agencies censoring what we see or getting involved. We don't want another Edward Snowden situation, where we all feel like we're being spied on by these agencies.
So we do need these agencies to help, just like you need police to help fight crime. But we all also need to become resilient on our own. That involves training and learning to think like an intelligence officer, protecting yourself from these information operations, and maybe even setting up a decentralized civilian intelligence agency of sorts, similar to Bellingcat.
B
Yeah, and that last piece is actually very important. Many of our listeners might not be aware, but I'm sure you know, Anthony, that there's a lot of open-source intelligence being published all the time, and especially since the beginning of the Ukrainian war, the invasion by Russia, that kind of exploded, with imagery and information, the famous, not TikTok, but Telegram channels, and all of that. But one thing you mentioned, which I think is very important from my perspective, having worked in the cybersecurity industry for about three decades now, is the idea of personal resilience to these operations. One of the things that is implicit in everything we discussed, but not really said out loud, is that we now have enough information to make information operations individualized. And that's one of the things people don't understand: when you're on TikTok or LinkedIn or Instagram or Facebook, you can be individually targeted with the set of beliefs most likely to persuade you, individually, to vote a certain way, whatever that way is. And let's not forget that we've done that to our own populations, for example the Cambridge Analytica scandal out of the UK influencing US elections. That's the thing: we have been trained to influence our own populations as well. It's not just foreign powers. And that means we really need to be very much aware. So what are some of the tips you have, Anthony, for those of us who want to be aware, who want to be prepared against these potentially individualized information operations?
C
I would start to think like an intelligence officer, to become sort of a citizen spy on your own. Here's how intelligence officers think; here are a few things that really anyone can learn to do. It's just about having the habit. One is to think about the threat, and to consider that there may be someone targeting you. We're used to thinking about this in terms of cybersecurity already; even children are trained to think like this now: hey, there might be somebody trying to hack into your computer, so you need to change your password, you need to not click on phishing emails. Well, now we need to start thinking about information in general that way, and think to yourself: there might be somebody targeting me, me as an individual, not just people in general, and I need to start to be careful, to look at things, and to think about ways to protect myself. And there are things you can do. One is to think like an analyst. An intelligence analyst always triangulates information. They never take one piece of information and call it fact. No matter how trusted the source, they're always going to look at another source, right? Well, you can do the same thing. No matter how much you trust, say, a single newspaper or other information outlet, go check another one, and preferably, by the way, check one that's the total opposite politically. If you read the New York Times, go read Newsmax, or vice versa. If they both say the same thing, that probably means it's true, or more true, right? Or even go check the South China Morning Post, a Chinese newspaper, or something like that. So, you triangulate information. Another thing you can do, and I bet a lot of people listening already sort of think this way: you know Q from James Bond, the technology officer? There are real people who do that.
There are real science and technology officers in these intelligence agencies. And the first thing those people do when there's a new piece of technology is assess the risk in that technology. They figure out who made it, how it was made, and whether there are potential threats in it. If you just take that mindset, take a minute before you start using that new app or new social media system, whatever it is, and do a little research, usually this is open information, asking who made it and whether there might be a threat here, that will do wonders for your security. It doesn't mean you don't use it. The way intelligence officers think about it is that we calculate risk and we try to mitigate that risk. It doesn't mean I'm not going to use TikTok. It means I'm going to go in knowing that there's a risk and start to think about how I'm going to mitigate it. So, for example, maybe I don't share certain information on it. Maybe I use a VPN. Maybe I purposely set up multiple accounts so that all my information is not centralized. There are ways to mitigate risk once you start to think about technology as a risk itself that has to be mitigated. Those are some of the ways you can start to think like an intelligence officer and protect yourself.
B
So, to end, I think it's important to name the threat. And right now we could say that when we think about saving democracy, the threat is that somebody will be capable of influencing our views to go against our interests, right? The ability to persuade us to vote against our own interests. So we need to name the threat and then, as you said, start thinking like an intelligence officer. The book is The Fourth Intelligence Revolution. Be sure to read it, or maybe even offer it as a special present to someone who is interested and wants to know about these things. Saving democracy is definitely worth the investment and the attention. Anthony, it's been a pleasure to have you on the show. Before we go, where can people find out more about you and the work that you're doing?
C
I have a website, anthonyvinci.com, that's a-n-t-h-o-n-y-v-i-n-c-i dot com, and I have a Substack called Three Kinds of Intelligence. And then you can also read the book, The Fourth Intelligence Revolution: The Future of Espionage and the Battle to Save America. Thanks so much for having me on. I appreciate it.
B
It's been a pleasure, Anthony. Thank you for your generosity with your time and your knowledge.
C
Thanks.
A
Alright, I hope you liked this episode, but before you hit next episode, here's the deal. This podcast is powered by people like you: the members who wanted more than just inspiration. They wanted real tools and a real connection to people who are practicing agile every day. We're talking access to over 700 hours of agile gold: CTO-level strategy talks, summit keynotes, live workshops, e-courses, deep-dive interviews, books. And if you're into No Estimates, we've got the pioneers of No Estimates in those deep-dive interviews as well. Agile business intelligence, creating product visions, coaching-your-product-owner courses, you name it. You'll get invites to monthly live Q&As with agile pioneers and practitioners, plus a private Slack community which is free of.
B
All of that AI slop you see everywhere.
A
And of course, without the flame wars. It's a community of practitioners that want to learn and thrive together, the best place to connect with a community and learn together. So if this podcast has helped you before, imagine what you will get from the membership. So head on over to scrummastertoolbox.org/membership and join the community that's shaping the future of agile. We have so much for you, so check out all the details at scrummastertoolbox.org. Because listening is great, it's important, but doing it together? That's next level. I'll see you in the community.
B
Slack. We really hope you liked our show. And if you did, why not rate this podcast on Stitcher or iTunes? Share this podcast and let other Scrum Masters know about this valuable resource for their work. Remember: sharing is caring.
Podcast: Scrum Master Toolbox Podcast: Agile storytelling from the trenches
Host: Vasco Duarte
Guest: Anthony Vinci, author of "The Fourth Intelligence Revolution", former CTO & Associate Director at the National Geospatial-Intelligence Agency (NGA)
Date: January 10, 2026
In this urgent, thought-provoking bonus episode, Vasco Duarte is joined by intelligence and tech expert Anthony Vinci to explore how artificial intelligence is fundamentally transforming not just intelligence gathering, but the very health and resilience of democracy itself. The conversation draws on Vinci’s unique experience modernizing the US intelligence community and his latest book, "The Fourth Intelligence Revolution." Together, they dive deep into the new frontlines—where nation-states, criminal actors, and AI systems wage an invisible war for our minds, our data, and our freedoms.
[02:39]
Notable Quote:
"What kind of made me a little bit different: the director realized they needed to bring more tech into the agency and in particular AI… That's what I went there to do." (C, [03:26])
[04:20–07:24]
Memorable Moment:
"When I came along I was watching ImageNet and some of these technologies start to work in computer vision ... realized, hey, we can start to implement this into the workflow and that would change everything." (C, [06:22])
[07:24–10:21]
Notable Quote:
"We are surpassing the point now at which people can keep up with the data, period, no matter how much you process it and analyze it ... where the machine, the AI, has to do the analysis itself, period." (C, [08:42])
[10:21–13:46]
Notable Quote:
"I believe that in the near future close to 100%, if not 100% of cyber espionage and cyber hacking will be done in an automated fashion. It will literally be click the button, decide your target and go and do it. And the only way to keep up with that is to automate back." (C, [12:18])
[13:46–16:26]
Notable Quote:
"TikTok doesn't have to place an ad. What it can do is just influence someone through the algorithm by presenting them information or not presenting them information… The longer a user was on TikTok, the more benevolent view of human rights in China that user had. So it's actually working." (C, [15:13])
[16:26–20:39]
Notable Quote:
"AI will hack our minds in the pursuit of our adversaries' geopolitical goals … Imagine these systems are hacked and information is poisoned in some way so that they are now not just politically persuading because of a political party, they're politically persuading because of the geopolitical national security imperative of the Chinese Communist Party or of the Russian government … and that is a real threat to any democracy globally and we have very little to stop it." (C, [18:14])
[20:39–23:51]
Notable Quote:
"We all also need to become resilient on our own. And that involves training and learning to think like an intelligence officer and protecting yourself from these information operations ..." (C, [22:41])
[23:51–28:57]
Notable Quote:
"An intelligence analyst always triangulates information. They never take one piece of information and call that fact. No matter how trusted the source, they're always going to look at another source, right? ..." (C, [26:07])
[28:57–29:48]
Notable Quote:
"When we think about saving democracy, the threat is that somebody will be capable of influencing our views to go against our interests, right. Like the ability to persuade us to vote against our own interests." (B, [28:58])
Episode Segments:
| Segment | Timestamp |
|--------------------------------------------|-------------|
| Guest Background | [02:39] |
| Modernizing NGA, AI Integration | [04:20] |
| Human vs Machine Analysis | [07:24] |
| Cyber Warfare Arms Race | [10:21] |
| Social Media as Ambiguous Threat | [13:46] |
| AI Mind-Hacking, Persuasion | [16:26] |
| Solutions: Intelligence Revolution | [20:39] |
| Citizen Resilience—"Think Like Intel" | [23:51] |
| Naming & Understanding the Threat | [28:57] |
| Further Resources | [29:48] |
This episode underscores that the collision between AI, data, nation-state interests, and cognitive warfare is not just tech theory—it’s a present-day, existential challenge for democracies worldwide. Vinci’s prescription is both practical and urgent: automation and resilience are our tools, but every citizen must play their part—by thinking critically, triangulating information, and understanding both the subtlety and the scale of the threats we face.
Standout Quote:
"We can't just rely on governments to protect everyone ... we all also need to become resilient on our own." — Anthony Vinci ([22:41])