Transcript
Santia Ruiz (0:04)
Hi, I'm Santia Ruiz, and you're listening to Statecraft. Today we're joined by Dr. Rob Johnston, an intelligence community veteran and the author of the cult classic Analytic Culture in the US Intelligence Community, a book so influential it's been required reading at DARPA. First and foremost, Johnston is an ethnographer. His focus in that book is how analysts actually produce intelligence analysis: the ways it works, the ways it doesn't. Johnston answers a lot of questions I've had for a while about intelligence and spying. Questions like: why do we seem to get big predictions wrong so consistently? Why can't the CIA find analysts who speak the language of the country they're analyzing? And would your average bettor on Polymarket be a better CIA analyst than the pros? More broadly, there's also this meta question that I always come back to on Statecraft: is being good at this stuff an art or a science? And by "this stuff," in this case, we're talking about intelligence analysis. But I think the question generalizes across policymaking. Would more formalizing and systematizing make our spies better? Would it make our diplomats better? Would it make our EPA bureaucrats better? Or would it lead to more bureaucracy, more paper, worse outcomes? How do you build processes in the government that actually make you better at your job? As a reminder, the full transcript can be found at www.statecraft.pub. If you like this episode, leave a comment and rate us on your podcast platform. Dr. Rob Johnston, thank you for coming on Statecraft.
Dr. Rob Johnston (1:35)
It's my pleasure. Thank you for having me. I really appreciate it.
Santia Ruiz (1:38)
I'm really excited to get into this, and I want to start with just a very simple question. What's wrong with American intelligence analysis today?
Dr. Rob Johnston (1:45)
That's an interesting question. I'm not sure that there's as much wrong as there once was, and I think what is wrong might be wrong in new and interesting ways. What has always stymied analysis is cognitive models: whatever our mental models are of how the world operates and the variables that matter within that world. And that's always influenced by the individual life experience and expertise of the analyst. You're benchmarking a nation, and when I say that, I mean something like the CIA's World Factbook, right? Just this basic encyclopedia of essential knowledge about a country that we all kind of agree on: this is the truth on the ground as we understand it. And then you have these individual differences that show up when analysts start digging into these intelligence questions. And the questions range from, you know, "whither China," which is so broad as to be almost meaningless, all the way down to: okay, here's this weapons platform. Do we know if this weapons platform is at this location, or if it's been moved to that location? And do we know if these factories are associated with it? That level of detail. The whither-China questions are really always driven by poor tasking. The specific, in-the-weeds questions are generally driven by very concrete requirements at a specific time.

I don't think the problem is really all of the cognitive effort that goes into that, nor even all of the differences. And I'm a big believer in cognitive diversity. Frame it that way: not as ethnicity or race, or as diversity in any of those senses, but rather the cognitive diversity that you encounter when you have a bunch of people from different life experiences and different disciplines talking about and trying to solve a problem. If you don't have that, you're missing something. And I see that in engineering all the time. The biggest problem, my two cents, is communicating with policymakers.
Policymakers have remarkably short, and I don't mean this in a negative sense, but remarkably short attention spans. And they're conditioned by a couple of things that the intelligence community can't control. They want to know: is X, Y, or Z going to blow up or not? Okay, that's fine. However, if you say, "Yes, probably in 10 years," there's nothing a US policymaker can do about it: "Oh God, 10 years from now? I can't think about 10 years from now. I've got to worry about my next election." If you say, "Oh, by the way, you've got 24 hours," they think, "Oh God, I can't do anything about it. It's too late. I don't have a lever to pull to effect change in 24 hours." So there's always this timing problem, right? If I give a policymaker two or three weeks, that's sort of the optimal space for the policymaker. But a lot of the consumers of intelligence aren't savvy enough consumers to know that they should ask for that: "In the next three weeks, lay out the three different trends that might occur in country X, and tell me what the signposts are for each of those, so that I can make some adjustments based on ground truth." So if we see X occur, it indicates that there's a greater probability that Y will occur versus A or B. I think that communication between the consumer and the producer really needs a lot of focus and a lot of work. In my experience, it's always an intelligence failure and a policy success. It's never a policy failure. The first person to get thrown under the bus is the intelligence community.
