Transcript
A (0:11)
Hey everyone. Welcome to the Drive Podcast. I'm your host, Peter Attia. This podcast, my website and my weekly newsletter all focus on the goal of translating the science of longevity into something accessible for everyone. Our goal is to provide the best content in health and wellness and we've established a great team of analysts to make this happen. It is extremely important to me to provide all of this content without relying on paid ads to do this. Our work is made entirely possible by our members and in return we offer exclusive member only content and benefits above and beyond what is available for free. If you want to take your knowledge of this space to the next level, it's our goal to ensure members get back much more than the price of a subscription. If you want to learn more about the benefits of our premium membership, head over to peterattiamd.com/subscribe.
B (1:04)
Welcome to a special episode of the Drive.
B (1:06)
In this episode, I step back to share how my thinking is evolving around a topic that sits well upstream of almost everything else we discuss on this podcast, which is how to think scientifically. Now, I get asked this question all the time, and frankly, I don't think until this episode I really had a comprehensive way to approach this important topic. So this is really an introspective episode about why scientific thinking is so difficult for us as a species, why it matters more than ever in an environment flooded with misinformation, and what each of us can do to get better at separating what we want to be true from what the evidence actually suggests. So, without further delay, I hope you'll enjoy this episode of the Drive. Today, I want to talk about a skill that sits upstream of nearly every decision you make about health, policy, risk, and even how to evaluate other people in this space. I want to talk about how to think scientifically. By that I don't mean how to run a lab or memorize statistics. I mean how to evaluate claims, how to update your beliefs when the evidence changes, and how to figure out who to trust when you can't do the analysis yourself, which, as we're going to come to appreciate, is most of the time. If you get good at that, you put yourself in the position to make better decisions than somebody who simply knows more facts but doesn't know how to weigh them. We're going to cover four things here today. First, what scientific thinking actually is beyond, you know, spending time in a lab. Second, why it's so hard for us to do, which has less to do with intelligence than you might expect. Third, what you can do as an individual to get better at it. And fourth, how to find people you can trust when you can't do the analysis yourself, which, as I said a second ago, is going to be most of the time for most people. One idea is going to thread through this entire episode, and I want to put it on the table right now. The goal of thinking scientifically is not simply to be right, it's to be less wrong over time. Science is a process built around that principle. And what I want to do today is help you engage with it more skillfully. This is a topic I get asked about very often, and I think, honestly, until now, I haven't had a great consolidated approach for laying it out. So let's start with what we actually mean when we say think scientifically. There's a common misconception that scientific thinking is something scientists do in labs and the rest of us just receive in the form of results. But that's not what I'm talking about at all. Thinking scientifically is a way of engaging with claims about the world, any claims, not just ones that come with a citation attached. At its core, it means generating hypotheses, possible explanations for why something might be the way it is or how something works. It means testing those hypotheses against experimental evidence. It means updating your beliefs when the evidence changes. And it means tolerating uncertainty throughout this entire process. It means separating what you want to be true from what the evidence suggests is true and recognizing, really recognizing, how often those two things are in tension. As Richard Feynman, someone we're going to refer to a few times today, one of the greatest scientific thinkers in history, once said, the first principle is not to fool yourself, and you are the easiest person to fool. Scientific thinking means being more invested in the process that produced a conclusion than in the conclusion itself.
Again, I want you to say that again with me, because that is not intuitive. Scientific thinking means being more invested in the process that produced a conclusion than in the conclusion itself. Most of us evaluate claims by asking, is it true? A scientific thinker asks a different set of questions first. How did they arrive at this? What's the evidence? How strong is it? What are the alternative hypotheses or explanations? And scientific thinking means understanding that I don't know and it depends are often the most honest available answers. This idea, I don't know, is critically important, and I don't think we discuss it often enough. In many ways, I don't know can always be the first answer to any scientific question. The second answer is then our best understanding based on the available evidence. But out of ease, and out of confidence, and out of trying to avoid sounding like a broken record, we often just skip straight to the second answer. We drop the uncertainty. And when we do that, we lose something essential. We lose the thing that makes scientific thinking scientific. Because here's the thing. One of the most useful ways to think about science, especially in medicine, is by focusing on two of its core functions. The first is ruling things out, and the second is getting less wrong over time. There's a famous saying within scientific research, often attributed to George Box, but honestly, I've seen it attributed to 20 other people: all models are wrong, but some are useful. More often than not, we are not proving a claim in some final, absolute sense. We are comparing explanations, testing predictions, and gradually gaining confidence in the ones that survive contact with data. We rule things out one by one until we're left with the explanation we can't rule out. And then we make a logical leap. We say we've eliminated every other possibility we can think of, so we have growing confidence that this explanation is correct. But notice the qualifier here. Every other possibility we can think of. It's not a proof. Hard proof only exists in formal logic and mathematics, where we can demonstrate within a set of defined rules that something must be true within those defined rules. The rest of science relies on experimentation, trying to discover what the rules are, doing our best to accept or reject rules based on available evidence and deducing the best possible explanations within the landscapes we've identified. This is fundamentally different from a derived proof. Now, I started my academic career in mathematics, and we spent a lot of time working on proofs, and it was very difficult for me when I transitioned from mathematics to medicine, which was so fundamentally messy. Instead of proofs, we rely on our best approximations of reality. But true certainty is not even on the table. Our models are, at their core, probabilities built on probabilities. They aren't proofs, they're simply the best we've got. Sometimes that distinction barely matters in practice. Take gravity. The idea that objects with mass attract each other is an empirically derived theory, one that is not derived from a true mathematical proof, but has experimentally outcompeted all other explanations and provided countless verified predictions. This theory, first proposed by Isaac Newton in 1687, has proven so successful that future iterations didn't destroy it. They refined it, with incredible implications. At the turn of the 20th century, Einstein proposed that gravity doesn't only lead to objects attracting each other.
Gravity quite literally bends time and space, slowing and speeding up the passage of time. A proposition that was quite frankly insane sounding at the time, and maybe insane sounding now, except that it works. For example, due to the incredible speed of man-made satellites circling the Earth, together with their distance from Earth decreasing the pull of Earth's gravity, the satellite systems responsible for GPS have to adjust their clocks by 38 microseconds every day. Time literally passes at a different rate on these satellites; on net, their clocks run faster than clocks on the ground. And without these adjustments, which are predicted by Einstein's theory of relativity, our GPS system would misalign by 8 meters per minute, or 11 kilometers per day. From principles discovered by experimentation, we have satellites orbiting over our heads, perfectly balancing the pull of Earth's gravity to rotate around the planet near endlessly. On these satellites, we adjust their clocks using experimentally derived principles of time dilation due to gravity. Instead of sending these hunks of metal crashing onto Earth, or getting data from them that errs on the order of kilometers, our theory of gravity permits the coordinated movement of hundreds of satellites over decades, able to pinpoint exactly where your room is in your house or where you forgot your phone. Now, do we know everything about gravity? No. In fact, combining our best theory of gravity with our best theory of particle physics is one of the greatest unsolved scientific problems of our day. But have we discovered enough about gravity to be useful? Undeniably. And confidence in our models isn't restricted to physics, although admittedly that's where the highest confidence tends to concentrate. But take something that impacts biology. Take smoking. We have overwhelming evidence that smoking causes cancer. The epidemiologic data with its enormous hazard ratios, the mechanistic understanding, the dose-response relationships: taken together, they've ruled out any plausible alternative. At that point, is it really so different to say we've proven it? Not much. When the evidence is so overwhelming, the gap between our least wrong model and capital-T truth becomes vanishingly small. But most questions in medicine don't look like that. Most live in the middle. The evidence is suggestive, sometimes highly suggestive, but imperfect. The model is useful but incomplete. And the conclusions are right enough to act on now, but not final. Dietary cholesterol is a good example. For decades, the accepted answer was straightforward. The cholesterol we eat raises the cholesterol in our blood, which raises cardiovascular risk. This was treated as settled. Dietary guidelines were built around it. Eggs became the enemy. And the evidence did point in that direction. It wasn't fabricated and it wasn't a conspiracy, but it was incomplete. The relationship turned out to be far more complex and far more individual than the simple causal chain suggested. If the field had held onto that finding as our best current model rather than settled, the guidelines might have been updated much sooner. Now, here's the part that's conceptually, maybe even emotionally, difficult. The implication is this. Some of the guidance that exists right now, today, as we're having this discussion, is going to turn out to be as incomplete as eggs and cholesterol. Some of the guidance that you and I believe in and follow. Now, the kicker is, I don't know which parts. Nobody does yet. At least not at a conscious level.
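As a quick aside for anyone who wants to check the GPS arithmetic above, here is a minimal sketch. The only physical input is the commonly cited 38-microseconds-per-day net clock offset quoted a moment ago; the constant names are just for illustration:

```python
# Sanity check of the GPS numbers quoted above.
# Assumption: the commonly cited net relativistic clock offset for GPS
# satellites of ~38 microseconds per day (a gravitational speed-up of
# roughly 45 us/day minus a ~7 us/day slow-down from orbital velocity).

C = 299_792_458                  # speed of light, m/s
CLOCK_OFFSET_S_PER_DAY = 38e-6   # uncorrected clock drift, seconds per day

# A timing error translates into a ranging error at the speed of light.
range_error_per_day_m = C * CLOCK_OFFSET_S_PER_DAY
range_error_per_min_m = range_error_per_day_m / (24 * 60)

print(f"~{range_error_per_day_m / 1000:.1f} km of error per day")  # ~11.4 km
print(f"~{range_error_per_min_m:.1f} m of error per minute")       # ~7.9 m
```

Running it reproduces the figures from a moment ago: roughly 8 meters of drift per minute, or about 11 kilometers per day. Okay, back to the uncomfortable part.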
But the history of science tells us that some of what we currently treat as settled simply is not. We have to live with that. We have to make decisions based on the best available knowledge, while staying aware that the best available truth is rarely the whole truth. It means holding room for doubt and living confidently anyway. It requires being a walking contradiction. The good news is that thinking scientifically works precisely because it's built to address its own imperfections. The system has self-correction baked in, as long as you don't freeze your conclusions in place and defend them like territory. And this ability can be honed. It's not a talent or a personality trait per se. It's a discipline, a practice that requires effort, humility and repetition. You can train it. You can get meaningfully better. That said, scientific thinking is a practice, it's not an achievement. Good thinking now doesn't guarantee good thinking later. It's something you have to keep doing or it atrophies. Most of us think we already do this, but most of us are wrong about how consistently and effectively we do it. So let's talk about why. Here's the core. Thinking scientifically is not just hard, it's unnatural. And I mean that in a very literal biological sense. We're primates. That means we have evolved as social animals for roughly 50 million years. For the vast majority of that time, your survival depended on your standing within a social group. If the group accepted you, you had access to food, mates and protection. If the group rejected you, you were in real danger. Exile wasn't an inconvenience, it was often a death sentence. Social belonging was a survival imperative. And our brains were shaped over tens of millions of years to be exquisitely good at navigating social environments, reading faces, building alliances, signaling loyalty, maintaining status. This is what our cognition was optimized for. We do not just use social skills. We fundamentally rely on social groups and social information. We are also an intelligent species. But most learning for most of our history happened through imitation or through language. And both of these are intrinsically social. You learn from members of your group, be it by watching or listening. The information you receive is filtered through trust, status and identity. Even reading a study alone in your office is, by the very act of using language, participating in a social system of knowledge. Now here's where the timeline narrows. The first stone tools are around 3 million years old. Homo sapiens, which is what we are, showed up about 250,000 years ago. Formal logic was systematized roughly 2,500 years ago. And a formal system of empiricism, the basis of the scientific method, is maybe 400 years old. That's it. A few hundred years of empiricism built on top of 50 million years of primate social cognition. Logic and hypothesis testing are not our default state and can even be at odds with our fundamental sociability. We form groups, form identities around these groups, and let group membership shape what we believe and how we interpret evidence. Social information can and will override logical information. This isn't a bug that only shows up in uneducated people. It's a basic feature of human cognition. Evolution shaped our brains, but evolution works on things that are good enough. Good enough to survive as a hunter-gatherer was good enough for evolution. We weren't shaped to be the ultimate logicians.
We were shaped to be good enough logicians to outcompete other animals, to access resources other animals couldn't, to make fast decisions in uncertain environments, and to do it within an intricate social structure. That's a very different optimization target than figure out what's inviolably true. The pursuit of science requires almost the opposite. Holding multiple hypotheses, tolerating uncertainty for years, understanding counterintuitive concepts like conditional probability and effect size, willingness to change beliefs that are deeply held and socially costly to abandon. When you frame it that way, the real question isn't why scientific thinking is hard, it's how we manage it at all. It's frankly more amazing that we can do this than it is surprising that we struggle with it. And yet we do manage it. Now, if there were a second thesis within this section, it would be this: despite the limitations of our biology, despite our pull toward fast, identity-protective social reasoning, we have invented, notice I use the word invented, a remarkable set of corrective tools. Importantly, we built structures, formal structures, specifically designed to counteract our natural tendencies. Peer review, blind experiments, pre-registration of hypotheses, statistical frameworks. These aren't just tools. I think of them as prosthetics for objectivity. They exist precisely because we've recognized at some point, collectively, that we couldn't trust our own unassisted judgment. And instead of giving up, we engineered workarounds. Think about what a double-blinded clinical trial actually is. It is an explicit admission that even well trained experts can't be trusted to evaluate outcomes without being influenced by what they hope to find. So we remove that information. We build a system that assumes we're biased and corrects for it. Science also institutionalized productive disagreement. Peer review is adversarial by design. And the norm of replication, the idea that your finding doesn't count until someone else can reproduce it, says something remarkable. We don't trust any one of us, but we trust the process. And at the individual level, we can train ourselves to be better, to engage with science as a process rather than a set of facts to accept or reject. It is slow and it is humbling, but it works. All right, so far we've been talking about the structural level. How science as an institution has built systems to overcome our natural cognitive limitations. Now I want to shift gears and talk about something that's probably more directly relevant to most of you. How we as individuals, even if we're not professionally interacting with the scientific method on a daily basis, can ourselves integrate scientific principles into our daily lives. We're going to look at five ideas, and while they're all important tools, we're going to really dive into the fifth: how to approach outsourcing our thinking when necessary. First, let's start with the idea that we want to treat certainty as a cue to slow down. When you encounter a claim and you feel certain about it, treat that certainty as a signal to pause. Certainty is a feeling, not an indicator of truth. Your brain generates it for all sorts of reasons that we've talked about: social consensus, emotional resonance, familiarity, repetition, the confidence of the speaker. None of those have anything to do with whether the claim is correct. When you notice certainty, ask yourself, why do I believe this?
If the answer is social or based on identity ("everyone in my feed agrees," "the person sounded confident," "people I identify with believe this," "I want this to be true"), that's a red flag. It doesn't mean the claim is wrong, but it means your basis for believing it is social, not evidential. If the answer is that you've seen the data, you've understood it, you've considered alternatives, and you still find this conclusion most compelling, you're in a much better place. Now, here's the recursive part that makes this so powerful. Asking yourself this question in the first place is step one, and the more you do it, the more honestly you'll do it. If you start by being certain that you're always logical, you'll eventually learn to ask yourself why you're so certain you're being logical. The questioning deepens over time. It's hard to say when one masters this skill, because, frankly, I believe it looks different for different people. But this is the process. Question your certainty. Question your questioning. Find the certainty and the uncertainty, and build comfort in that uncomfortable space. You won't become perfect at this. I know I sure haven't. But we can get monumentally better. We can get better at knowing what we do not know. And that awareness is worth a lot. Okay, second: judge the process, not just the conclusion. When someone makes a claim, most of us instinctively evaluate the conclusion. Is this true? Do I agree? That's natural. But it's not the first question to ask. The first question should be, how did this person arrive at this? What evidence? How strong? What alternatives were considered? What do critics say? And have they engaged with those criticisms? When you start asking these questions, something shifts. You stop evaluating claims as things to agree or disagree with, and you start evaluating them as products of a process. And the quality of that process tells you far more than the conclusion itself. A good process can produce a wrong conclusion. That happens in science all the time. But a bad process that happens to produce a right conclusion is not something to trust, because it got there by accident and it won't be reliable going forward. Engaging with the process, how did we get this conclusion, is the key. Now let me make this concrete with a tangible example. Detox cleanses, juice cleanses, supplement protocols. Products that claim to remove toxins from your body. It starts with a real observation. You don't feel as good as you used to. Maybe you're tired, your digestion is off, your skin is feeling dull. And we all know there are chemicals, pollutants, and food additives in our environment that aren't doing us any favors. Okay, all of this is true. The basis is real. That's what lures us in. Then comes the conclusion. Drink this, stop eating that, take this capsule, and those things go away. The process failure is the absence of everything in between. How specifically does this product remove toxins? Which toxins? Were they measured before and after? How were they measured? What was the control? What's the mechanism by which the juice or supplement binds to, mobilizes, and eliminates a specific harmful substance from your body? Almost every time, the answer to those questions is silence, or vague gesturing at flushing and purifying. No real mechanism and no real study. A conclusion has been asserted with no specific hypothesis being tested, no mechanism being described, no blinding, no control.
It's a leap straight from a real observation, a real problem, to a marketed solution with none of the work done in the middle. We can even be tricked by lived experience here. Maybe your headaches do go away when you consume nothing but lemon juice for three days. Maybe your skin does clear up or your digestion feels better. Maybe there is a real effect. But what confidence do we have in its cause? By dramatically altering our diet, we alter numerous features of our physiology. By definition, we're eating different foods, drinking different fluids. We're changing the very inputs our bodies metabolize, giving rise to how we feel and think. How do we know, without an appropriate process, that a toxin was purged from the body rather than a toxin was removed from your diet? Even if the effect is real, its explanation can miss the mark entirely and get you stuck repeatedly enduring three to seven days of intentional starvation on some proprietary placebo for years, when cutting some element of your diet, some processed food for example, or portion size, is actually doing the work. Without a controlled investigation, we're fed a conclusion, but not the means to judge its validity. Now, detox cleanses might feel like an easy target, but the argument structure, real observations straight to confident conclusions, can be far more subtle than a bottle of green juice. It shows up in supplement marketing, in wellness claims, and in things that sound much more sophisticated than a cleanse. What we're training ourselves to notice is the jump from problem to conclusion with nothing rigorous connecting them. And while we're on supplements, here's a related example of why process questions matter. You'll often see supplement companies claim their products are third-party tested, and that sounds reassuring. But if you ask a process question, what specifically are they testing for, the answer is often simply heavy metal contamination. Which means you aren't getting assurance that your Ashwagandha capsule contains Ashwagandha. You're getting, at best, assurance that your Ashwagandha capsule doesn't contain toxic levels of lead. It's not that lead contamination isn't a problem, nor is it an overt lie that the product was tested. But by omitting the step where you question the process, asking what the third-party testing was for specifically, we can be lulled into a sense of confidence that the testing process in reality wasn't even designed to provide. This is what evaluating processes looks like in everyday life. Not necessarily reading clinical trials per se, just pausing long enough to ask how someone got from the problem to the conclusion, and noticing when the answer is they didn't bother, or they didn't bother to do it right. Okay. Third, notice when identity is doing your thinking. This is a hard one, maybe the hardest on the list. Coalitional thinking is our default mode for all of the reasons I described earlier. It is hardwired into our DNA motherboard, and it can be the enemy of scientific thinking. No group is always right. No political group, no activist group, no scientific group. If you find yourself believing that your team has the right answer on every issue, that's not a sign that you found the right team. It's a sign that your group identity is doing your thinking for you. There's a great line from the movie Men in Black: a person is smart; people are dumb, panicky, dangerous animals, and you know it. Individual thinking can be remarkably rational. Groups driven by identity often aren't.
The discipline is to consider arguments on their merits, not based on where they're coming from. That means engaging with arguments from people you generally disagree with and questioning arguments from people you generally trust. Let me give you two examples. Most of us are familiar with Galileo and the heliocentric model. Galileo presented evidence that the earth revolves around the sun. But this conflicted with Aristotle's physics, which the Church had adopted as essentially doctrine. Galileo was tried, forced to recant, and spent the rest of his life under house arrest. The evidence didn't matter, because the conclusion threatened the identity and authority of the institution evaluating it. This example is famous, but it isn't terribly relatable. The example I find even more instructive comes from inside the medical community itself. It's actually something I wrote about in Outlive. In the 1840s, Ignaz Semmelweis was working in the maternity ward of the Vienna General Hospital. The hospital had two clinics, one staffed by doctors and one by midwives. Mortality from childbed fever in the doctors' clinic was roughly five times higher than in the midwife clinic, and Semmelweis wanted to understand why. He systematically ruled out explanations, birthing positions, the route of the clinic's priest, until a colleague cut his finger during an autopsy on one of the childbed fever patients and died of symptoms identical to childbed fever. This was the light bulb moment for Semmelweis. The dead bodies were carrying something, and doctors were carrying that material from autopsies directly to deliveries. Midwives didn't perform autopsies, so they couldn't carry material from the autopsy to the maternity patient. He required physicians to wash their hands with chlorinated lime before working with maternity ward patients. And mortality dropped from 18% to under 2%, and in some months all the way to zero. And yet the medical establishment rejected it. Now this is where it gets really interesting for our purpose. Because the rejection wasn't purely religious or political. It wasn't just that doctors didn't want to believe it. Germ theory didn't exist yet. They had what sounded like a legitimate scientific objection. The dominant theory of disease transmission was miasma, the idea that disease was caused by bad air, by noxious fumes. Now, under that framework, the idea that invisible material on your hands could transmit disease didn't make any sense. You can't wash bad miasmas off your hands. Semmelweis had gone through the right process and found the right conclusion. But he couldn't explain why his intervention worked in terms that fit any accepted theory. That mismatch let doctors tell themselves and each other that they were rejecting Semmelweis on scientific grounds: his conclusions couldn't fit the prevailing theory. But layered underneath that objection was something much more primal. Accepting Semmelweis's data meant accepting that doctors had been killing their patients. That their own hands, the instruments of healing, the symbols of their professional identity, had been vectors of death. That was an identity-level threat. And the stated scientific objection gave cover to the unstated identity defense. That pattern, identity-based motivation hiding behind scientific-sounding skepticism, undoubtedly happens today. The lesson isn't that you should distrust doctors.
It's that even trained experts can resist evidence when accepting it threatens identity, status, or the story they've been telling themselves. Even in the face of overwhelming evidence, even when the process is right. That's how powerful the pull of identity can be. Becoming aware of it is difficult, but critical for scientific decision making. Okay, the fourth one. Don't confuse criticism with understanding. This one is practical and I think underappreciated. In science we need to respect the asymmetry between building knowledge and attempts to discredit it. It is vastly easier to criticize a study than it is to design and run one. It's vastly easier to poke holes in evidence than it is to generate evidence. This is just a structural fact about how science works. Every study can be criticized. I mean that literally. Show me any study ever published, and I, or any expert in that field, can find a legitimate methodological concern. The sample size could have been bigger. The follow-up period could have been longer. The control group wasn't perfectly matched. There's residual confounding. The primary endpoint was a surrogate. The study population doesn't quite generalize the way the authors suggest. Those are real concerns, and they matter, but they apply to everything. So the question is not, can this study be criticized? The answer to that question is always yes. The question is, is this study informative despite its limitations? And answering that question requires a kind of judgment that pure criticism doesn't require: a willingness to synthesize, to weigh evidence, to say this isn't perfect, but it moves the needle. There's a concept called Brandolini's law, the bullshit asymmetry principle, which says the energy needed to refute bullshit is an order of magnitude larger than what's needed to produce it. Someone committed to casting doubt can always outrun someone committed to building understanding. As Mark Twain said, a lie can travel halfway around the world while the truth is putting on its shoes. When hot-button issues arise, the goal of science is still to move the field forward, gathering better evidence, improving models. Be wary of people who only criticize and never synthesize. It's not that scientists should be shielded from the public, but the goal should be focusing on what is built and what data exist and how this helps us get closer to the truth, not playing whack-a-mole with whatever mess seems to be sticking to the wall this week. Okay, the fifth and final: outsource your thinking carefully. It is important to recognize that no human being can exist in the modern world, given the sheer expansion of knowledge, without relying on the expertise of others. This has nothing to do with intelligence. It is simply not possible. It is metaphysically impossible for any one individual to be a true expert across all domains. And every day each of us makes decisions, from boarding an airplane to navigating a complex system, that depend on the judgment and competence of people whose domain knowledge exceeds our own. There are effectively infinite examples of this reliance, which makes the central question unavoidable. How do we decide in whom to place our trust? Before we dive in, I'll take a moment to step back and tell you where I want to take us. The goal here is for you to build what I think of as a personal board of advisors for any topic that matters to you. Identify two or three people or outlets whose judgment you trust, and be honest about why you trust them.
When you find these people, have them help you cut through the noise. When I'm evaluating whether or not to trust someone on a scientific topic, I run through a set of questions. Not necessarily formally, not with a clipboard or an Excel sheet, but these are the things I'm thinking about when I'm reading a paper, listening to a podcast, or watching someone on YouTube. I'm thinking about it in three layers. First, who is this person? Second, how are they thinking? Third, what should make me cautious? And we'll walk through each of these layers. Okay, layer one. I ask, who is this person? What is their actual expertise? We'll get this out of the way first: credentials aren't conclusive, but they're a meaningful starting point. If someone has a PhD in molecular biology and they're talking about a molecular biology finding, the starting probability that they know what they are talking about is higher than for someone who learned about it from a YouTube video last week. This is not elitism. It's called Bayesian reasoning. Credentials set a prior, but that prior should be updated based on how the person actually reasons. (There's a small sketch of what that updating looks like at the end of this layer.) Do they show their work? Do they engage with criticism? Do they acknowledge what they don't know? The worst mistake is dismissing the importance of credentials entirely. The second worst mistake is treating them as conclusive. Which is to say that credentials aren't the whole story. With or without them, the question is the same. Has this person done the work? Are they deeply embedded in the field? Or are they weighing in on something they're passingly familiar with? How long have they been at it? What is their track record? And it's worth remembering, nobody is an expert in everything. There are several examples where a Nobel laureate in one field goes on to make outlandish and conspiratorial claims that contradict every single expert in a different field. In fact, this was the case with Kary Mullis, the inventor of PCR and winner of the Nobel Prize in Chemistry, who denied that HIV was the causal virus in AIDS. His ideas drove policymaking in South Africa in the early 2000s and likely led to the death of more than a quarter of a million people. The people you trust on one topic may be completely out of their depth in another. Someone can be the right person to trust in one domain, but the board of advisors needs other people to fill in the gaps. Another big question to ask when judging someone's credibility is how they approach presentation. Are they explaining or performing? Science uses a lot of technical language, or jargon, that isn't necessarily a part of normal day-to-day language. How the jargon is used matters. Using jargon without context is a hallmark of an attempt to mislead a listener, deploying technical terms to impress an audience rather than inform them. It is the performance of sounding scientific in an attempt to appear credible, rather than using technical terms to add precision to complex ideas. We use jargon on this podcast, but we try to use it to bring you with us, not to create a gate. If someone is hiding behind jargon instead of using it to elevate their audience, they could very well be hiding something in hopes the audience won't catch it. Credentials and familiarity with technical language are good on paper, but how they're utilized, how this person interacts with their field and with their audience, are the deciding factors in building trust.
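Here is that promised sketch of credentials-as-a-prior. To be clear, every number in it is invented purely for illustration; nothing here is a measured property of any credential or any person:

```python
# Illustrative only: made-up numbers showing the shape of Bayesian
# updating, where a credential sets a prior and observations about
# how a person reasons move that prior up or down.

def update(prior: float, p_obs_if_reliable: float, p_obs_if_not: float) -> float:
    """Posterior probability that a source is reliable, after one observation."""
    numerator = prior * p_obs_if_reliable
    return numerator / (numerator + (1 - prior) * p_obs_if_not)

# Credentials set a prior (hypothetical value for an expert in their own field).
p = 0.70

# Observation: they engaged with the strongest version of the counterargument.
p = update(p, p_obs_if_reliable=0.8, p_obs_if_not=0.3)   # prior moves up

# Observation: they dismissed a valid criticism without addressing it.
p = update(p, p_obs_if_reliable=0.2, p_obs_if_not=0.6)   # prior moves down

print(f"posterior: {p:.2f}")
```

The point isn't the specific numbers. It's the shape of the process: the prior moves up when someone shows their work and down when they dodge, and no single observation settles it. Which brings us to the second layer. How are they thinking?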
Now, this layer is, of course, critical, but it's a bit more nuanced, because again, science is a process, and we want to know the person we are listening to is engaging with a process, not simply a series of conclusions. So the first question here is, do they show their reasoning? Not just the final answer, but how they got there, why they believe what they believe, what evidence they're relying on, and what alternatives they've considered. Transparent reasoning is one of the clearest signals of someone worth listening to. We also want to know how they treat disagreement. Pay attention to how an expert talks about people they disagree with. A steel-manner presents the strongest version of the opposing argument and then explains why they still disagree. A straw-manner presents the weakest version of the position so that it's easy to knock down. If someone consistently engages with the best versions of the other side, they're doing real intellectual work. And they're far more likely to update when the opposing case gets stronger. If they only attack the weakest version, they're performing, not seriously engaging with the fundamental possibility that their conclusions could be wrong. And to know how they're reaching their conclusions and how they're engaging with disagreement, we ask: are their opinions anchored to data? Let's go back to Richard Feynman, one of my favorite thinkers, but this time in his own words, from a lecture that he gave probably sometime in the 1960s.
