Podcast Summary
Podcast: Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Host: Jonathan Green
Guest: Robert Brown (Senior Director of Cyber Resilience, Author, AI & Decision-Making Expert)
Episode Title: AI Can’t Go Solo: Why the Human Touch Still Matters
Date: May 26, 2025
Episode Overview
This episode delves into why keeping a "human in the loop" remains essential for effective artificial intelligence (AI) adoption, especially in business. Host Jonathan Green and guest Robert Brown dissect practical pitfalls of current AI rollouts, the real reasons behind high failure rates of AI and data analytics projects, and provide a masterclass in decision-making frameworks that balance data-driven tools with distinctly human insight. The conversation is grounded in concrete advice for business leaders, entrepreneurs, and anyone experimenting with AI tools but struggling to translate innovation into impactful, risk-managed decisions.
Key Discussion Points
1. The Backwards Approach to AI Adoption
- Businesses too often buy new AI tools hoping to find a use for them, instead of identifying critical needs first and selecting tools to match (01:06).
- Many organizations lack a written, structured decision-making process against which to judge if a tool or solution genuinely supports the company's direction.
- Quote: "We're doing everything backwards ... here's a tool, I bought a hammer, now I need to find some nails." – Jonathan Green (01:06)
2. Why Humans Need Decision-Making Structures
- Human decision-making is not naturally built for our complex modern world; we rely on mental shortcuts that worked in primitive environments, not today’s business landscape (02:06).
- A good decision is one that conforms to a standard of quality, not simply one that happens to produce the preferred outcome, because the world is inherently uncertain (03:24).
- Quote: "A good decision is not necessarily one that gives you the outcome you desire. It's one that conforms to a standard of quality ... If we can conform to that standard of quality... we make decisions that [more likely] lead to the outcomes we want." – Robert Brown (03:24)
- Decision-making should be about making bets with incomplete information, aiming to win more often than lose—not winning every time.
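The "bets with incomplete information" framing can be made concrete with a small expected-value sketch. This is an illustrative example, not something from the episode; the options, probabilities, and payoffs are invented.

```python
# A minimal sketch of "making bets with incomplete information": compare two
# options by expected value rather than by whether any single outcome wins.
# Probabilities and payoffs below are invented for illustration only.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Option A: a safe bet; Option B: a riskier bet with a higher upside.
option_a = [(0.9, 10_000), (0.1, -2_000)]
option_b = [(0.5, 40_000), (0.5, -15_000)]

if __name__ == "__main__":
    print(expected_value(option_a))  # 8800.0
    print(expected_value(option_b))  # 12500.0
```

Option B loses half the time, yet over many such bets it is the better choice; that is the sense in which a good decision can still yield a bad outcome.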
3. Clarifying Vague Requirements: The “Find Me a Rock” Problem
- AI projects often suffer from ill-defined desired outcomes ("I'll know it when I see it"), which erodes clarity, timelines, and budgets (05:00–07:09).
- Insisting on a clear definition or example of ideal output is crucial; iteration without an outcome in mind leads to scope creep and inefficiency.
4. Shifting to Values and Objectives
- Humans (especially leaders) resist changing their decision-making styles unless faced with repeated failures or wasted resources (07:42).
- Decision-making success starts with explicit articulation of values, objectives, and preferences—not just data.
- Reference: Ralph Keeney’s “Value-Focused Thinking” (1996) is highlighted as a pivotal manual for this shift (09:36).
5. Ambiguity in Language and Metrics
- Words such as “agent” or “automation” mean different things to different stakeholders; this lack of clarity causes confusion in AI system implementation (10:50).
- Mission statements and key metrics should be a guiding filter—primarily to help say "no" and avoid mission creep.
- Quote: "The point of a mission statement is to say no ... it's a tool to measure your other decisions against." – Jonathan Green (12:09)
6. Silo Mentality and Stakeholder Misalignment
- In larger organizations, silos develop distinct languages and priorities, so definitions of success differ across departments, undermining unified AI efforts (13:29).
- Most departments focus on metrics they're measured on, not the company’s holistic objectives.
7. More Data ≠ Better Decisions
- A prevailing myth is that more data leads to superior decisions. In reality, too much irrelevant data clouds judgment (16:17–18:10).
- Quote: "We have so much data that we now have a duplicate of the problem we had before ... we don't know what to do with [all of it]." – Robert Brown (25:56)
- Historical data reflects only the past and often fails for forward-looking, strategic choices.
- AI and analytics initiatives have high failure rates (up to 80–85%) because they often begin with solutions seeking problems, rather than solving well-defined needs (18:10–20:51).
8. Science Fair Projects vs. Business Value
- Companies often invest in "cool" AI ideas rather than solving real, costly problems (20:54–22:49).
- Solutions must match substantial, common pain points and willingness to pay (the "three Ps": people, problem, pay).
9. Decision Quality and Structured Predictive Thinking
- A formal, visual approach—like influence diagrams—helps map objectives, decompose them into measurable factors, and assign probabilities to outcomes (31:17–38:42).
- Relying on subject matter experts for explanation (not prediction) anchors the process in real-world mechanics and variability (33:46–38:42).
- Use experts to describe why systems vary, not what will happen.
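The decompose-and-assign-probabilities approach above can be sketched with a simple Monte Carlo simulation. The objective, its factors, and every distribution below are hypothetical assumptions chosen for illustration; they are not figures discussed in the episode.

```python
import random

# Hypothetical decomposition of one objective ("annual hours saved by an AI
# tool") into measurable factors, in the spirit of an influence diagram.
# All numbers and distributions are illustrative assumptions.

def simulate_hours_saved():
    adoption_rate = random.uniform(0.3, 0.9)       # share of staff who use the tool
    tasks_per_week = random.triangular(5, 40, 15)  # tasks per user per week
    minutes_saved = random.triangular(2, 20, 8)    # minutes saved per task
    users = 50
    weeks = 48
    return users * adoption_rate * tasks_per_week * weeks * minutes_saved / 60

def probability_of_target(target_hours, trials=100_000):
    """Estimate the probability that the objective meets or beats a target."""
    wins = sum(simulate_hours_saved() >= target_hours for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    print(f"P(>= 10,000 hours saved) ~ {probability_of_target(10_000):.2f}")
```

The point is the structure, not the numbers: each factor is something a subject matter expert can explain ("why does adoption vary?"), and the simulation turns those explanations into a probability of reaching the objective rather than a single-point forecast.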
10. Strategic Use of Experts and AI
- Experts are just as susceptible to bias as anyone else; their unique value is articulating “why” not forecasting “what” (41:19).
- Quote: "Don't ask for predictions, ask for explanations. That's the way to get around it." – Robert Brown (42:17)
- AI should augment human reasoning, but humans must frame objectives, define relevant data, and actively manage risk.
Notable Quotes & Memorable Moments
- On Decision-Making Quality: "I can always tell you beforehand whether or not you made a good decision before you ever experienced the outcome ... but I can't guarantee that you'll get the outcome you want." – Robert Brown (03:40)
- On Vague Project Requirements: "It's the classical 'find me a rock' dilemma ... Can you tell me what you think is pretty? 'No, but I'll know it when I see it.'" – Robert Brown (06:02)
- On Executive Failures: "They came up with a solution, then they went looking for a problem. The same thing is happening right now with artificial intelligence initiatives." – Robert Brown (19:41)
- On Data Overload: "Now we're doing digital hoarding and all of this data's out there ... more data that you're not using doesn't help. And if it's more of the wrong data, accelerating [a] broken process just means you crash faster." – Jonathan Green (23:11)
- On Using Experts Wisely: "Experts are terrible at making predictions in an unaided way ... but they serve a very good purpose when utilized in the right way. Don't ask for predictions, ask for explanations." – Robert Brown (41:19–42:17)
Important Timestamps
- 01:06 — Backwards approaches to AI adoption
- 02:06 — Human heuristics and the need for structure in decision-making
- 03:24 — What makes a decision “good” versus “lucky”
- 06:02 — The “find me a rock” problem with unclear requirements
- 09:36 — Value-focused thinking as a foundation for decisions
- 12:09 — Mission statements as decision tools, not vanity
- 13:29 — Silo mentality and business objectives in conflict
- 18:10 — AI project failure rates and misaligned incentives
- 25:56 — Data overload and the new complexity
- 31:17 — Visualizing and modeling probability for future decisions
- 38:42 — The value of subject matter experts for causal explanations
- 41:19–42:17 — Use of experts: explanations vs. predictions
Actionable Takeaways
- Always start with clear objectives, values, and definitions before selecting or implementing AI solutions.
- Articulate and document what a successful outcome looks like (avoid “I’ll know it when I see it”).
- Structure decision-making: map out influences, break objectives into measurable factors, and work backward.
- Resist the urge to hoard or analyze irrelevant data—focus only on what distinguishes between alternative strategies.
- Use experts to clarify system mechanics, not to predict uncertain future outcomes.
- Mobilize probability and scenario-based thinking for real-world planning, especially under uncertainty.
- Remember that even the best decision-making process increases your odds of preferred outcomes, not guarantees.
Where to Find Robert Brown
- LinkedIn: Robert D. Brown, III (Best method)
- “If a person is interested in having a truly professional, collegial relationship... I answer emails that I get there.” – Robert Brown (43:22)
- Email: robbrown@iberresilience.com
- Current Role: Senior Director of Cyber Resilience at Resilience (insurance company, strategic internal consulting, not AI-specific but highly related thinking)
Final Thoughts
This episode offers a reality check on AI adoption—decision quality, not tool quantity, is what drives business success. By combining structured planning, value-driven objectives, skeptical use of data, and the unique human talent for asking “why,” leaders can escape the cycle of failed “science fair” projects and move toward actionable, resilient AI strategies.
For further episodes, practical AI frameworks, or AI revenue calculators, visit: artificialintelligencepod.com
