Podcast Summary: The Analytics Power Hour — Episode #254: Is Your Use of Benchmarks Above Average? with Eric Sandesham
Introduction
In Episode #254 of The Analytics Power Hour, hosts Michael Helbling, Moe Kiss, Tim Wilson, Val Kroll, and Co-Host Emeritus Jim Cain delve into the pervasive use of benchmarks in business analytics. Joined by guest Eric Sandesham, Founder and Partner at Red and White Consulting Partners, the discussion explores the effectiveness, pitfalls, and alternative perspectives on benchmarking as a metric for business performance.
Setting the Stage: Understanding Benchmarks
The episode opens with host Tim Wilson referencing the iconic "All the children are above average" segment from the radio show A Prairie Home Companion ([00:05]). This sets a thematic foundation for the episode's focus on benchmarks and averages in business analytics.
Guest Introduction: Eric Sandesham ([01:00])
Tim introduces Eric Sandesham, highlighting his extensive experience in business decisioning and operating processes across various industries. Eric's credentials include roles at Red and White Consulting Partners, adjunct faculty positions, and leadership roles at SAS and Citibank Singapore. His recent article on Medium, "The Problem with Benchmarks," serves as the catalyst for the episode's exploration of benchmarking.
Human Nature and the Obsession with Comparisons ([03:40] Val Kroll)
Val Kroll articulates the inherent human tendency to compare, both personally and professionally. She explains that organizations frequently request benchmarks as a "yardstick" to understand their position within the market or relative to competitors. Val emphasizes that while benchmarking provides a reference point, it often serves as an external evaluation rather than a tool for informed decision-making.
Challenges with Benchmark Requests ([04:57] Tim Wilson & [05:05] Eric Sandesham)
Tim shares his personal frustration with frequent benchmarking requests, hinting at the common sentiment of viewing benchmarks as unproductive or overly simplistic. Eric agrees, noting the nuances between benchmarking for startups versus established businesses, particularly in understanding opportunity sizing and strategic positioning.
Distinguishing Benchmarks from Information Signals ([04:57] Val Kroll)
Val introduces the concept of "information signals," differentiating raw data from the actionable insights derived from it. She categorizes benchmarks into "front end" and "back end" signals:
- Front End Benchmarks: Used as inputs for decision-making, helping set targets based on various data points.
- Back End Benchmarks: Serve as evaluative metrics to assess whether an organization is on track post-decision.
Val criticizes the predominant use of back end benchmarks as lazy management, suggesting that organizations rely too heavily on external metrics without integrating internal data collection and analysis.
The Fluid Nature of Benchmarks ([07:24] Eric Sandesham & [07:29] Tim Wilson)
Eric discusses how benchmarks can be misaligned with an organization's unique context, such as differing business strategies or customer bases. Tim echoes this sentiment, expressing skepticism about using benchmarks as definitive measures of success, arguing that they can distract from meaningful internal evaluations.
Blending Benchmarks with Internal Metrics ([10:41] Eric Sandesham & [11:16] Val Kroll)
Eric proposes a hybrid approach, using external benchmarks as one of multiple inputs alongside internal performance data to set realistic and contextually relevant targets. Val cautions that blending benchmarks introduces challenges in attribution due to external noise and varying market conditions, advising that such metrics be used with discretion.
Cultural and Contextual Limitations of Benchmarks ([22:14] Eric Sandesham)
Eric highlights the dangers of applying benchmarks across different cultural contexts, using Net Promoter Score (NPS) as an example. He points out that NPS does not account for cultural variations in feedback, leading to misleading interpretations when applied globally.
Case Study: Net Promoter Score ([24:48] Tim Wilson & [25:12] Val Kroll)
Prompted by Tim's question about the validity of NPS, Val references academic research that has challenged its standing as a reliable predictor of business performance. She notes that while NPS is popular for its simplicity, it lacks the diagnostic power of more comprehensive customer satisfaction measures.
Internal Benchmarks vs. Baselines ([32:35] Eric Sandesham & [33:35] Val Kroll)
Val differentiates between benchmarks and baselines:
- Benchmark: A comparative metric against competitors or industry standards.
- Baseline: An internal threshold that represents the minimum performance required for business viability.
Tim expresses frustration with the interchangeable use of internal benchmarks and baselines, arguing that relying solely on internal data can lead to complacency and lack of competitive insight.
The Problem with External Benchmarks ([37:19] Eric Sandesham & [37:43] Val Kroll)
Eric questions the reliability of external benchmarks, suggesting that external data often comes from sources with conflicting incentives, such as large consulting firms that might benefit from client performance appearing below benchmark. Val agrees, emphasizing that external benchmarks often lack the methodological rigor to account for organizational differences, rendering them more noise than signal.
The Importance of Context in Benchmarking ([43:00] Val Kroll & [43:59] Moe Kiss)
Val underscores that benchmarking must be contextual, considering factors like marketing budgets and operational differences. She argues that without understanding the underlying context, benchmarks become meaningless comparisons. Moe adds that sentiment analysis and competitive intelligence should be approached as components of broader market research rather than standalone benchmarks.
Final Thoughts: Rethinking Benchmarking ([50:09] Eric Sandesham & [32:57] Val Kroll)
As the conversation wraps, Eric reflects on the diverse interpretations of "benchmarks" across departments, particularly between finance and analytics teams. Val reinforces that benchmarks should not dictate business strategy but rather inform a spectrum of data points used for strategic decisions.
Last Call: Shared Insights ([51:32] to [64:20])
The episode concludes with a "Last Call" segment where each participant shares personal insights unrelated to benchmarks:
- Val Kroll: Discusses a Medium article on artificial intelligence, highlighting the distinction between problem-solving and problem-finding as key to human intelligence.
- Moe Kiss: Commends Eric's writing style and promotes the upcoming Experimentation Island conference, where the hosts will be speakers.
- Eric Sandesham: Recommends engaging podcast episodes on companies like Costco and Hermes and shares admiration for an Instagram account focused on growth and personal development.
- Tim Wilson: Praises Eric’s weekly posts and endorses Tyler Vigen’s Spurious Correlations, particularly enjoying the humorous academic-style explanations of unrelated metric correlations.
Conclusion
Episode #254 of The Analytics Power Hour presents a critical examination of benchmarking in business analytics. Through insightful dialogue and expert perspectives, the hosts and guest Eric Sandesham challenge the conventional reliance on benchmarks, advocating for a more nuanced, contextual, and internally informed approach to measuring business performance. The discussion emphasizes the importance of understanding the limitations and potential biases inherent in benchmarking practices, encouraging listeners to seek a balanced and informed methodology in their analytical endeavors.
Notable Quotes
- Tim Wilson ([00:08]): "Analytics topics covered conversationally and sometimes with explicit language."
- Val Kroll ([03:40]): "It's such a built-in phenomenon as a human species to always compare while we're growing up."
- Eric Sandesham ([10:41]): "What if you are using those external benchmarks as an input to help you set your own target as one of, say, many inputs?"
- Tim Wilson ([09:10]): "You get somebody… and that just smacks poor business management thinking."
- Val Kroll ([24:51]): "Net Promoter Score… has been debunked academically because it doesn't hold up to scratch."
- Eric Sandesham ([22:14]): "iPhone users versus Android users… customer lifetime value is different."
- Val Kroll ([37:43]): "If you can fiddle a number to make the client happy, that's not going to be useful."
Key Takeaways
- Benchmarking as Comparison: Benchmarks are often used as external comparison metrics but can be misleading without considering contextual differences among organizations.
- Front End vs. Back End Benchmarks: Benchmarks serve different purposes depending on whether they are used to inform decisions (front end) or evaluate performance post-decision (back end).
- Limitations and Biases: External benchmarks may carry inherent biases and lack the methodological rigor needed for accurate comparisons, making them unreliable as sole performance indicators.
- Internal Metrics and Baselines: Establishing internal baselines is crucial for setting realistic performance thresholds, independent of external benchmarks.
- Cultural and Contextual Sensitivity: Benchmarking metrics like NPS can falter when applied across different cultural contexts, leading to inaccurate interpretations.
- Holistic Approach to Data: Effective decision-making should integrate multiple data points, including internal performance metrics and contextual market research, rather than relying solely on benchmarks.
Recommendations for Listeners
- Critical Evaluation: Approach benchmarking requests with a critical eye, questioning the relevance and applicability of external benchmarks to your specific organizational context.
- Integrate Internal Data: Prioritize internal performance data and establish baselines that reflect your unique business environment and strategic goals.
- Contextual Analysis: Always consider the broader context—such as cultural differences and market conditions—when interpreting benchmark data.
- Educate Stakeholders: Engage in conversations with executives and stakeholders to clarify the purpose and limitations of benchmarking, fostering a more informed and strategic use of performance metrics.
Further Engagement
Listeners are encouraged to engage with the hosts and guest through various platforms:
- Social Media: Follow @analyticshour on Twitter.
- Website: Visit AnalyticsHour.IO for more resources and to submit topics or guest suggestions.
- LinkedIn Group & Slack: Join the Measure Slack group for ongoing discussions and community support.
Upcoming Events
- Experimentation Island Conference: Scheduled for February 26-28, the inaugural conference promises insightful sessions on benchmarking and optimization, featuring speakers from the hosts' team.
Final Note
As the hosts humorously riff on the episode's content in the closing remarks, the underlying message remains clear: benchmarks, while popular, should be navigated thoughtfully and supplemented with robust internal analytics to truly drive business success.
#KeepAnalyzing
