HBR On Strategy: The Right Way to Launch an AI Initiative
Release Date: May 7, 2025
Host: Curt Nickisch, Harvard Business Review
Guest: Yavor Bozhinov, Assistant Professor at Harvard Business School and former data scientist at LinkedIn
Description: In this episode, Yavor Bozhinov delves into the complexities of launching successful AI initiatives within organizations. Drawing from his extensive experience and research, Bozhinov provides actionable insights to help leaders navigate the high failure rates of AI projects and ensure their AI strategies deliver tangible value.
1. Understanding the High Failure Rate of AI Projects
Yavor Bozhinov opens the discussion by addressing a startling statistic: approximately 80% of AI projects fail (01:52).
Bozhinov: "I think it begins with the fundamental difference that AI projects are not deterministic like IT projects... This adds this layer of complexity and this uncertainty." (02:09)
Unlike traditional IT projects, whose behavior is largely deterministic, AI projects are built on probabilistic algorithms that can yield different results even from identical inputs. This inherent unpredictability is a major contributor to the high failure rate.
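The nondeterminism described here can be sketched with a toy example (not from the episode): two training runs on identical data, differing only in random seed, end up with different models.

```python
import math
import random

def train_logistic(data, seed, epochs=50, lr=0.1):
    """Toy logistic regression fit by SGD; the seed controls both the
    random weight initialization and the order examples are visited in."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                 # gradient step on weight
            b -= lr * (p - y)                     # gradient step on bias
    return w, b

# Identical input data, two runs: only the random seed differs.
data = [(0.5, 0), (1.5, 1), (2.5, 1), (-1.0, 0), (3.0, 1), (0.2, 0)]
run_a = train_logistic(list(data), seed=1)
run_b = train_logistic(list(data), seed=2)
# run_a and run_b are different (w, b) pairs despite identical data.
```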
2. Reasons Behind AI Project Failures
Bozhinov identifies several factors that lead to the downfall of AI initiatives:
- Project Selection: Choosing projects that lack potential value or relevance can lead to early failures. Without strategic alignment, AI projects may fizzle out despite substantial investments.
- Data and Algorithmic Accuracy: Building algorithms with insufficient data or low accuracy undermines their effectiveness. For instance, an AI designed to predict customer churn may fail if it cannot accurately identify at-risk customers.
- Bias and Fairness: Even accurate algorithms can falter if they embed biases, leading to unfair outcomes that erode trust among users.
- Lack of User Trust: As Bozhinov's experience with an AI tool at LinkedIn shows, even highly efficient products can suffer from low adoption rates if users do not trust them.
Bozhinov: "If you build it, they will not come." (04:11)
This highlights the critical importance of not only building effective AI but also ensuring it gains user trust and acceptance.
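The bias point above can be made concrete with a minimal fairness check: comparing an algorithm's positive-outcome rate across groups (often called demographic parity). The records, field names, and 10-point threshold below are illustrative assumptions, not examples from the episode.

```python
def approval_rate(records, group):
    """Share of records in `group` that received a positive outcome."""
    outcomes = [r["approved"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical audit log of an approval algorithm's decisions.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = abs(approval_rate(records, "A") - approval_rate(records, "B"))
flagged = gap > 0.10  # flag if rates differ by more than 10 percentage points
```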
3. Selecting the Right AI Project or Use Case
The selection phase is crucial and often mishandled. Bozhinov emphasizes the need for aligning AI projects with business impact rather than the allure of cutting-edge technology.
Bozhinov: "I always encourage people to start with the impact first... data scientists don't always understand the business, they don't understand the strategy." (06:02)
Key considerations include:
- Strategic Alignment: Ensuring the AI initiative aligns with the organization's broader strategic goals.
- Impact vs. Technology: Prioritizing projects based on their potential business impact rather than the novelty of the technology involved.
4. Assessing Feasibility and Ethical Implications
Once a project is selected based on impact, assessing its feasibility involves evaluating data availability, infrastructure, and ethical considerations.
Bozhinov: "You have to think about privacy, you have to think about fairness, you have to think about transparency." (06:23)
Ethical AI practices are paramount. Addressing these factors from the outset prevents costly adjustments later and fosters trust among stakeholders.
5. Building Trust Through User-Centric Design
Trust is multifaceted, encompassing both the algorithm's reliability and the users' confidence in its developers.
Bozhinov: "Do I trust the developers... that the people designing the algorithm actually listen to me." (08:37)
Understanding the intended users—whether internal employees or external customers—is essential. Internal projects benefit from close collaboration with employee users, while external projects may rely more on customer feedback and experimentation.
6. Balancing Speed and Effectiveness in AI Development
Organizations often grapple with the tension between rapid experimentation and ensuring ethical, effective outcomes.
Bozhinov: "It's about figuring out what is the infrastructure you need to be able to do that type of experimentation really, really rapidly, but also figuring out how can you do that in a really safe way." (12:08)
Strategies include:
- Rapid Experimentation: Implementing infrastructure that supports quick iterations and learning.
- Safe Testing Environments: Utilizing alpha or beta testers to trial new features without jeopardizing the brand.
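One common pattern for safe alpha/beta rollouts is deterministic bucketing: hash each user into a stable bucket and enable the new feature only for a small slice, widening the slice as confidence grows. This is a generic sketch of that pattern, not a description of any system discussed in the episode.

```python
import hashlib

def rollout_bucket(user_id: str, feature: str) -> int:
    """Deterministically map a user to a bucket in [0, 100) for this feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, feature: str, percent: int) -> bool:
    """Enable the feature for a stable `percent`-sized slice of users."""
    return rollout_bucket(user_id, feature) < percent

# Start with a 5% beta cohort; widening to 20% later keeps the original
# 5% enabled, so no tester loses the feature mid-experiment.
beta_users = [u for u in ("u1", "u2", "u3") if is_enabled(u, "new-ranker", 5)]
```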
7. Effective Evaluation and Hypothesis Testing
Evaluation is critical to determine whether an AI initiative meets its intended goals. Bozhinov uses Etsy’s infinite scroll feature as a case study to illustrate common pitfalls in hypothesis testing.
Bozhinov: "Whenever you're running these experiments and developing these AI products, you want to think about not just about the minimum viable product, but really what are the hypotheses that underlie the success of this?" (17:15)
Key lessons include:
- Clear Hypotheses: Defining specific hypotheses to test ensures that experiments are purposeful and insights are actionable.
- Efficient Testing: Leveraging simpler, parameter-based experiments can validate hypotheses without extensive resource expenditure.
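A simple parameter-based experiment is typically read out with a standard A/B test. The sketch below applies a two-proportion z-test to made-up conversion counts; it is a generic illustration of hypothesis testing, not Etsy's actual analysis.

```python
import math
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    control (a) and treatment (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothesis: the new feature lifts conversion (all numbers invented).
z, p = two_proportion_z(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
significant = p < 0.05  # reject the null at the usual 5% level
```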
8. Auditing AI Algorithms for Unintended Consequences
Post-deployment, ongoing monitoring and auditing are essential to identify and address unintended effects of AI systems.
Bozhinov: "Audit... give you a better sense of how your organization works." (19:33)
Using LinkedIn’s "People You May Know" algorithm as an example, Bozhinov explains how audits revealed long-term impacts on job applications and placements—effects that were not initially anticipated.
Bozhinov: "It's not like algorithms live in isolation. They have these types of knock-on effects..." (22:06)
Audits help organizations understand the broader ecosystem in which their AI operates, ensuring sustained value and ethical integrity.
9. Main Takeaways for Leaders
Concluding the discussion, Bozhinov offers strategic insights for leaders embarking on AI initiatives:
Bozhinov: "AI projects are much harder than pretty much any other project that a company does. But also the payoff and the value that this could add is tremendous." (23:41)
Key takeaways include:
- Investment in Infrastructure: Building robust infrastructure supports effective experimentation and iteration.
- Strategic Planning: Thorough planning across multiple stages reduces the likelihood of failure and enhances project success.
- Continuous Improvement: Viewing AI initiatives as cyclical processes allows organizations to adapt and evolve, maximizing long-term benefits.
Conclusion
Yavor Bozhinov’s insights underscore the intricate challenges of launching AI initiatives while highlighting the substantial rewards they offer. By meticulously selecting projects, emphasizing ethical considerations, fostering user trust, and implementing rigorous evaluation and auditing processes, organizations can navigate the complexities of AI strategy to achieve sustainable success.
For more insights and strategies on business and management, visit hbr.org.
