The AI Daily Brief: Artificial Intelligence News and Analysis
Episode: A Framework for Choosing Winning AI Use Cases [Agent Readiness Part 3]
Date: November 9, 2025
Host: Nathaniel Whittemore (NLW)
Guest: Nufar Gaspar
Episode Overview
This episode, the finale of a three-part “Agent Readiness” series, delivers a hands-on framework for identifying, evaluating, and managing AI agent use cases within organizations. Host Nathaniel Whittemore and guest Nufar Gaspar (Superintelligent) move beyond theory, laying out steps to discover promising use cases, select and balance them as an "investment portfolio," and rigorously track results to maximize real ROI. Practical, hard-earned advice makes this episode valuable for leaders, technologists, and anyone operationalizing AI in the enterprise.
Key Discussion Points & Insights
Framing Use Case Readiness (01:41)
- The focus is on finding genuinely impactful opportunities for AI agents, not just deploying agents for the sake of it.
- Many valuable use cases are not obvious to leaders; they're best sourced bottom-up from employees closer to the business processes.
Quote
"In many cases what I'm hearing is that like a CEO or a company leader will go into a room and they will say we need to build an agent, let's build an agent. And this is probably one of the worst way to source a use case idea because you'll probably build the wrong agent."
— Nufar Gaspar (02:41)
Step 1: Identifying Good AI Agent Use Cases (03:50)
- Best Sources: Bottom-up from frontline employees or via mid-level managers; these teams know the needs and feasibility best and must be bought in.
- Favorable Traits:
- Complex, variable decision-making (e.g., resolving unique customer issues)
- Bottlenecks where humans are required but in low supply (e.g., contract review)
- Areas needing 24/7 response (support roles)
- High personalization opportunities (targeted outreach)
- Some tolerance for error (avoid mission-critical, error-averse domains like payroll)
- Repetitive, tedious tasks employees want automated
- Well-defined, documented processes (agents cannot replace tribal knowledge)
- Measurable outcomes (goal-driven implementation is essential)
- Unfavorable Use Cases: Fixed processes easily mapped with decision trees—use traditional automation, not agents.
- Ideation Sprint: Recommended as a practical step to gather ideas, educate staff, and quickly prune non-agent-appropriate suggestions.
Quotes
"If you can describe a fixed process or a decision tree with limited amount of branches, you shouldn't build an agent. You should just go for the simpler technology because the agent will probably not be worth it."
— Nufar Gaspar (05:31)
"You only focus on use cases where you do have some tolerance for errors."
— Nufar Gaspar (07:09)
Step 2: Selecting & Prioritizing Use Cases (09:55)
- Use a framework focused on feasibility, investment required, and value potential.
- Prioritize "low-hanging fruit": use cases with high feasibility and low investment, or use cases that are critical for the organization.
- Start small for momentum and learning, then expand to higher-stake use cases.
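The "holy trinity" selection idea (feasibility, investment, value) can be sketched as a simple scoring pass. This is a minimal illustration, not the episode's method: the 1–5 scales, the weighting, and the candidate scores are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    feasibility: int  # 1 (hard) .. 5 (easy) -- illustrative scale
    investment: int   # 1 (cheap) .. 5 (expensive)
    value: int        # 1 (low) .. 5 (high)

    def score(self) -> int:
        # Hypothetical weighting: reward feasibility and value,
        # penalize required investment.
        return self.feasibility + self.value - self.investment

# Candidate names echo the episode's examples; the scores are made up.
candidates = [
    UseCase("FAQ/policy bot", feasibility=5, investment=2, value=3),
    UseCase("Contract review agent", feasibility=3, investment=4, value=5),
    UseCase("Payroll automation", feasibility=2, investment=4, value=2),
]

# "Low-hanging fruit" first: highest score wins.
for uc in sorted(candidates, key=UseCase.score, reverse=True):
    print(f"{uc.name}: {uc.score()}")
```

Even a crude score like this forces the conversation the episode recommends: every candidate gets rated on all three axes before anyone commits to building.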
Classic Use Cases (10:52)
- FAQ/policy bots: modern agents able to handle nuance.
- Company knowledge retrieval: universal need, often extending beyond off-the-shelf solutions.
- Operational workflow automation: routine tasks, status reporting, etc.
- Market watchers: monitoring competition/regulation.
- Vertical cases (customer support, software engineering), content generation for marketing/sales, regulatory compliance, data cleaning, and industry-specific growth opportunities.
Quotes
"Selection should always be driven by the holy trinity: feasibility, investment, and value."
— Nufar Gaspar (10:10)
Step 3: Managing the Use Case Portfolio (12:38)
- Treat agent projects as an investment portfolio:
- Efficiency-driven (cost-saving) vs. growth-driven (new value creation)
- Balance between both; don't over-index on efficiency alone.
- Use a version of the Boston Consulting Group matrix: value vs. complexity.
- Avoid high-complexity, low-value projects.
- Start with easy wins but aim for a mix including “moonshots”—high-value, high-complexity projects led by specialized teams.
- Consider balance along vertical/horizontal and build/buy axes.
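The value-vs-complexity matrix above can be sketched as a 2x2 lookup. The quadrant labels and recommendations here are a hypothetical rendering of the episode's guidance, not its exact terminology:

```python
def quadrant(value: str, complexity: str) -> str:
    # Hypothetical BCG-style classification of agent projects
    # along the value and complexity axes discussed above.
    grid = {
        ("high", "low"):  "quick win: do first for momentum",
        ("high", "high"): "moonshot: staff a specialized team",
        ("low", "low"):   "filler: automate if convenient",
        ("low", "high"):  "avoid: high complexity, low value",
    }
    return grid[(value, complexity)]

print(quadrant("high", "low"))
print(quadrant("low", "high"))
```

The point of the matrix is the bottom-right cell: high-complexity, low-value projects are the ones to screen out before they consume a team's budget.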
Quotes
"You want to balance between efficiency or the cost-focused use cases and also those that are more focused on the growth... often the biggest value is on the right hand side of the growth."
— Nufar Gaspar (13:37)
Step 4: Tracking & Measuring Use Case Impact (17:29)
- The build phase is important but left for another episode; the focus here is on measurement.
- Continuous measurement is critical: many pilots underperform in production, and rushing to production without constant monitoring risks disappointment.
- ROI calculation: Return (benefit, usage, impact, minus error/refund costs) minus total investment (all build, buy, usage, maintenance, owner, and tool costs).
- Incorporate buffers for investment estimates; costs may rise with agent sophistication despite falling compute/model prices.
- Periodically reevaluate—many hidden costs lurk.
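The ROI guidance above can be sketched as a small formula: net return (benefit minus error/refund costs) minus total investment, padded with a buffer to stay conservative. The cost categories, the default buffer, and the numbers below are illustrative assumptions, not figures from the episode.

```python
def agent_roi(gross_benefit: float, error_costs: float,
              build_cost: float, run_cost: float,
              maintenance_cost: float, buffer: float = 0.25) -> float:
    """Hypothetical rendering of the episode's ROI advice:
    be conservative on every judgment call, so pad the
    investment estimate with a buffer."""
    net_return = gross_benefit - error_costs
    investment = (build_cost + run_cost + maintenance_cost) * (1 + buffer)
    return net_return - investment

# Illustrative numbers only:
print(agent_roi(gross_benefit=500_000, error_costs=40_000,
                build_cost=120_000, run_cost=60_000,
                maintenance_cost=30_000))  # → 197500.0
```

Rerunning this calculation periodically, with updated usage and maintenance numbers, is how the hidden costs the episode warns about get surfaced.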
Quotes
"We often see that once something is deployed to the hands of actual users... things go haywire and yield completely different value than expected."
— Nufar Gaspar (18:04)
"I provided you here with a formula of how you measure the agent ROI... And whenever there is a judgment call, I encourage you to be conservative rather than anything else so you can get a realistic view..."
— Nufar Gaspar (18:31)
Memorable Moments & Further Insights
Knowledge Retrieval—A Surprising High-Value Use Case (20:46)
- Internal knowledge sharing/retrieval is a "gateway drug" for AI, helping staff realize new possibilities.
- Delivers double value: it solves practical needs and triggers broader organizational buy-in.
Quote
"We see this so frequently as a gateway drug for people... it is like a light bulb moment for a lot of folks when they use those tools."
— Nathaniel Whittemore (21:17)
The Challenge of Defining ROI in AI Projects (22:51–26:11)
- Traditional SaaS ROI metrics don't fully capture agent value.
- Many companies are still searching for new systems to measure agent impact more accurately.
- CEOs are tracking ROI more aggressively: expected timelines have halved year-over-year, with real examples like Morgan Stanley now publicly ROI-positive.
- Measuring is complicated by attribution, changing table stakes, and the difficulty of quantifying incremental efficiency.
Quote
"There's such an assumption that... these things are happening. To your point that so much of this is table stakes that they have to figure out how to go beyond that sort of layer one analysis."
— Nathaniel Whittemore (27:07)
Efficiency vs. Growth—No Either/Or (26:11–28:40)
- Efficiency alone won’t differentiate; as AI becomes “table stakes,” leadership must look for growth opportunities.
- Clients are demanding more for less—AI is now an operational expectation, not a unique advantage.
- The real innovation (and the real returns) lies in exploring uses that create new value, often in long-tail, previously neglected opportunities.
Quotes
"You're not going to get gold stars for having 50% more marketing content output, when everyone has 50% more marketing content output, that's just the way that it's going to be."
— Nathaniel Whittemore (27:16)
"In most cases the big bucks are hidden in the ghosts... all of a sudden you can start deploying agents and you have so many millions hidden just in things that you never prioritize..."
— Nufar Gaspar (28:43)
Concrete To-Do List (19:58)
- Identify only agent-relevant, worthy use cases—not “vibes.”
- Select those with high ROI and visibility; manage as a balanced portfolio.
- Track impact rigorously and adapt over time; measurement and adaptation are as vital as initial selection.
Summary Table of Timestamps
| Segment | Timestamp |
|------------------------------------------|--------------|
| Framing the Use Case Discussion | 01:41 |
| Identifying Promising Use Cases | 03:50–09:55 |
| Prioritizing & Portfolio Approach | 09:55–12:38 |
| Use Case Examples & Balance | 10:52–14:15 |
| Tracking & Measuring ROI | 17:29–20:46 |
| Knowledge Retrieval as Key Use Case | 20:46–22:49 |
| The ROI Challenge | 22:51–26:11 |
| Efficiency vs Growth | 26:11–28:40 |
Final Thoughts
The crucial takeaway: Enterprise AI success is built on choosing use cases where agents genuinely add value—balancing quick wins, strategic growth, and rigorous ROI tracking. The agent “portfolio” model, coupled with new measurement practices and broad employee involvement, is emerging as a best practice as AI rapidly shifts from novelty to necessity.
"You have to measure because when you don't measure, it's just a matter of vibes and it's not the way to make business decisions."
— Nufar Gaspar (25:29)
Feedback and questions? Reach out via Spotify, YouTube, or email to let the hosts know what you'd like to hear next.