Insights Unlocked Podcast Episode Summary
Episode: Why AI in user research isn’t replacing real people (yet)
Guest: Mario Callegaro (Founder, Callegaro Research; former Google, QUANT UX board member)
Hosts: Amrit Batu (Principal CX Consultant, UserTesting), Nathan Isaacs (Content Marketing Manager, UserTesting)
Date: December 15, 2025
Length: ~47 min
Episode Overview
This episode explores how artificial intelligence (AI) is reshaping user research: its potential, its limitations, and why humans remain irreplaceable (for now). Mario Callegaro brings deep expertise from Google, academia, and consulting to outline how AI tools fit into the research process, the challenges of relying on synthetic users, and practical advice for research and experience teams navigating the fast-evolving AI landscape.
Key Discussion Points and Insights
1. Mario’s Introduction to AI in Research
[02:34–04:33]
- Mario’s early AI exposure came at Google, most notably while working on Gemini Cloud Assist.
- He conducted research with engineers and customers, learning firsthand what AI could (and couldn't) do to assist cloud management tasks.
“We had a lot of...internal discussion. We were also testing the tool early on as dog fooders, as we say at Google.”—Mario Callegaro [02:41]
2. Initial Reactions to AI from Users and Stakeholders
[04:34–08:06]
- Engineers expected AI to match the expertise of a senior engineer, which set a high bar, though they were also understanding about its limitations.
- Novice users valued AI’s ability to distill key documentation and provide guided summaries: a “TLDR,” but with source links for deeper dives.
- Users compared AI across platforms, bringing expectations about transparency and performance with them.
“They were very...strict about the quality of the answers...But at the same time, they were also very forgiving.”—Mario Callegaro [04:58]
3. Where AI Adds Value in the Research Workflow
[08:06–16:13]
Mario introduces a framework (credited to Yong Wei Yang of Google DeepMind) that breaks research into three phases:
A. Planning
- Idea generation, refining research questions, and building domain expertise.
- Using AI to summarize literature and brainstorm approaches.
B. Execution
- AI-assisted creation of questionnaires and interview guides, and even generation of synthetic personas or samples.
- Tools can protect respondent privacy by anonymizing voices/videos.
C. Activation
- AI can draft summaries and reports and tailor communication for different audiences (VPs, product teams, external blogs), saving significant time; a hypothetical sketch of this step follows the quote below.
- Helpful for those less confident in writing or for non-native speakers.
“If you have all the research done, you can feed it to an AI...the output was actually pretty good...it would have taken me a lot of time to write something from 5,000 words of a more academic paper to a blog post in more conversational language...”
—Mario Callegaro [14:17]
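As a concrete illustration of that activation step, here is a minimal sketch of feeding a finished write-up to an LLM and asking for audience-specific rewrites. It assumes the OpenAI Python SDK and an API key in the environment; the model name, file name, and prompt wording are placeholders, not tools or prompts mentioned in the episode.

```python
# Minimal sketch: feed a finished research write-up to an LLM and ask for
# audience-specific rewrites (VP brief, product team notes, external blog).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name, file name, and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("study_findings.md") as f:  # hypothetical completed research report
    findings = f.read()

audiences = {
    "vp_brief": "Rewrite these findings as a five-bullet executive brief for a VP.",
    "product_team": "Rewrite these findings as concrete action items for the product team.",
    "external_blog": "Rewrite these findings as a ~300-word conversational blog post.",
}

for name, instruction in audiences.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your team has access to
        messages=[
            {"role": "system", "content": "You help UX researchers communicate findings clearly."},
            {"role": "user", "content": f"{instruction}\n\n{findings}"},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

The researcher still reviews and edits each version; as Mario frames it, the time saved is in producing the first draft, not in exercising final judgment.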
4. AI as Helper, Not Replacement; The Rise of Prompt Engineering
[16:13–23:13]
- AI is “helpful” at every stage, but researchers must learn prompt engineering, a new “language” that is as vital as traditional search skills.
- Prompting significantly affects the quality and depth of AI answers (an illustrative example follows the quote below).
- Each AI tool has its own “personality” that shapes its language and tone.
- Staying current and iterating on prompts is crucial, but the pace of tool evolution can be overwhelming.
“Prompting is like a different kind of language...Now the prompt...makes a massive difference in the quality of the answers...we need to learn this new language.”—Mario Callegaro [16:44]
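To make the prompting point tangible, the sketch below contrasts a bare request with a structured prompt that adds role, context, and an output format. The structure and wording are illustrative assumptions, not a template from the episode; sending both prompts to the same model and comparing the answers shows the difference Mario describes.

```python
# Illustrative sketch: the same request, bare vs. structured. All wording here
# is hypothetical; the point is that role, context, and output-format
# instructions change the quality, depth, and length of the answer.
def build_prompt(task: str, role: str = "", context: str = "", output_format: str = "") -> str:
    """Assemble a structured prompt from optional role/context/format components."""
    parts = []
    if role:
        parts.append(f"Role: {role}")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {task}")
    if output_format:
        parts.append(f"Output format: {output_format}")
    return "\n".join(parts)


bare_prompt = "Summarize these interview transcripts."

structured_prompt = build_prompt(
    task="Summarize these interview transcripts.",
    role="You are a senior UX researcher analyzing qualitative data.",
    context="Eight interviews with cloud engineers about AI-assisted troubleshooting.",
    output_format="Five themes, each with one supporting verbatim quote and one open question.",
)

print(structured_prompt)
```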
5. Efficiency vs. Quality: The Human Factor Stays Essential
[21:26–29:46]
- While AI increases speed and efficiency, poor inputs (“rubbish in, rubbish out”) can amplify bias and errors.
- AI cannot yet capture emotional nuance, context, or tone the way a human can, which matters especially in qualitative research.
“Let’s say you give a transcript to an AI to analyze. Well, you lose all the audio piece, the emotion piece. And sometimes you can say the same thing with a different tone. It means the opposite. But yeah, the transcript doesn’t catch that.”
—Mario Callegaro [28:35]
6. Synthetic Users: Promise, Pitfalls, and Current Limitations
[29:46–39:45]
Mario outlines the three big buckets of synthetic data:
- Data Boosting/Imputation: Augmenting original datasets with synthetic data for scale.
- Fully Synthetic Data: Generating entire datasets with AI—no real user input.
- Synthetic Personas: Simulated user archetypes you can “interview” or analyze interactively.
Risks and issues:
- Mixed results: AI doesn’t reliably replicate human behavior across all scenarios.
- Reduced variability: Synthetic samples lack the nuanced variance of real human data, which is critical for insights (a toy illustration follows this section).
- Bias: Western and English-language data are overrepresented, with notable bias against minority or underrepresented groups.
- Transparency: Black-box methods and lack of reproducibility pose trust challenges.
“Studies comparing silicon and human samples show some replications but many non-replications. So LLM cannot be assumed to mimic human behavior reliably across items and across countries.”—Mario Callegaro [35:38]
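To make the reduced-variability concern concrete, here is a toy numerical sketch. All numbers are invented for illustration and are not from the episode or the studies Mario references; the point is simply that a synthetic panel clustering around the typical answer can match a real sample’s average while hiding most of its spread.

```python
# Toy illustration of the "reduced variability" risk: two samples with similar
# means can have very different spreads. All numbers are invented for
# illustration; they are not from the episode or any study it cites.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-5 satisfaction ratings from real respondents (wider spread).
human_sample = rng.choice([1, 2, 3, 4, 5], size=500, p=[0.05, 0.10, 0.20, 0.35, 0.30])

# Hypothetical fully synthetic respondents clustering around the modal answer.
synthetic_sample = rng.choice([3, 4, 5], size=500, p=[0.30, 0.60, 0.10])

for label, sample in [("human", human_sample), ("synthetic", synthetic_sample)]:
    print(f"{label:9s} mean={sample.mean():.2f}  std={sample.std():.2f}")
```

Two panels can report nearly the same average score while one contains far less of the respondent diversity needed to surface edge cases and underrepresented views.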
7. Practical Opportunities for AI in Research Right Now
[40:07–42:01]
- AI is best leveraged for synthesizing existing research assets (reports, studies, open-ended survey responses), helping new researchers get up to speed efficiently.
- Use AI to pilot or pre-test instruments, not as the sole data source.
8. Expert Advice to Teams Experimenting with AI in Research
[42:01–46:00]
- Always review data usage and privacy terms when engaging AI, especially with proprietary or sensitive data.
- Use AI to reproduce past studies you know well—compare outputs for accuracy and quality.
- Keep testing, iterating, and learning; don’t wait for “perfect” tools to get started.
“Do not wait to experiment, don’t wait, but do it carefully...try to reproduce research which you already did...and then keep messing around.”—Mario Callegaro [42:22]
Notable Quotes & Memorable Moments
- On the Limits of AI-Generated Insights:
“Are we diluting down the message or missing some insights? That might be very important.”
—Mario Callegaro [29:44]
- On Prompt Engineering as a Vital Skill:
“Prompting is like a different kind of language...the way you prompt makes a massive difference in the quality of the answers, in the depth, in the length.”
—Mario Callegaro [16:44]
- On Synthetic Data’s Reliability:
“LLM cannot be assumed to mimic human behavior reliably across items and across countries.”
—Mario Callegaro [35:45]
- On the AI ‘Helper’ Role:
“The AI can help with each of the different phases.”
—Amrit Batu [16:13]
Timestamps for Key Segments
- Mario’s AI Origin Story – [02:34–04:33]
- Stakeholder Reactions to Early AI – [04:34–08:06]
- The Three Phases of Research & AI’s Role – [08:06–16:13]
- Prompt Engineering and Researcher Skills – [16:13–23:13]
- Human Nuance vs. AI in Research Outputs – [21:26–29:46]
- Synthetic Users and their Limitations – [29:46–39:45]
- Practical Use Cases for AI – [40:07–42:01]
- Mario’s Closing Advice for Teams – [42:01–46:00]
Flow and Tone
The conversation is candid and accessible, blending deep technical insight with practical, real-world advice. The hosts meet Mario’s expertise with curiosity, asking both high-level and nuanced questions, and Mario balances optimism about AI’s potential with clear-eyed realism about its current limits.
Takeaways for Listeners
- AI is transforming user and market research, but it isn’t poised to replace human researchers, especially where nuance, bias, and context matter.
- Prompt engineering is an essential, evolving skill for the modern researcher.
- Synthetic users and AI-generated data show promise for early-stage testing and insight generation, but quality, transparency, and variability are significant concerns.
- Get hands-on: experiment, validate outputs for your own context, and always prioritize responsible data use.
Learn more about Mario Callegaro: [LinkedIn] or [calar.io]
For more resources, recordings, and future episodes, visit [usertesting.com/podcast].
