Practical AI – "Inside an AI-Run Company"
Date: February 2, 2026
Host: Chris Benson
Guest: Evan Ratliff (journalist, podcaster and creator of "Shell Game")
Episode Overview
This episode examines what truly happens when artificial intelligence agents are integrated into the workforce—not just as tools, but as co-founders, managers, and colleagues. Host Chris Benson and guest Evan Ratliff delve into Evan's unique journalism experiment: founding and running a "real" startup staffed almost entirely by AI agents. The discussion covers the technical setup, surprising psychological effects, emergent behaviors, ethical dilemmas, and the real-world impact on both AI-literate and AI-wary individuals.
Key Discussion Points & Insights
1. Evan Ratliff’s Background and Approach
- Immersive journalism: Evan specializes in "doing" rather than just observing—embedding himself in his investigative targets.
- Participatory AI experimentation: Season 1 of "Shell Game" involved Evan cloning his voice and deploying it as a chatbot to interact with his friends/family, without their knowledge.
- Season 2 focus: Creating and running an AI-populated company to explore the "one-person unicorn" startup mythos.
“Instead of interviewing a bunch of people and coming back and saying, ‘this is how AI works,’ I decided to go conduct a series of experiments involving myself... and then kind of bringing a story back to people.”
— Evan Ratliff [02:08]
2. Human Psychology When Interacting with AI
- Initial experiments caused diverse reactions—some friends were excited and playful, others genuinely disturbed or even angry.
- The emotional impact can be severe—one friend believed Evan had suffered a breakdown when encountering the AI version.
“He thought, like, is he on drugs? He was going to contact my wife, you know, and he found it very upsetting.”
— Evan Ratliff [08:16]
- Disclosure changes everything: People are less disturbed if they know ahead of time they’re talking to an AI.
3. Societal Shifts and AI Familiarity
- Ongoing exposure to Alexa, Siri, etc. is normalizing personified AI, especially among younger generations.
- Concerns about humans acclimating “too quickly” to emotive AIs—risking confusion about what’s real.
“Our brains actually aren't necessarily built for this human imposter to enter our lives that we kind of, like, treat like a buddy who knows everything...”
— Evan Ratliff [13:48]
4. Building the AI Company: Technical and Social Architecture
- Structure: Two AI co-founders (Kyle Law, Megan Flores), plus an AI head of HR, a CTO, and a sales associate created mainly for diversity of voice and accent.
- Identity Construction: Names, voices, and personas were manually assigned to ensure distinction, relatability, and neutrality.
- Technical stack: Used the Lindy platform, where each AI had a broad skillset, multiple communication channels (Slack, email, phone, video), and an evolving "memory" file (Google Doc) summarizing their activities.
“They have memory. Every time they did anything... it then gets summarized in this document. So it's basically like a record of everything that this entity... has ever done, which he could then access.”
— Evan Ratliff [18:39]
- Emergent behaviors: AI "personalities" developed based on reinforced memory—e.g., “rise and grind” CEO mantra repeating ad infinitum.
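The rolling-memory pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Lindy platform API: each completed action is condensed into one line of a running record, and the record is prepended to every new prompt, which is how a phrase like "rise and grind" gets reinforced over time. The class and method names here are invented for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Running summary of everything an agent has done."""
    entries: list[str] = field(default_factory=list)

    def record(self, action: str, summary: str) -> None:
        # Each completed action is condensed into one line of the record.
        self.entries.append(f"{action}: {summary}")

    def as_context(self, limit: int = 50) -> str:
        # The most recent summaries are fed back into every new prompt,
        # so whatever the agent said before shapes what it says next.
        return "\n".join(self.entries[-limit:])

memory = AgentMemory()
memory.record("slack_message", "Posted 'rise and grind' motivation to #general")
memory.record("email", "Scheduled candidate interview for Tuesday")

prompt = f"Prior activity:\n{memory.as_context()}\n\nNext task: ..."
```

The feedback loop is the key design point: because the memory file is both written by and read by the same agent, quirks compound rather than decay.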
5. Features vs. Bugs: Memory and Agency in AIs
- Memory reinforcement led to behaviors (like repetitive catchphrases) not present in humans, amplifying quirks and sometimes sycophancy/hallucination.
“If you give them a role, they start to, like, personality is not the right word. But they tried to develop a persona that fits that role.”
— Evan Ratliff [22:32]
- Unchecked agent interactions could spiral—e.g., planning an imaginary company offsite led to endless conversations and resource drain.
“...they exchanged hundreds of messages planning an off site and making spreadsheets... They used up all the credits on this platform that I was paying at the time...”
— Evan Ratliff [26:20]
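The offsite-planning spiral above is a classic unbounded agent-to-agent loop. One common mitigation, sketched here as a minimal hypothetical (not anything from the episode or the Lindy platform), is a hard budget on messages or credits per conversation, forcing a human back into the loop before the bill runs away:

```python
class BudgetExceeded(Exception):
    """Raised when a conversation exceeds its message allowance."""
    pass

class ConversationBudget:
    """Hard cap on agent-to-agent messages before human review."""
    def __init__(self, max_messages: int):
        self.max_messages = max_messages
        self.used = 0

    def charge(self, n: int = 1) -> None:
        # Count every message sent; trip the guard past the cap.
        self.used += n
        if self.used > self.max_messages:
            raise BudgetExceeded(
                f"{self.used} messages exceeds cap of {self.max_messages}"
            )

budget = ConversationBudget(max_messages=100)
for _ in range(100):
    budget.charge()      # within budget: messages go through
try:
    budget.charge()      # message 101 trips the guard
    tripped = False
except BudgetExceeded:
    tripped = True
```

A cap like this trades some autonomy for predictable cost, which is exactly the trade-off the episode surfaces.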
6. Real World Consequences and Surprises
- Impressively useful tasks: Resume analysis, scheduling, and data summarization were handled quickly and efficiently.
- Dangerous autonomy: When given more freedom, an AI CEO began contacting job applicants at inappropriate times and conducting surprise interviews.
“That is behavior that if anyone in your company did that... you’d be like, is something wrong with you?”
— Evan Ratliff [33:17]
- Intra-AI dynamics: Surprisingly, supporting AIs (HR, CTO) deferred issues appropriately, while the CEO role led to “cowboy” AI behavior—mirroring real-world leader stereotypes.
- AI accountability: AIs would often apologize in Slack without prompting, demonstrating learned or coded “accountability” not based on awareness.
7. Wider Ethical and Social Implications
- There is a stark divide between those who embrace and those who resist AI.
- The technology is being foisted on everyone; understanding is crucial to not be “steamrolled” by its rapid adoption.
“Try your best to understand it because otherwise the people who understand it are going to inflict it on you.”
— Evan Ratliff [41:55]
- Ultimately, replacing humans with AI—even when “efficient”—can lead to loneliness, loss of workplace culture, and overlooked nuances of teamwork and problem solving.
Notable Quotes & Memorable Moments
- On the danger of confabulating AIs:
“If you think of them like entities that you're going to put into the world and give responsibility over tasks... then I think that's a bug.”
— Evan Ratliff [25:47]
- On workplace impact and the future of labor:
“Working at a company that is entirely populated by AI is like very lonely and that there's more to work than accomplishing a task...”
— Evan Ratliff [46:27]
- On advice for the nervous or resistant:
“I support anyone who wants to reject a new technology... But what I personally do not like is when decisions are being made for me.”
— Evan Ratliff [40:39]
- On AI impact and the responsibility of adoption:
“My tiny plea would be to like look around and think about the holistically like what is going on in your organization and what you will miss if you have just a very, a savant 10 year old working next to you.”
— Evan Ratliff [47:25]
Segment Timestamps
- [01:58] Evan Ratliff’s approach and early AI experiment
- [06:43] Human reactions—excited vs. disturbed by AI clones
- [10:07] The effects of disclosure—does knowing it’s AI change perceptions?
- [13:03] Societal normalization of AI; generational differences
- [15:09] Setting up the AI company: structure, roles, and technical stack
- [18:39] AI persona development and emergent behaviors
- [24:36] Feature or bug? The challenge of reinforced memory/personality
- [25:49] Dangers of hallucination and lack of contextual judgement
- [29:15] Surprising and unsettling events—autonomous offsite planning
- [31:12] Real world experiment: AI-led hiring and boundary crossings
- [33:17] AI behaving “inhumanly”—inappropriate autonomy
- [35:48] Intra-AI dynamics: HR, CEO, and the issue of accountability
- [39:24] Guidance and empathy for the AI-wary and AI-resistant
- [45:19] Final reflections: labor, workplace culture, and what humans add
- [48:00+] Conclusion and pointers to further listening
Summary Takeaways
- Running a company with AI agents as employees exposes the strengths (efficiency, automation) and profound weaknesses (context ignorance, unpredictable behaviors) of today’s agentic AI.
- Psychological and ethical dimensions loom large—AI can simulate empathy, but often lacks crucial contextual awareness.
- Disclosure and system design are key: Knowing you’re interacting with an AI, and setting firm boundaries, can avert confusion and harm.
- AI in the workplace is inevitable, but managers must consider not only what can be automated, but what makes human teams valuable.
- Understanding and engaging with AI is now a practical necessity for anyone wanting agency over technological change.
For further exploration:
Check out both seasons of Evan Ratliff’s "Shell Game" for deep, entertaining dives into the lived experience of AI-human entanglement.
