Podcast Summary: Successfully Coding with AI in Large Enterprises
Host: Claire Vo | Guest: Zach Davis, Director of Engineering at LaunchDarkly
Introduction
In the July 21, 2025 episode of How I AI, host Claire Vo speaks with Zach Davis, Director of Engineering at LaunchDarkly. The discussion centers on integrating AI tools effectively within large-scale engineering teams, emphasizing centralized rules, technical-debt management, and team training. This summary covers their key discussions, offering practical insights for engineering leaders aiming to harness AI's potential in enterprise environments.
1. AI Tools at LaunchDarkly
Zach Davis shares an extensive list of AI tools employed by LaunchDarkly's technology team, highlighting their diverse applications across design, product, and engineering.
"[...] we're exploring a bunch of things, so lovable v0figma make we're using. On the product side, obviously we're using Chat PRD and then on the engineering side for code, Heavy Cursor users, Heavy Devin users, we're also using now, like Cursor's background agent."
— Zach Davis [03:09]
Davis emphasizes the experimental approach the team takes, trying various tools to determine what best fits their workflows. He mentions specific tools like Cursor, Devin, Claude Code, and Copilot for code reviews, illustrating the multifaceted AI ecosystem at LaunchDarkly.
2. Organizational and Cultural Approaches
Claire Vo and Zach Davis discuss the necessity of dedicated leadership in driving AI adoption within large engineering teams. Davis underscores the importance of having a person close to the codebase to oversee AI integration effectively.
"I think having a person who's close to the code helps a lot because you don't really know what's working and what's not working unless you're in the code, at least on some basis."
— Zach Davis [04:37]
They highlight the cultural shift required to embrace AI tools, moving away from isolated experimentation to a more structured, team-wide implementation. Davis reflects on his initial skepticism and how hands-on experience with LaunchDarkly's codebase transformed his perspective on AI's potential.
3. Implementing Centralized Rules and Documentation
A significant portion of the discussion focuses on creating centralized rules and comprehensive documentation to ensure AI tools operate cohesively across the organization.
"What's good for humans is also good for LLMs. And so I really started with how do we make sure that the repo is well set up for humans to know how to work in it?"
— Zach Davis [00:22]
Davis explains the consolidation of various documentation sources into a unified docs directory within the repository. This approach not only facilitates human access but also enables Large Language Models (LLMs) to interact more effectively with the codebase.
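The episode doesn't spell out the exact layout, but the idea of a single docs directory serving humans and LLMs alike can be sketched with a short script. Everything below (the docs/ path, the INDEX.md file, the heading convention) is a hypothetical illustration, not LaunchDarkly's actual setup.

```python
# Hypothetical sketch: build docs/INDEX.md so humans and coding agents
# share one entry point into the repository's documentation.
# Paths and conventions are assumptions, not LaunchDarkly's actual layout.
from pathlib import Path

DOCS_DIR = Path("docs")            # single, consolidated documentation root
INDEX_FILE = DOCS_DIR / "INDEX.md"

def first_heading(md_file: Path) -> str:
    """Use the first Markdown heading as the document's title, else the filename."""
    for line in md_file.read_text(encoding="utf-8").splitlines():
        if line.startswith("#"):
            return line.lstrip("#").strip()
    return md_file.stem

def build_index() -> None:
    entries = sorted(p for p in DOCS_DIR.rglob("*.md") if p != INDEX_FILE)
    lines = ["# Documentation index", ""]
    lines += [f"- [{first_heading(p)}]({p.relative_to(DOCS_DIR)})" for p in entries]
    INDEX_FILE.write_text("\n".join(lines) + "\n", encoding="utf-8")

if __name__ == "__main__":
    build_index()
```

Pointing each tool's rules file at an index like this keeps "what's good for humans" and "what's good for LLMs" literally the same artifact.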
"Instead of a cursorrules, I have this agentsrules and the idea is to kind of centralize all of this knowledge in one place."
— Zach Davis [08:07]
By centralizing rules, LaunchDarkly avoids duplicating efforts across multiple tools, ensuring consistency and scalability in AI interactions.
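Davis doesn't describe the mechanics on air, but one simple way to keep a single source of truth is to maintain one central rules file and copy it into each location a given tool reads from. The file names below (docs/agent_rules.md, .cursorrules, AGENTS.md) are illustrative assumptions.

```python
# Hypothetical sketch: mirror one central agent rules file into the locations
# individual tools look for, so the knowledge lives in exactly one place.
# File names are illustrative; adjust to whatever each tool actually expects.
from pathlib import Path

CENTRAL_RULES = Path("docs/agent_rules.md")   # assumed single source of truth
TOOL_TARGETS = [
    Path(".cursorrules"),                     # e.g. a Cursor rules file
    Path("AGENTS.md"),                        # e.g. a generic agent-facing copy
]

def sync_rules() -> None:
    content = CENTRAL_RULES.read_text(encoding="utf-8")
    for target in TOOL_TARGETS:
        target.write_text(content, encoding="utf-8")
        print(f"synced {CENTRAL_RULES} -> {target}")

if __name__ == "__main__":
    sync_rules()
```

Running a sync like this from a pre-commit hook or CI step keeps the per-tool copies from drifting; edits only ever happen in the central file.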
4. Using AI to Reduce Technical Debt
Davis elaborates on leveraging AI to address and mitigate technical debt, a common challenge in large codebases.
"Technical debt is my favorite use case for AI to supercharge like a medium sized organization."
— Zach Davis [21:50]
He provides a concrete example where AI tools were used to clean up noisy test logs, identifying and prioritizing warnings that were previously overlooked due to their volume. This structured approach not only streamlines the debugging process but also enhances the overall code quality.
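The episode describes the outcome rather than the tooling, but the first step of that cleanup, turning a wall of warnings into a ranked list of work items, is easy to sketch. The log path and warning pattern below are assumptions about a typical front-end test run.

```python
# Hypothetical sketch: rank the most frequent warnings in a noisy test log
# so the cleanup can be broken into small, prioritized tasks for an agent.
import re
from collections import Counter
from pathlib import Path

LOG_FILE = Path("test-output.log")          # assumed path to CI test output
WARNING_RE = re.compile(r"Warning: (.+)")   # assumed warning format

def top_warnings(limit: int = 20) -> list[tuple[str, int]]:
    counts: Counter[str] = Counter()
    for line in LOG_FILE.read_text(encoding="utf-8", errors="replace").splitlines():
        match = WARNING_RE.search(line)
        if match:
            counts[match.group(1).strip()] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for message, count in top_warnings():
        print(f"{count:6d}  {message}")
```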
"I could slack Devin and say, Evan, can you pick up the next task in the front end test noise cleanup? I can do it here in cursor and watch it go."
— Zach Davis [32:38]
This method mirrors traditional team workflows, allowing AI tools to act as extended team members that handle repetitive and time-consuming tasks efficiently.
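The "pick up the next task" workflow implies a shared queue that both humans and agents can read. A plain Markdown checklist committed to the repo is one low-tech way to sketch that; the file name and checkbox format here are assumptions, not LaunchDarkly's actual process.

```python
# Hypothetical sketch: read a shared Markdown checklist and report the next
# unclaimed task, the same item a human would hand to Devin or Cursor.
from pathlib import Path

TASKS_FILE = Path("docs/test-noise-cleanup.md")   # assumed shared checklist

def next_open_task() -> str | None:
    for line in TASKS_FILE.read_text(encoding="utf-8").splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ]"):          # unchecked checkbox = open task
            return stripped[len("- [ ]"):].strip()
    return None

if __name__ == "__main__":
    task = next_open_task()
    print(task or "All tasks complete.")
```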
5. Enhancing Hiring Processes with AI
Beyond coding and technical workflows, Davis discusses the innovative use of AI in refining the hiring process at LaunchDarkly.
"I created this custom GPT. I gave it the rubrics and I gave it examples of good, good scorecards, bad scorecards, and gave it as much, kind of like you helped me write the prompt."
— Zach Davis [37:12]
By building a tailored custom GPT, Davis ensures that interview scorecards are consistent and adhere to established rubrics. The AI evaluates scorecards, provides ratings, and offers constructive feedback, thereby elevating the quality of candidate assessments.
"It also crafts a short Slack message that I can, if I want, just copy and paste and send to the person who created the scorecard."
— Zach Davis [38:35]
This automation not only standardizes the hiring feedback but also saves valuable time for engineering leaders, allowing them to focus on more strategic aspects of talent acquisition.
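Davis built this as a custom GPT inside ChatGPT, which is configured through the UI rather than code. As an illustrative stand-in, a similar scorecard reviewer can be sketched against the OpenAI API; the model name, rubric text, and output instructions below are all assumptions.

```python
# Hypothetical sketch of a scorecard reviewer: rate an interview scorecard
# against a rubric and draft a short Slack message for its author.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

RUBRIC = """A strong scorecard cites specific evidence from the interview,
ties each observation to a competency, and ends with a clear hire/no-hire
signal and a confidence level."""                    # placeholder rubric text

def review_scorecard(scorecard_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",                              # assumed model choice
        messages=[
            {"role": "system",
             "content": f"You review interview scorecards against this rubric:\n{RUBRIC}\n"
                        "Return a 1-5 rating, two sentences of feedback, and a short, "
                        "friendly Slack message the reviewer could send to the author."},
            {"role": "user", "content": scorecard_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_scorecard("Candidate seemed fine. Good communicator. Hire."))
```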
6. Challenges and Best Practices
The conversation acknowledges the hurdles in AI integration, particularly in configuring AI tools within complex, large-scale repositories.
"Devin is running its own machine, which has a lot of upside. [...] the downside that it takes a little time depending on your repo, it takes a little bit of time to actually set that machine up and get it running."
— Zach Davis [19:50]
Davis recommends an incremental approach to implementing AI tools, allowing teams to adapt gradually without overwhelming their existing workflows. He also stresses the importance of high-quality documentation and centralized rule sets to maximize AI tool effectiveness.
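The episode doesn't detail what Devin's machine setup involves. As a generic illustration of what "setting that machine up" usually means in practice, here is a hedged sketch of a bootstrap routine an agent workspace might run once; the commands and repo layout are assumptions about a typical JavaScript monorepo.

```python
# Hypothetical sketch: bootstrap routine an agent's workspace might run once,
# so later tasks start from a warm, test-ready checkout.
# Commands and paths are assumptions about a typical JS/TS monorepo.
import subprocess

SETUP_STEPS = [
    ["npm", "ci"],             # install pinned dependencies
    ["npm", "run", "build"],   # warm the build cache
    ["npm", "test"],           # confirm the suite runs cleanly once
]

def bootstrap() -> None:
    for step in SETUP_STEPS:
        print("running:", " ".join(step))
        subprocess.run(step, check=True)   # fail fast if any step breaks

if __name__ == "__main__":
    bootstrap()
```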
7. Conclusion
Claire Vo and Zach Davis wrap up the episode by reiterating the transformative potential of AI in large engineering organizations when approached thoughtfully. By fostering a culture of experimentation, centralizing knowledge, and utilizing AI to tackle both technical and operational challenges, engineering leaders can significantly enhance their team's efficiency and code quality.
Davis also shares his enthusiasm for the tool Windsurf, noting how quickly its agent workflow clicked for him.
"Within an hour, I think I was paying for it because I just. It really clicked for me and the agent workflow just really quick clicked and I was hooked."
— Zach Davis [42:22]
The episode concludes with actionable insights and a testament to the benefits of integrating AI as a collaborative extension of the engineering team.
Key Takeaways
- Centralized Documentation: Unified repositories enhance both human and AI tool interactions.
- Dedicated Leadership: Assigning responsibility for AI integration ensures structured and effective adoption.
- AI for Technical Debt: Automating mundane tasks with AI improves code quality and developer productivity.
- AI-Enhanced Hiring: Tailored AI models standardize and improve the hiring process, ensuring consistency and fairness.
- Incremental Implementation: Gradual adoption of AI tools prevents disruption and allows teams to adapt efficiently.
This episode serves as a masterclass for engineering leaders seeking to implement AI tools in large organizations, demonstrating practical strategies and the profound benefits of a well-executed AI integration plan.
