Latent Space: The AI Engineer Podcast
Episode: Brex’s AI Hail Mary — With CTO James Reggio
Date: January 17, 2026
Hosts: Alessio (Kernel Labs), swyx (Latent Space)
Guest: James Reggio (CTO, Brex)
Episode Overview
This episode features James Reggio, CTO of Brex, offering a deep dive into Brex’s aggressive AI transformation across its product, operations, and internal workflows. The conversation covers how Brex structures its engineering and AI teams, the architecture of their agentic platform, approaches to multi-agent systems, internal AI adoption and upskilling, and reflections on the new realities—and risks—facing AI-first enterprises in financial services.
Key Discussion Points & Insights
James’s Journey: From Mobile to CTO
- James is a rare CTO with a mobile engineering and frontend background. He attributes his rise more to his experience as a founder than as a technologist.
- Quote:
“Working for somebody else becoming CTO is very much like a leadership and general business role as much as it is a technical role.” – James, [01:54]
- Brex leans into hiring ex- and future founders, touting a "Quitters Welcome" employee value proposition (EVP) that encourages entrepreneurial growth—either inside or outside the company.
Engineering Organization and AI Team Structure
- Brex’s engineering org has ~300-350 people, structured around product domains (card, banking, expense management, travel, accounting), each as a full-stack team.
- A dedicated centralized AI team (about 10 people) focuses on LLM applications and agentic product offerings.
- Pods mix young, "AI-native" engineers with experienced staff who know the “skeletons” in the codebase.
- Quote:
“You want to be very, very careful about talent density and very deliberate, like only hire when you absolutely need it.” – James, [08:19]
Cultural Dynamics
- AI is not seen as “special”—engineers are encouraged to transfer to the AI group if interested, but core revenue teams (e.g. the card team) are also prized.
- LLM and agentic coding tools are broadly used, with notable adoption among all engineering levels.
Brex’s AI/LLM Platform & Architecture
Pillars of AI Strategy ([00:00], [23:14], [23:32])
- Corporate AI: Adopting best-in-class AI tools across all business functions (procurement, experimentation, upskilling).
- Operational AI: Automating internal processes (fraud, KYC, card disputes, underwriting) to reduce cost of operation.
- Product AI: Enabling customers to include Brex in their own corporate AI stack, delivering agentic and AI-powered features externally.
Product Agent Platform
- Early investment in building a custom LLM gateway for prompt/version management and evaluation ([11:55]).
- A “simple is elegant” approach: custom frameworks for observability, data routing, and model switching, built before mature options like LangChain existed.
- Migration towards Mastra (for agent workflow ergonomics) and a homegrown multi-agent orchestration framework (for “agent networks”).
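The gateway idea above can be sketched minimally. This is a hypothetical illustration, not Brex’s actual API: the `PromptRegistry` name, the `{var}` templating, and the versioning scheme are all assumptions; TypeScript is used only because the stack mentions Mastra.

```typescript
// Minimal sketch of a prompt/version-managed LLM gateway.
// All names here are illustrative, not Brex's internal API.

type PromptTemplate = { version: number; template: string };

class PromptRegistry {
  private prompts = new Map<string, PromptTemplate[]>();

  // Register a new version of a named prompt; versions auto-increment.
  register(name: string, template: string): number {
    const versions = this.prompts.get(name) ?? [];
    const version = versions.length + 1;
    versions.push({ version, template });
    this.prompts.set(name, versions);
    return version;
  }

  // Render a pinned version, or the latest when no version is given.
  render(name: string, vars: Record<string, string>, version?: number): string {
    const versions = this.prompts.get(name);
    if (!versions?.length) throw new Error(`unknown prompt: ${name}`);
    const tpl = version
      ? versions.find((p) => p.version === version)
      : versions[versions.length - 1];
    if (!tpl) throw new Error(`unknown version ${version} of ${name}`);
    // Substitute {var} placeholders; leave unknown placeholders intact.
    return tpl.template.replace(/\{(\w+)\}/g, (_, k) => vars[k] ?? `{${k}}`);
  }
}
```

Pinning a prompt version per caller is what makes evaluation and rollback tractable: a regression can be traced to a specific template change rather than a moving target.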
Multi-Agent Orchestration: The Agent Network ([16:02])
- A single “Brex Assistant” agent interacts with sub-agents (policy, travel, reimbursements, etc.) behind the scenes, modeling the behaviors of an executive assistant.
- These agents can “DM” each other, handling complex, multi-turn conversations and fluidly crossing domains.
- Quote:
“We have agents that are able to basically DM with other agents and have multi-turn conversations amongst themselves to coordinate, to complete a task or to complete an objective.” – James, [16:02]
- Architecture allows teams closest to product lines to own their associated agent, promoting encapsulation and modularity.
Design Insights
- Attempts to overload single agents with too many domains/tools or use naive prompt context-switching did not perform as well as the specialized, networked agent approach.
- The orchestrator—your “assistant”—routes, clarifies, and contextualizes user queries, escalating to appropriate sub-agents as needed ([20:31]).
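A toy version of the networked-agent pattern (orchestrator routing plus agent-to-agent “DMs”) might look like the sketch below. Everything is illustrative: in production each handler would be an LLM call, and the keyword checks merely stand in for model-driven routing.

```typescript
// Toy "agent network": a top-level assistant routes a request to a
// specialist sub-agent, and sub-agents can message ("DM") each other.
type Agent = {
  name: string;
  canHandle: (msg: string) => boolean;
  handle: (msg: string, network: AgentNetwork) => string;
};

class AgentNetwork {
  private agents: Agent[] = [];

  register(agent: Agent) {
    this.agents.push(agent);
  }

  // Direct-message another agent by name (an agent-to-agent turn).
  dm(to: string, msg: string): string {
    const agent = this.agents.find((a) => a.name === to);
    if (!agent) throw new Error(`no agent named ${to}`);
    return agent.handle(msg, this);
  }

  // Orchestrator role: route to the first capable agent, or clarify.
  route(msg: string): string {
    const agent = this.agents.find((a) => a.canHandle(msg));
    return agent ? agent.handle(msg, this) : "Could you clarify your request?";
  }
}

const network = new AgentNetwork();
network.register({
  name: "policy",
  canHandle: (m) => m.includes("policy"),
  handle: () => "Flights over $500 need manager approval.",
});
network.register({
  name: "travel",
  canHandle: (m) => m.includes("flight"),
  // The travel agent DMs the policy agent before answering the user.
  handle: (_m, net) => `Booking flight. Note: ${net.dm("policy", "flight policy?")}`,
});
```

The encapsulation benefit is visible even at this scale: the travel agent consults the policy agent without the orchestrator, or the user, knowing how policy decisions are made.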
Internal AI Platform & Tooling
- Not dogmatic about the stack: engineers can opt for different LLM frameworks (Mastra, internal, or otherwise). Infrastructure supports rapid iteration; the half-life of code has “significantly declined.”
- Homegrown prompt/tool/eval managers facilitate low-code/visual operation, used extensively by non-engineers (especially in ops).
Internal AI Deployment, Experimentation, and Culture
Tooling and Procurement ("Conductor 1") ([28:39])
- Employees can self-provision licenses for ChatGPT, Claude, Gemini, etc., and experiment with coding tools like Cursor and Windsurf through a central internal platform.
- Analysis of usage patterns guides contract negotiations and avoids vendor lock-in.
Adoption and Impact Measurement
- No focus on vanity AI adoption metrics (“80% of our code is AI-generated”); instead, focus is on business value, code quality, and impact.
- Quote:
“The adoption is there and now we have to figure out how to mature in our usage of these tools so that quality or long term maintainability doesn't suffer.” – James, [31:52]
Challenges of Code Quality and Team Synchronization
- Concerns about excessive “slop” from rapid AI code generation, the need for better code reviews, and the risk that engineers lose familiarity with fast-changing codebases.
- Use of traditional linters and advanced tools like Greptile and Codex for code review and agentic code quality control.
Operational AI: Automation in Financial Services
- Biggest wins are in automating high-cost, compliance-heavy ops: onboarding, KYC, fraud, card dispute resolution.
- Discovery: Simple agentic (“tool use”) and prompt-based LLM applications outperformed attempts at reinforcement learning or heavier ML for decisioning ([40:26]).
- Prompt and SOP (standard operating procedure) alignment is key; domain experts (Ops, CX) directly refine prompts and manage evals through visual tools like Retool.
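The “simple tool use” pattern described above can be sketched as a bounded agent loop. The `mockModel`, the tool name, the SOP wording, and the dispute data are all invented stand-ins for an LLM following an SOP-aligned prompt.

```typescript
// Hypothetical sketch of prompt-plus-tool-use decisioning for ops:
// a bounded loop where a (mocked) model either requests a tool call
// or returns a decision. All names and data are illustrative.
type ToolCall = { tool: string; args: Record<string, string> };
type ModelStep = { toolCall?: ToolCall; decision?: string };

const tools: Record<string, (args: Record<string, string>) => string> = {
  // Fake transaction lookup standing in for a real internal API.
  lookupTransaction: ({ id }) =>
    id === "tx_123" ? "amount=$42, merchant=CoffeeCo" : "not found",
};

// Stand-in for an LLM following an SOP prompt: gather facts, then decide.
function mockModel(history: string[]): ModelStep {
  if (!history.some((h) => h.startsWith("tool:"))) {
    return { toolCall: { tool: "lookupTransaction", args: { id: "tx_123" } } };
  }
  return { decision: "approve refund (amount under $100 per SOP)" };
}

function runAgent(request: string): string {
  const history = [`user: ${request}`];
  for (let i = 0; i < 5; i++) { // bounded, like a real agent runtime
    const step = mockModel(history);
    if (step.decision) return step.decision;
    history.push(`tool: ${tools[step.toolCall!.tool](step.toolCall!.args)}`);
  }
  return "escalate to a human reviewer";
}
```

Because the decision logic lives in the prompt rather than a trained model, domain experts can revise it the same way they revise an SOP document.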
Knowledge Management
- The “knowledge base mismatch”: an LLM’s public, often outdated picture of Brex versus the actual state of its product and operations.
- Investing in internal/external/product documentation curation to ground LLM outputs, reduce hallucinations, and synchronize all customer-facing and operational agents ([49:01]).
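Grounding against a curated corpus can be illustrated with a toy retrieval step. A real system would use embeddings and a vector store; word overlap here only shows the shape of the technique, and the corpus snippets are invented.

```typescript
// Toy grounding: retrieve curated internal docs by word overlap and
// prepend them to the prompt, so the model answers from current reality
// rather than stale world knowledge. Corpus contents are made up.
const corpus = [
  "Brex travel booking supports flights and hotels in-app.",
  "Card disputes are filed from the transaction detail screen.",
];

function retrieve(query: string, k = 1): string[] {
  const qWords = new Set(query.toLowerCase().split(/\W+/));
  return corpus
    .map((doc) => ({
      doc,
      // Score = number of query words appearing in the document.
      score: doc.toLowerCase().split(/\W+/).filter((w) => qWords.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((r) => r.doc);
}

function groundedPrompt(query: string): string {
  return `Context:\n${retrieve(query).join("\n")}\n\nQuestion: ${query}`;
}
```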
Evaluations (Evals)
- Evals vary between operational (objective, SOP-aligned, regression-based) and product (harder: multi-turn, more subjective, closer to integration tests).
- Continuous, human-in-the-loop refinement. Considering ways to track desired-but-unimplemented capabilities as “aspirational” evals ([56:27]).
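A regression-style operational eval harness can be as small as golden cases plus a pass rate, as in this hypothetical sketch (the SOP agent and cases are made up):

```typescript
// Minimal regression-style eval harness: golden cases with an
// SOP-expected decision, scored against any string -> string agent.
type EvalCase = { input: string; expected: string };

function runEvals(agent: (input: string) => string, cases: EvalCase[]) {
  const failures = cases.filter((c) => agent(c.input) !== c.expected);
  return {
    passRate: (cases.length - failures.length) / cases.length,
    failures, // kept for human-in-the-loop review of misses
  };
}

// Made-up SOP stand-in: approve small disputes, escalate the rest.
const sopAgent = (input: string) =>
  input.includes("under $100") ? "approve" : "escalate";

const goldenCases: EvalCase[] = [
  { input: "dispute: $42 coffee charge, under $100", expected: "approve" },
  { input: "dispute: $5,000 wire transfer", expected: "escalate" },
];
```

“Aspirational” evals fit the same shape: cases for capabilities not yet built simply start in the failure list and graduate as the agent improves.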
AI Upskilling & The “Fluency” Framework
- AI fluency levels: user, advocate, builder, native. All roles—engineers and operations—are encouraged to move along this path ([58:51]).
- Honest, positive culture: jobs are changing, not disappearing. Spot bonuses, biweekly all-hands AI spotlights highlight novel uses and build psychological safety.
- The internal project-based interview now requires agentic coding, and all engineers and managers were re-interviewed in this new style to upskill the org.
- Quote:
“The fluency framework and then the training and support and the positive sort of culture where we celebrate people making progress has been really helpful for avoiding a culture of fear.” – James, [59:19]
Industry Reflections & AI-Era Engineering Risks
- Headcount won’t shrink because of AI, but AI drives far more leverage and forces more efficient, selective product choices ([63:51]).
- Agentic development amplifies both good and bad—sloppiness, overengineering, or misalignment scale just as easily as positive outcomes.
- Industry-wide, patterns like internal AI Centers of Excellence (AICoEs) and internal AI platforms are solidifying. Fluency frameworks are gaining traction.
Notable Quotes & Memorable Moments
- On Founders as Employees:
“We celebrate that... we welcome in people who want to get a different experience... we can give them problems... with instant distribution.” – James, [03:19]
- On Multi-Agent Networks:
“It means you can have your Brex assistant... and behind your assistant... expense management agent, reimbursement agent, travel agent. It’s like software encapsulation, but for agents.” – James, [16:02]
- On Overengineering:
“We made this big investment [in RL], and the performance we got was inferior to just building a web research agent.” – James, [40:26]
- On Internal Knowledge Management:
“The world knowledge... about what GPT-5 thinks Brex does and how our business operates is quite different from what our business offers today... we’ve had to work on building a corpus...” – James, [49:02]
- On Code Review Tools:
“We’re big fans of Greptile... the comments that it leaves are very, very high signal. I never regret going through all 65 comments it leaves on my diffs.” – James, [38:31]
- On AI’s Impact on Engineering:
“I felt like I had all the predictions back then and at this point now I’m just very interested to watch the phenomenon continue to unfold in front of us.” – James, [33:58]
Timestamps for Important Segments
- AI Strategy Pillars & Team Organization: [00:00], [05:13], [23:14]
- Brex Agent Platform / Architecture: [11:40], [11:55], [16:02], [20:31]
- Multi-Agent Systems Discussion: [15:55], [16:02]–[22:43], [69:13]–[72:44]
- Operational AI / Automating Finance Ops: [23:32], [40:00], [45:05]
- LLM Knowledge Base Curation: [49:01]
- AI Tooling, Internal Platform Use: [28:39], [30:08]
- Internal AI Adoption and Fluency: [58:51]
- Evaluations (Evals) and Testing: [52:12], [54:50]
- Reflections on AI Engineering and Risks: [63:33]–[66:39]
- Call to Action for Multi-Agent Collaboration: [67:34]
Closing: Future Directions & Call to Action
- James invites collaboration and discussion on multi-agent networks and agent orchestration frameworks, suggesting the industry is still inventing core patterns and could benefit from more robust, “networked” agent interactions ([67:34]).
- Quote:
“Trying to craft LLMs into deterministic workflows and DAGs is kind of underselling... I just want to see the industry lean in more on these agent to agent interactions.” – James, [67:34]
A technical, transparent view into how one of fintech’s biggest disruptors is refactoring itself as an AI-first company—with frank takes on what works, what fails, and what remains unsolved in AI engineering at scale.
