TechTank Podcast Summary
Episode: "Will AI democratize financial services?"
Date: October 6, 2025
Host: Dr. Nicol Turner Lee
Guest: Aaron Klein, Senior Fellow, Brookings Institution (Center on Regulation and Markets & Miriam K. Carliner Chair in Economic Studies)
Overview
This TechTank episode explores the promises, pitfalls, and policy questions around the growing use of artificial intelligence in financial services. Host Dr. Nicol Turner Lee and expert Aaron Klein dive deep into how AI is shaping credit, fraud detection, underwriting, and insurance—and debate whether it can truly make finance fairer and more accessible, or if it risks amplifying entrenched discrimination. The conversation balances optimism about innovation with hard questions about bias, regulatory oversight, and the need to fundamentally rethink how creditworthiness is measured.
Key Discussion Points and Insights
1. The Role of AI in Financial Services
- Historical Context of AI in Credit
- FICO scores, based on algorithms that many would classify as early AI, have dominated credit allocation for decades.
- "Most people know there's an artificial intelligence that's been dominating the world of credit allocation for a long time. And it's called FICO."
— Aaron Klein (03:34)
- AI’s Broader Applications
- AI is used for fraud detection, back-office processing, and increasingly, in credit underwriting—sometimes leveraging unconventional data like cash flow or even social media activity.
- “Banks are using [AI] for processing, back office processing, fraud detection, etc.”
— Nicol Turner Lee (11:11)
2. Transparency, Fairness, and Discrimination
- FICO and Systemic Problems
- FICO is a flawed system, full of errors and bias due to inaccurate credit reporting and historical societal inequalities.
- “Credit bureaus have no legal requirement for accuracy. In fact, they have every economic incentive to be inaccurate... as long as you're inaccurate in both directions, it all comes out in the wash.”
— Aaron Klein (08:13)
- Bias and New Data Sources
- While new forms of data (like cash flow) can predict credit risk better and more equitably, AI models can also measure problematic proxies—such as device type or racialized names—potentially leading to new forms of discrimination.
- “Historically, AI is attached to the computer devices that people use... you're more likely to get credit if you're on a Mac versus a PC? I mean, that's the part that scares me.”
— Nicol Turner Lee (07:04)
- “Latanya Sweeney tells us, like if your name is a certain racially ethnic sounding name that you'll be denied credit.”
— Nicol Turner Lee (07:42)
3. Regulation and the Sandbox/Greenhouse Debate
- The Tension in Regulation
- The U.S. is currently moving from a cautious approach that sometimes leads to “analysis paralysis” to an innovation-driven approach potentially lacking in oversight.
- “What I fear is the current crowd is, you know, with, especially with the defunding of the Consumer Financial Protection Bureau, they're going to fire every enforcement attorney and not bring any new case. If you take the cops off the beat, yes, crime reports will go down. That doesn't mean crime has.”
— Aaron Klein (24:15)
- Regulatory Sandboxes vs. Greenhouses
- Sandboxes, as used in the UK, allow financial innovation in controlled environments with limited real-world consequences, but U.S. approaches can swing between being too lax and too restrictive.
- Klein recommends “greenhouses” over sandboxes: controlled, transparent trial spaces for new technologies where regulators and consumer advocates can “look in.”
- “I tend to think that the right answer is to try some of these things in more controlled environments... I’ve always kind of thought that there’s an interesting way to describe them as greenhouses... maximum transparency and sunlight so everyone can look in...”
— Aaron Klein (19:34, 20:34)
- “Bingo. You said it. You said it. Because I think it's brilliant. Go ahead.”
— Nicol Turner Lee (23:07)
4. Core Tensions: Risk vs. Non-Discrimination
- Society’s Mixed Approach to Protected Classes
- Laws strictly prohibit discrimination by gender in credit but allow it in car insurance.
- “I've picked the same metric, gender. I picked two different financial products, a loan and car insurance. And I've told you that in one aspect society has said discrimination on this characteristic is illegal. And on the other, I've said it's totally not just legal, it's the norm.”
— Aaron Klein (15:06)
- AI will find obscure correlations with risk, challenging policymakers and society to constantly redefine what is “fair.”
- “This is one where we don't understand the technology is going to start to find these connections, and it's going to find them faster than we can sit around in a room and say, okay, not okay.”
— Aaron Klein (16:10)
5. Practical Concerns: Errors, Bias, and the Limitation of Data
- Anecdotes about Identity Mistakes and Systemic Flaws
- Errors in credit reports can haunt individuals for years without recourse, due to lack of legal requirements for accuracy.
- “There’s an Aaron Klein in New Jersey who didn’t pay his cell phone bill, who held down my FICO score for 10 years. And I was absolutely powerless to get that off my credit report because I’d lived in New Jersey too.”
— Aaron Klein (08:13)
- Cash Flow Underwriting as Superior
- Citing FinReg Labs research, Klein notes that a model examining daily bank account balances predicts loan repayment better than a FICO score does.
- “If I take a computer and I just look at the amount of money in your bank account every day for the last two years, I have a better guess as to whether or not you’re going to pay back that loan than if I look at your FICO score.”
— Aaron Klein (09:33)
6. Social and Political Climate’s Impact
- Challenges in Honest, Civil Dialogue
- The erosion of civility and politicization of tech policy inhibit real progress on tough, controversial issues like AI and race.
- “It is very difficult to have this conversation in this type of environment.”
— Aaron Klein (27:39)
- Examples of Politicized AI Misuse
- The President’s use of deepfakes for racially charged political messaging is referenced as an example of how tech can distort public debate.
- “You talk about the president using deep AI fakes on the official, his official channel that had clear racist overtones and put words quite literally in the mouth of the minority leader."
— Aaron Klein (27:18)
7. Human Stories: Divorce, Health, and the Limits of Risk Scoring
- Major Reasons for Loan Default Are Personal Crises
- Elizabeth Warren’s research shows the top predictors of default are medical problems and divorce—neither of which should be held against people in risk models.
- “The top two reasons people defaulted on loans were medical problems and divorce. And those were the two leading predictors of default. Now, medical problems...not necessarily your fault. Right?”
— Aaron Klein (29:03)
- Personal Reflections
- Turner Lee shares her experience with divorce and acknowledges the deep economic vulnerability such life events create, especially for minorities.
- “As a person who's gone through a divorce, you're completely correct that there are many economic vulnerabilities that come with that.”
— Nicol Turner Lee (31:48)
Notable Quotes & Memorable Moments
- On AI’s Unintended Discrimination:
- “If your name is a certain racially ethnic sounding name that you'll be denied credit.”
— Nicol Turner Lee (07:42)
- On Future Policy Directions:
- “I kind of like this greenhouse idea...folks can go in, they can set up a controlled environment, they can let the regulators look in, they can let the consumer groups look in...”
— Aaron Klein (23:04)
- On the Persistent Flaws in Credit:
- “FICO is, is a horrible system that we're all stuck on that is, has rampant discrimination and harms millions of people and doesn't protect us financially.”
— Aaron Klein (24:52)
- Real-World Example of Bias:
- “I've had bankers tell me that they've developed little AIs and started flagging people as credit risks, and they realized they were just picking up divorce.”
— Aaron Klein (30:08)
- On the Difficulty of Progress in the Current Climate:
- “I’m very pessimistic that we’re going to be able to have an honest conversation at this moment with everything else that's going on in society.”
— Aaron Klein (31:18)
Timestamps for Important Segments
- [03:15] – The FICO score as legacy “AI” in credit
- [07:00] – Concerns about bias: device data, names, and racial implications
- [09:33] – Cash flow underwriting outperforms FICO
- [11:11] – AI’s use in fraud detection and insurance risk pricing
- [14:30] – Gender: a protected class in lending but not insurance
- [18:40] – U.S. regulatory “sandboxes” explained
- [19:34]/[23:04] – Klein’s “greenhouse” model for transparent, controlled experimentation
- [24:52] – Why the status quo (FICO/system) is not worth defending
- [27:18] – Politicization and toxicity of technology debates in current climate
- [29:03] – Top two causes of default: medical issues and divorce (Elizabeth Warren’s research)
- [31:48] – Economic vulnerability, the wealth gap, and AI’s role going forward
Summary
This candid and wide-ranging conversation illustrates the complexity of integrating AI into a highly regulated, deeply flawed financial system. Both Turner Lee and Klein urge listeners to understand that simply automating old processes won’t eliminate bias—it could codify it further. Their solution is not to stall innovation, but to move forward with experiments, oversight, and transparency via “greenhouses,” ensuring new technologies are trialed where all stakeholders—regulators, consumer advocates, and the public—can observe, learn, and adjust as needed.
The episode is a clarion call for nuanced, civil dialogue in tech policy, and a reminder that true democratization of finance will require both new tools and a willingness to rethink old rules.
