
In this candid and fast-moving episode, Charles sits down with Richard White—founder and CEO of Fathom AI, the top-rated AI note-taking platform on G2 and HubSpot—to unpack the truth behind the AI gold rush. Richard shares why only 5% of internal...
A
Welcome to the Proven podcast, where it doesn't matter what you think, only what you can prove. Richard proved it at a time when everyone's rushing around trying to be successful in AI. He did it five years ago. He's the CEO and founder of Fathom. He's also a really great guy, until he starts telling you the unforgiving truth of what's actually going to happen with AI in the next 24 months. It's terrifying. Anyway, I hope you enjoy it. The show starts now. Hey, everybody. Welcome back. I am excited to have you on the show. Richard, thank you so much for joining us.
B
Hey, thanks for having me.
A
So for the four or five people who don't know who you are, can you explain what you've done, what your success has been?
B
I'm the founder and CEO over here at Fathom AI. We are the number one AI note-taker on G2 and HubSpot. No one likes taking notes on their meetings, so we have basically an AI that will join your meeting, record it, transcribe it, summarize it, write the notes, write the action items, fill in your CRM, Slack it to you, email it to you, you name it, so that you can just focus on your conversations and not on doing a bunch of data entry work.
A
So I think most people are familiar with your product. I think the stuff we're going to talk about now is stuff that people aren't familiar with about the reality of AI. A lot of people think AI means artificial intelligence. It also means always incorrect. There's also a side of this that you believe about what it means for you as well. And some of the harsh realities of what AI does. Can you kind of share what some of those harsh realities are?
B
Yeah, I mean, I've been doing software for 20 years, and AI has completely upended how we think about building software. It's made it much more of an R&D process now, whereas before it was more of a manufacturing process. It's also made the failure rates much higher. It sometimes takes a long time to ship an AI feature, because it'll fail three times before you get something to work. And that exists both when we're building features for our product and when we're trying to buy AI products to move our business forward. We actually have a goal at Fathom of getting to a hundred million in revenue while staying below 150 employees, so we have this big emphasis on efficiency and automation. And it's been interesting, because I just gave a talk where I expected to talk about how we've transformed everything with AI, and we actually have like a 60% failure rate on AI initiatives. So I think there are a lot of really interesting gotchas when you're trying to build or deploy AI solutions.
A
So what you're trying to tell me is that AI isn't the Holy Grail? All of a sudden I'm not going to start floating and curing cancer because I was bored on the toilet one day? That's not how things actually work? Damn you, man. You've ruined it all for us forever.
B
I'm so sorry.
A
So as we go into these, you're talking about failure. What do you mean by failures? 60%? I mean, I wouldn't get on a plane that had a 60% failure rate. I would get married, because that's a 62% failure ratio, but I wouldn't get on a plane with a 62% failure rate. What do you mean there's a 60% failure ratio in AI?
B
I mean, there's this MIT study which came out that said the average company right now has like a 95% failure rate on AI initiatives. What I mean for us is basically: did it produce the outcome we wanted? And I think that's actually the hardest part. In the AI lane, it's easy to get it to produce something. It's easy to get the AI to spit out something, right? The hard part is getting it to spit out the right thing. And what is the right thing? So for example, in our business, you could build an AI that gives you an accurate summary of a meeting that's six pages long. Long but accurate may not be enough. That's too verbose. It's a ten-minute meeting; I don't want six pages. So there's this whole new nuance of quality that I think is hard for us to judge. We're not used to judging it, right? We're used to software as binary. It works or it doesn't. I click the button, the thing moves on the screen, right? And now we're in this world where I click the button and it spits out some words, and I'm like, are those the right words or not? It makes a judgment call. Is that the right judgment call or not? So I think one of the things that's really changing everything is that we have to rethink how we evaluate tools, because we have to actually get in there. It's almost like evaluating a hire, because you're basically buying thinking, not features, now. And so it's kind of upended how we think about purchasing products.
A
So I can't even get ChatGPT not to put dashes in the damn responses it gives me. I can't tell you how much cursing I've done at that thing. You're talking something significantly higher. How do we get it to produce content that we actually want, or go from that ten-page dissertation that's so verbose into what we want? How do we do that at the home level, for your everyday consumer, and then also as the CEO of a very successful company? Because every single meeting I'm in, your damn software is there before anyone else joins. Thanks for that. I'm a little angry at you about that one. How do we do that at both the personal level and the professional level?
B
Yeah, I mean, that same study actually said that the success rate for things like ChatGPT is like 40%, which is still not great, but way higher than 5%. And I actually think AI is easier for individuals to use, because individuals are basically taking ownership of that output. It's like, oh, it's writing this email for me, and yeah, I hate that it always puts the em dashes in there too, but I can at least remove them. Where it becomes problematic is when we're using these things at scale and no one's been properly equipped to QA the thing. We have a whole team at Fathom that, all day, plays what I call an AI version of Jenga, where we are experimenting with models and use cases. Is this model good at this use case? Can this model find action items from a transcript? And I call it Jenga because if you push on a block and it gives any resistance, you give up and find another block that moves smoothly, right? Because there's a weird problem you've got now, where you've got so many models with differing performance parameters and cost parameters, and so many different things you want to do. So it's this really big problem where you almost need a full-time team, whether you're building or evaluating, to evaluate multiple vendors in parallel. Okay, we're going to hire three vendors, we're going to put each of them on a 90-day pilot (by the way, we make every vendor give us a 90-day pilot for AI), and we're going to have a whole team that QAs it. And when we don't do that, it almost never works.
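The "AI Jenga" loop described above can be sketched as a tiny eval harness: score each model's output against hand-labeled expectations for a use case, and keep only the model/use-case pairs that clear a quality bar. This is an illustrative sketch, not Fathom's actual process; the grading heuristic, example data, and threshold are all assumptions.

```python
def grade_output(predicted: list[str], expected: list[str]) -> float:
    """Fraction of hand-labeled expected action items the model actually found."""
    if not expected:
        return 1.0
    found = sum(1 for item in expected
                if any(item.lower() in p.lower() for p in predicted))
    return found / len(expected)

def passes_bar(scores: list[float], bar: float = 0.8) -> bool:
    """A model/use-case pair survives only if its average score clears the bar."""
    return sum(scores) / len(scores) >= bar

# Example: one labeled meeting, two candidate models' outputs (made-up data).
expected = ["send proposal to Acme", "book follow-up call"]
model_a = ["Send proposal to Acme by Friday", "Book follow-up call next week"]
model_b = ["Discussed pricing"]

score_a = grade_output(model_a, expected)  # 1.0: both expected items found
score_b = grade_output(model_b, expected)  # 0.0: neither item found
```

In practice the scoring step is usually much richer (human review, length and verbosity checks, or a second model as judge), but the shape is the same: a graded test set per use case, run against every candidate model, with a hard bar for keeping the pair.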
A
So when new GPTs or new models come out: I've personally spent so much time training my old model, trying to teach it, saying, hey, do this, do that, and I have very specific calls for it to do that. Then a new one comes out. Do you guys over at Fathom have the same puckering motion that we have on our side, where we're like, oh God, everything's about to blow up again? Is that something you guys are facing as well?
B
Yeah, on two dimensions. One, we get excited, because usually the new models unlock something for us. For example, GPT-5, for the lackluster reaction it got from the market, did actually solve a significant problem for us: hallucination rates are way down, and that opens up a whole new class of problems that we were trying to solve before but couldn't. But it also causes other problems, in that none of these models are forward compatible. You get something working on GPT-4, and it's not necessarily going to work the same on GPT-5. And even more problematically, and I think this is something everyone in the industry is starting to realize, the EOL cycles on these LLMs are now measured in months. Anthropic puts out Sonnet 3.5; six months later they put out Sonnet 3.7. Sonnet 3.7 is more powerful. But there's a limited amount of GPU compute in the world, right? So they're shifting all of their compute to the new model. So now you end up on what we call the LLM treadmill, where if you don't upgrade your models, all of a sudden you find out you're getting all these errors because there's no compute to service them. And now you're spending as much time upgrading your models as you are building new stuff from scratch. So the maintenance load on these tools and processes is way higher than anything you've ever seen in software land.
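One common mitigation for the LLM treadmill described above is to stop hard-coding model IDs at every call site and route them through a single alias table, so an EOL'd model gets swapped in one place. A minimal sketch; the role names and model IDs below are made up for illustration, not any vendor's real identifiers:

```python
# Map task "roles" to the currently supported model ID. When a vendor EOLs a
# model, only this table changes. (The swap still requires re-running evals,
# since prompts are not forward compatible across model versions.)
MODEL_ALIASES = {
    "summarizer": "vendor/model-v2",     # was "vendor/model-v1" before its EOL
    "action-items": "vendor/model-v2",
    "crm-fill": "vendor/model-mini-v2",
}

def resolve_model(role: str) -> str:
    """Return the model ID currently assigned to a task role."""
    try:
        return MODEL_ALIASES[role]
    except KeyError:
        raise ValueError(f"no model configured for role {role!r}")
```

Call sites then ask for `resolve_model("summarizer")` instead of naming a model directly, which turns a deprecation from a codebase-wide hunt into a one-line config change plus a re-run of the eval suite.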
A
It's one of those things, and I'm going to date myself here, but the original Warcraft 2, because I'm that old. Okay, I can see by the smile you played it too. I would go attack the orcs or the knights or whatever, and I would save my military formation first. Okay, if this doesn't go well and they all die, I can just go back and try again. I wish that existed inside ChatGPT, or any GPT: okay, I want you to try this, quit giving me dashes, or I want you to word it this way. And then for some reason the AI becomes "always incorrect" and goes off on a tangent, and I'm like, excuse me, sir, can you just go back 30 seconds? That would be nice. And then it sounds like what you're saying is: hey, I got this working great over here, can I pick it up and drop it in over there as well? And it seems like both of those things are absent in the market, even at the highest levels, which is where you are.
B
Yeah, that's right. A lot of the advice I give to companies is: if you can, try to solve a problem by building something in-house, but know that that in-house solution has like six to nine months of shelf life, and know you're going to throw it away and probably buy from a vendor at some point. But by building in-house, you have a better sense of, cool, we at least know we got it to do the one small critical thing we needed it to do. A lot of times vendors throw a lot of things at you, ten different features, and two out of the ten work, sort of thing. But yeah, it's this whole new paradigm that's very much an R&D lab. It's very much not an assembly line. It's not as predictable as what we had before this in SaaS.
A
I wish this was something new in tech, but I'm old enough to remember the dot-com boom, when everything was going on with the Internet. We were like, oh, this is going to be amazing, pets.com is going to be amazing, and obviously it all blew up. And not just at the personal level but at the professional level: companies you thought would be Fortune 500 companies forever were gone two, three weeks later. Are you seeing that with established companies? Are they sitting there going, oh, the light at the end of the tunnel is not a light, it's a train, we've got to adjust, because what works today just won't even exist? And how short is that time frame?
B
I mean, I think the exciting thing as an entrepreneur right now is that a lot of the big companies are really struggling to release good AI features, because it breaks their paradigm of how to do software. They're used to this assembly line: how do we build software? We say we want to build this feature, we spec it out, we build it for three months, and then we click the button and it moves 10 pixels to the left. We're done, right? And it requires a whole new way of doing QA that most of these companies aren't good at, which is why, if you look at most of the big new AI features from these companies, they're really mediocre. They just don't have the muscle in the company for asking, what is quality? They don't know how to judge subjective quality. They're still looking at it from their objective lens: did it do the thing? Did it spit out words? Yes, great, pass QA, ship it. So I actually think there's a challenge if you're buying software, because a lot of times the bigger incumbents actually have inferior products to the new startups. New startups have their own problems, right, like instability and whatnot. But if you're an entrepreneur, I actually think it's a fantastic time, because the incumbents are completely out of their depth in how to build software in this new era. So I think it's exciting, actually, as much as it is also terrifying.
A
Yeah. The best example I've heard of this is: imagine you're on a train that's going as fast as possible, and you're on one car of the train, fixing as much as you can, but all of a sudden that car is going to unhook and be gone. So you better jump, or good luck, I wish you nothing but the best. Because that's just the reality we're going to be in. As someone who's at the tip of the spear, who has become very successful with what you're doing, and has created a company that, as much as I do hate your thing showing up to all the meetings, everyone uses: where do you see AI going? Because some people are like, oh my God, it's the greatest thing since fire, and other people are like, oh my God, it is fire, it's going to burn down my house. People seem to be polar opposites: either you're completely, madly in love with AI, or oh my God, it's the devil incarnate. Where do you see it going? Since you're in there, you're with the CEOs, you know what's going on better than the regular person would. How does this look in five years?
B
Yeah, the one thing I will say is that this is, to me, the greatest technological shift of my lifetime. It's really bigger than mobile, bigger than social; I'd even say bigger than the internet itself. There is a real there there. For all the failure rates and stuff like that, the denominator is huge. Everyone's trying stuff, because this is the closest thing I've seen to magic. One of the challenges is, I have board meetings and we talk about, what's our five-year plan? What's our ten-year plan? I don't know. If you get to AGI in five years, does anything really matter? Can you really plan beyond AGI? That's for smarter people than I. I think the real open question in the market right now, and here I lean on the same core group of folks that I leaned on five years ago, before gen AI got good, to make me feel confident betting a business on gen AI getting really good. We started this company in 2020; in 2021 we launched, and we put AI in the name of the product, and all my investors were like, what are you doing? Everyone hates AI. It's easy to forget that only four years earlier, in 2015, 2016, 2017, "AI" was being marketed everywhere, and it was terrible. It was not AI. It was basically fraudulent stuff. But now we're at this point where everyone's like, oh my God, AGI is going to happen in two years, and some people still believe we're going to keep accelerating. The group of people I'm surrounded with thinks it's about 50-50 between "we're going to reach a plateau of what you can do with the current tech" and "we're going to find, Moore's Law style, the next step up." It's clear that we're getting diminishing returns from the current generation of transformer-based AI like GPT-5.
I think everyone sees that all the latest models are now more optimized for efficiency. They're not wildly smarter than the previous model, but they're cheaper to run, which is important for companies worried about their margins and all that sort of stuff. I've taken the approach that we have to assume things are going to slow down, because if we assume they're going to continue accelerating, it's almost impossible to plan anyway. And I think GPT-5 was a good data point: it seems like we're plodding toward a plateau, and we're waiting for whatever the next thing is after transformer models alone. But it is the most volatile market I could ever imagine. This company has been on, objectively, a rocket ship by the last ten years' standards, and we're now just doing pretty good by modern standards, where you see companies go from zero to a hundred million, a billion in revenue, in two years, and then go back down to zero two years later. So it's an insanely volatile market, full of tons of opportunity. But how long-lived those opportunities are, I think, is to be seen.
A
Yeah. To your point of what this means for the human race, I will give a little bit of pushback. I don't think it's just better than the Internet. I don't think it's just better than the industrial revolution. The only thing comparable to this, as far as the human race is concerned, is fire. This is fire, as far as what it can do. Now, fire was good and bad. It could burn down your entire village, yes, but it also cooks good food. As far as I'm concerned, from what I've seen, AI is as good as fire. Now, what that means going forward, good luck, I wish you nothing but the best, because it's going to be pretty interesting. You mentioned there are companies that go from zero to a billion-dollar valuation and then, two weeks later, gone. Do you think we're going to see, in our lifetime, the first hundred-million-dollar company run by just a single employee? Do you think that's going to happen?
B
Yeah, I mean, Sam Altman talks about the first billion-dollar company with a single person, right? I think that's highly possible. And then you can extrapolate all the concerns you have about societal upheaval and wealth inequality from that pretty easily. But yeah, I think that's perfectly reasonable to expect.
A
Yeah. And this is something people don't understand: this is no longer a luxury. We don't get to sit back and say, hey, I wonder if this is going to happen, I wonder if this is going to affect me. This is going to create wealth distribution issues on the equivalent of, basically, India, when you look at how wealth is distributed, especially here in the United States. So for those of you playing at home who might not understand what Richard's talking about and what he's doing: you do not have the luxury of sitting on the sidelines. Either you're going to be panhandling, or you're going to embrace AI, because this is what it is. This is electricity. So if someone's walking into that and they're like, oh my God, this is terrifying. You're telling me I need to embrace it, but you're also telling me the company's going to disappear in five months. When you're an entrepreneur, you're like, oh God, I have to go into this, I know I have to go into this, but I could get punched in the face, or I most likely will. How do you advise entrepreneurs? How do you advise business owners? Hey, these are some proven tactics that work, just do these for now, so that if you do get knocked on your butt, you can get back up somewhat gently and go from there. What are the things you advise?
B
I mean, honestly, I think there's never been a better time to start something that's really narrowly focused. You hear a lot about the big platforms going from zero to 100 million and right back down, like a Jasper. But the real beauty of this stuff is that you can really tailor it to specific use cases, specific problems, and you can build faster and cheaper and better than you ever have before. You don't have to have a CS degree like I have, and a team of six engineers, anymore to build something useful. You can just be a pretty good hobbyist prompt engineer, plus some Magic Patterns and some prototyping tools, and you can build something of value. I remember 10, 15 years ago, everyone was doing the lean startup stuff, where they were selling stuff before they'd really even built it, and that got taken to an extreme. But now you literally can narrow down and find a very specific niche, and you can build a really good, and I know this is kind of a pejorative in a lot of markets, lifestyle business out of it. Great, I've got the best new software that solves this one burning problem for car washes, right? And I actually think that's where a lot of the gold is. I think a lot of the gold is at the application layer. A lot of the investment and noise is at the foundational layer; it's all about who's building the big infrastructure stuff. But that's a billionaire's game. You need a lot of money up front to do that. I think there's a lot of money to be made at the application layer, sitting on top of these tools, if you can get good at bringing them together.
And that's where I think that single-person company doing 100 million in revenue is going to come from. I don't think it's going to be a foundational model. I don't think it's going to be something like Fathom. I think it's going to be something that sits above something like Fathom, or above these foundational models, that just finds a really good niche that happens to catch like wildfire.
A
So I think that's for the entrepreneurs. For the employees, there needs to be this conversation about what's happening, because you're seeing entire divisions in their orgs getting eradicated. People with master's degrees from Ivy League schools are trying to get jobs at McDonald's right now, and they're terrified. I think they rightfully should be. Welcome to this new world. When I was younger, being an entrepreneur was not sexy. They did not like that idea. Being into comic books, not sexy. Being a dork, not sexy. And then all of a sudden, we're like, our time has come. Same thing with entrepreneurs. But the employees that I know are terrified, and they go back to their old model, which is: I'm going to get another degree. I'm like, that's not going to help you. That's over. Those times are gone. So what do you say to those mid-level managers, senior directors, VPs, who say, I busted my butt to fit into this model, this process, this American dream? And as George Carlin said really well, it's called the American dream because you have to be asleep to believe it. If you no longer believe this model, and the thing you were built for does not exist anymore, how do you adapt?
B
Yeah, that is the question. That will be the question of the next five, ten years. I was a big proponent of UBI; I was telling everyone who would listen about it 10 years ago. And I was worried about truck drivers back then. Truck driving was the number one profession in like 30 or 40 states, and it was going to go away soon. And it's kind of funny, it's really hard to predict these things. Everyone was assured that that would be the first industry to get hit. And here we are in 2025, and actually, no, it's artists, it's copywriters, it's pretty soon going to be lawyers and middle management. It's all knowledge work. Therapists, yeah, exactly. So what would I say? Honestly, your fear is well founded, first of all. Unfortunately, I'd love to sit here and tell you that you've got nothing to worry about, but I think you do. I think what you're seeing, when you look at what's happening, is that college enrollment is down and trade school enrollment is up. The people solving this from first principles, the folks coming out of high school, are looking at that and saying, gosh, there's never been a better time to be in the trades. Now, am I going to tell some VP, hey, you should go back to community college and become a plumber? That's a tough sell too. I think there's a middle ground: if you really become a student of this stuff, I still think there's a lot of opportunity over the next couple of years, again at the application layer, where you could be the person that helps companies get from the 5% success rate we're seeing to a 25% success rate. There'll be a lot of opportunities there.
I think it really depends a lot on where you are in your career. I've been building software for 20 years, and I've always thought that I could always fall back on knowing how to organize people to build great software. I'm not sure that'll even be a skill set in five years, right?
A
That's correct.
B
You know, I'm very much at the point where, if I don't have an exit or retirement plan over the next five, ten years, I need to be thinking about what value I can provide beyond that. But I do think, very tangibly, trades will be coming back in a big way. And there's a lot of opportunity for people to become experts in this stuff: if you can be an expert in replacing your own job with AI, that gives you a job for the next couple of years.
A
So we talked about entrepreneurs, we've talked about employees, we talked about where we think this is going and how this is the new fire. What are some of the conversations that none of us are having? Let me rephrase: that none of us, other than you, are having in these boardrooms with the people at the tip of the spear? What are the things you guys haven't made public yet, if you can share? We know what keeps the entrepreneurs up at night. We know what keeps the employees up at night. What keeps you, as founders, up at night as well?
B
I mean, I think the boardroom conversations are more about the pace of AI change. It used to be that you'd build a software company and it would be at least 10 years before someone really disrupted you. Now it's like five years, and pretty soon it'll be two years. There's so much technological change that just undoes things. Look at valuations for SaaS businesses today versus five years ago.
A
Dramatic, right?
B
So in the boardroom, I think there's a lot of conversation about that, and again about AGI: what would that mean? Could it just render a lot of businesses irrelevant? I think the conversation we should be having is the one we're tiptoeing around, which is: how do we as a society handle this? There's a really good short book called Manna (M-A-N-N-A) by this guy Marshall Brain. Do you remember howstuffworks.com? Awesome website. The guy's actually from my hometown of Raleigh, North Carolina. He wrote this short book, and it's kind of a tale of two cities. One, actually set in the US, is a dystopian AI future where the robots are in the ears of the humans telling them exactly what to do: walk 10 steps this way, flip the burger, that sort of thing. In the other city, the gains from AI are more shared across society. It's a little hyperbolic, right? But it's a really interesting thought experiment, because this is coming, and I don't know that we'll get as dystopian as one example or as utopian as the other. Everyone's busy fighting, trying to put the genie back in the bottle. The genie's not going back in the bottle. We need to talk about where we want to put guardrails and push the genie in one direction or another. And the other thing people are talking about, candidly, is AI regulation. A lot of folks in techland voted for Trump, and one of the reasons is that he wouldn't regulate AI. A lot of folks see an arms race between us and China around AI, and there's this belief, right or wrong, that if China gets to AGI first and you believe in Western-style democracy, bad things happen, right?
There are so many different levels to this upheaval, but those are the three I would think about.
A
So where do you think things are going? Because people do have this dystopian fear that all of a sudden it's going to be Terminator, right? One day it cuts over, and the robots take us over and turn us into cottage cheese. What's more realistic?
B
I think all the paths are still open.
A
Not the answer I wanted to hear, but okay. Yeah, I just peed on myself.
B
A little bit there. I think we would be foolish not to take this seriously. There are a lot of folks in AI land who are concerned about AI safety. A lot of the open revolt they had at OpenAI a year ago was about this fear that the thing was founded on the premise of AI safety and seems to have gotten off that mission. A lot of people way smarter than me seem to be very concerned about that. So I don't want to be alarmist, but I think we should all be alive to the danger. This feels like a critical moment in human civilization, and everyone needs to educate themselves a little bit and do what they can to make sure we're nudging ourselves in the right direction.
A
So for all of you who have just caught the podcast, we've decided that we're all going to die, we're all going to be out of jobs, it's completely over, and it's a horrible time to be alive. Okay, so let's try to give people a little bit more hope. There's a lot of conversation about what AI can do and what AI has done, not just the basic stuff with business, but what's been done medically. Like, hey, we've made XYZ discoveries and we've pushed the envelope. A problem that humans couldn't solve for 100 years, it solved in 27 seconds. So there are some amazing things with AI. Can you share some of your favorite ones, the ones where you've personally gone, oh my God, I can't believe it just did that, or it figured that out?
B
I mean, I think you just touched on the big one: a lot of the stuff you're seeing happening in healthcare, right? Where things that used to be really expensive, like analyzing scans, early detection... Our healthcare system, my father was in emergency medicine for 30 years, and he'll be the first one to tell you, we are really reactionary in healthcare. For a number of reasons, but first and foremost, it's very expensive to be proactive in healthcare, because someone's got to analyze these scans, you've got to look at these blood markers, you've got to do all these things, both on the preventive-medicine side and in research. And this is going to drive down the cost of all that stuff dramatically, to the point where you don't have to be rich to get life-extending care well ahead of some acute medical crisis. I think that's going to be the thing we look back at and go, wow, we're going to cure, or greatly reduce the harm of, a lot of diseases in a very short period of time. But it's going to be kind of the wild west in the meantime, because our medical regulations haven't really caught up with how to handle it. So I think that's one area you can point to and say a lot of good is going to be done there. And for all the disruption we're going to see from self-driving cars, that's another place we'll point to. Right? Like, you know, the number one cause of death of people, the number one use of urban land. You think about housing affordability, think about what happens when you don't have to dedicate 40% of your city to parking. Think about what happens when people aren't getting in car accidents left, right, and center.
So I think, on the other side of this crucible, there are a lot of things to look forward to, in the same way you look at something like the Industrial Revolution. There were a lot of painful things in that transition, a lot of terrible things happened, but humanity was better for that transition in the end. Right? But it will be.
A
I don't think we have to go as far back as the Industrial Revolution, either. Even with the IT boom, when technology kicked in, people were like, oh my God, this is going to wipe out jobs. And yeah, it did. When the dot-com boom hit and the Internet took off, it wiped out swaths of jobs. But the job that you have right now did not exist before that. The jobs that I did, the careers I had. So yes, it will wipe out a ton of shit. It will also create a ton. And to your medical point: some things don't change, because even if you die of cancer, your DNA is your DNA. But the other stuff we can analyze and say, hey, you know what, we say that everyone should take these medicines, however, based off your stuff, your individualized goodies, you should be taking this. I was sitting with the CEO of one of the companies that does that. He's like, let's run your blood work. And within, like, a day, he's like, okay, this is what you need to stop eating right now. I was like, I'm sorry, what? That's supposed to be healthy. He's like, yeah, for everyone. But you? Don't eat that. He regrettably did not say that I could have ice cream every day, so I'm still mad at him. I was like, what do you mean I can't have ice cream every day? What the hell? So there is that. So we get it.
We understand. I think, at every single level we're at, be it employee, entrepreneur, founder, there is this optimism and there's also a little bit of fear. So as we get through that, having the tools and the techniques right now matters. What are the tools that you're using, other than, obviously, everyone needs to use your software. I get it. Please stop using my meetings, you bastards. So everyone needs to use their software. What are some of the tools that you use every day, and how do you use them differently than everyone else?
B
I mean, you know, I think everyone thinks that in Silicon Valley we have a whole different set of tools than everyone else. We actually don't.
A
You know, I'm done. Yeah.
B
We're all using, you know, ChatGPT. We're using things like Magic Patterns, another one I love. It's basically an AI for generating prototypes, like if you want to mock up an interface for something. So we build a lot of prototypes with tools like that. At the high end, I think the secret to actually building good products with AI is that you end up using multiple models. Any feature in Fathom, whether it's generating meeting summaries or finding action items or answering questions based on transcripts, there's a pipeline, and we're using three or four different models from different providers in that pipeline. Some from Gemini, some from Anthropic, some self-hosted ones. So at the high end, when you're actually building really sophisticated stuff and trying to take the highest-quality AI to market, it's a whole different game. But for an individual, frankly, I don't think there's a lot of secret tooling. There's so much word-of-mouth adoption of these tools, right? It's wild. Tools go from zero to a hundred million so fast because they're so good that there aren't a lot of secret tools people are using. It's a lot of ChatGPT, Claude, et cetera, et cetera.
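Richard's point about pipelines, that a single feature chains several models from different providers, can be sketched roughly like this. This is a hypothetical illustration, not Fathom's actual architecture; the `Stage` type, the stand-in model functions, and the fallback-on-error routing are all assumptions for the sake of the sketch:

```python
# Hypothetical sketch of a multi-model pipeline for meeting notes.
# Each stage can use a different provider; a stronger model is tried
# first and a cheaper fallback covers failures or provider outages.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    primary: Callable[[str], str]   # main model for this stage
    fallback: Callable[[str], str]  # used if the primary raises

    def run(self, text: str) -> str:
        try:
            return self.primary(text)
        except Exception:
            return self.fallback(text)

# Stand-in "models" -- in reality these would be API calls to
# different providers (hosted or self-hosted).
def cheap_summarizer(t): return "summary: " + t[:40]
def strong_summarizer(t): return "summary: " + t[:60]
def action_item_model(t): return "actions: follow up on " + t.split()[0]

pipeline = [
    Stage("summarize", strong_summarizer, cheap_summarizer),
    Stage("action_items", action_item_model, action_item_model),
]

def process(transcript: str) -> dict:
    # Run every stage over the same transcript and collect the outputs.
    return {stage.name: stage.run(transcript) for stage in pipeline}

notes = process("pricing discussion with the design team about Q3")
print(notes["summarize"])
```

The design point is simply that each stage picks the model best suited to its task, rather than one model doing everything.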
A
Yeah. I think the other thing that's really important is, when you pick a new tool, you have to understand that what you used to do, how you used to operate, will also have to change. The simplest example I can give: we gave very specific PowerPoint presentations that looked a very specific way, at a very specific class level, and that took a ton of time. We use this tool called Gamma. And again, I don't do sponsorships or affiliates, I refuse to do it, so this isn't that. My team got a hold of it, and I was like, okay, this looks completely different. They're like, yeah, but we created 300 slides in a week versus a month and a half. And I was like, okay, I guess our slides look different now. So having that adaptive... adaptivity... adaptability was vitally important. What are some of the ones you've used where you're like, hey, okay, yes, I used to do it like this, and that doesn't work anymore?
B
It's hilarious, that was actually the example I was going to give, right? Which is like, oh, I want my slides to look like this. Gamma's great at getting slides out. Are they gonna be exactly what I had before? No. But that's the thing. It's like using ChatGPT versus Google search: is it exactly what I got out of search results? No, it's actually better, but you have to be flexible and rethink, what do I actually need out of this tool? Right. Yeah, Gamma would be the same exact one for me. I love creating... honestly, I do kind of waste more time now, generating fun AI images in it.
A
I do too. To be fair, I'm glad you brought it up, because I didn't want to be the first shameful one to say I spend way too long in there just messing with the images, because it's fun.
B
I'm like, yeah, I'm a "put an image and two words on the slide" kind of guy. And our branding, actually, we just rebranded and we put astronauts in it. Part of the reason we put astronauts in it is because I had so much fun with them. In every deck we've had for the last nine months, I've got astronauts fencing on the moon, astronauts fighting monsters, an astronaut doing math with their helmet on. I love it. Right. Yeah.
A
Yeah.
B
Fun is not to be discounted in the workplace. It's worth doing.
A
No, it's still got to be fun out there, got to get that rock and roll. I'm glad that you stepped up and said that you too are a dork like me, so I appreciate that you stepped into that world with me. So when people are sitting there looking at this, one of the things they're concerned with is: if I go to Google and type in, what's the best food in my city, I'm going to get thousands of answers. With ChatGPT, I'm going to get one. Right? People are a little afraid of that. Like, okay, I don't have the option to think on my own anymore. I'm now being told, and the data has been synthesized down to this one thing. Is that something you're concerned with as well? Because if I go to the library and there's one book on history, I know I'm missing a lot.
B
Yeah. I mean, there's a big concern about that. We've already had this kind of bifurcation of what reality or truth is in America, to a certain degree. Right?
A
Well, what do you mean?
B
We're not going to get into that. But yeah, it is interesting. For as much as these things get right, there are certain corner cases where it's really, really bad. Like, I think my girlfriend the other day was looking up someplace that would, you know, sew something for her, right? And it gave her three answers, and all of them were completely made up. I mean, that one's at least easy to spot, because you can easily verify, oh, that's not a real place. But it is a little scary, because we are outsourcing judgment. That's exactly why we like it, right? Because who wants to go through a thousand restaurant recommendations? I just want three. Help me pick three. But yeah, we're outsourcing judgment to this AI. And that's why, again, I'm grateful that there are at least reasonable competitors. It does seem to be that there isn't as much moat in building foundational models as we thought. Now, there's a ton of moat from a consumer-brand perspective, ChatGPT has 98% of the market, but I would encourage people to get a second source. Whether it's Gemini, whether it's Claude, when you're skeptical, ask and get a second source. Grok, you name it, right? I think all the smart people are generally diversifying. I don't rely on one LLM to answer the question, for that very reason.
A
I also think if you are trapped in one ecosystem, it's by your own choice, because no one traps you at this point. And to your point with your girlfriend asking for a place for sewing, I'm like, yeah, okay, schmucko, now go check Yelp and compare the options you're going to get. So having that cross-reference is important. It's one of the things that I've coded into mine, which is one of the things I love about GPT so much. I'm like, okay, if you give me an answer like this, always do this after. And outside of the dashes, it's been a decent result. But I will just celebrate so much when the dashes and emojis are no longer included. Stop it. No one writes like that. That doesn't sound like a human. What is wrong with you? So if anyone listening to this, on a side note, knows how to get rid of the dashes permanently, please send me a message. I will pay you for it. It drives me out of my mind.
B
With that said, OpenAI actually don't even know how to get rid of the em dashes. I think I read something where they're aware of this and they're like, we're not sure how this got in there. It feels like it's the AI's fingerprint, sort of thing. I don't know.
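For anyone who, like Charles, wants the em dashes gone for good: prompting rarely sticks, but a blunt post-processing pass over the model's output does. A minimal sketch; the particular replacement choices (comma for em dash, hyphen for en dash) are a matter of taste, not any vendor's recommendation:

```python
import re

def undash(text: str) -> str:
    """Replace em/en dashes in model output with plainer punctuation."""
    text = text.replace("\u2014", ", ")   # em dash -> comma
    text = text.replace("\u2013", "-")    # en dash -> hyphen
    text = re.sub(r"\s+,", ",", text)     # no stray space before a comma
    text = re.sub(r",\s+", ", ", text)    # exactly one space after a comma
    return text

print(undash("AI is powerful\u2014but imperfect."))  # AI is powerful, but imperfect.
```

Running every response through a filter like this is "permanent" in a way no system prompt is.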
A
It really is. Yeah. And it's funny, because I will sit there and tell it over and over and over, and now I'm just like, un-dash this, because I can't teach it. So when people are like, oh, AI is so intelligent, it can learn... no, it can't even get rid of dashes right now. So just breathe here, sweetie. So for those of you sitting there: okay, we've got tools, we've got to adapt and shift. As we go through that, let's talk about what's next for you. Not in five years, but the immediate next 90 days. Again, you're kind of tip of the spear with what you're doing over at Fathom. What are the next 90 days? What conversations are you having with your staff? Because you have to lead differently now that we're in an AI age. How do you lead differently? How do you show up differently in that environment? How do you build 90-day plans? Because anything beyond that, you're...
B
Yeah, come on.
A
We don't know.
B
I mean, you know, we've been fortunate in some ways; this kind of comes back to a strength we've had from the beginning of this company. I've always been like, we only build 90-day plans. I actually think, in a lot of companies, planning is this art of self-deception and false precision. Right? With technology, even before AI, you really can't know exactly where you're going to be in a year. So I think it's important to have hypotheses about the future, right? We believe the future will look like this and not this. But then we react; we're more reactionary on a local level. You know, I mentioned our goal earlier: we want to get to 100 million in revenue with less than 150 employees, and that's way easier to achieve when you start from 10 employees than when you start from 500. Right. And we're also a fully remote business. So we're pushing the envelope on two dimensions: how do you use AI to streamline communication in a 100-person org that doesn't see each other in person more than once a year? But I'll tell you, right now it's still, I think, really exciting times. For our business, the thing we've been really excited about is not just writing notes for meetings; that's never been our goal. Our goal is: what happens when we get all of your meetings, all of your team's meetings, all of your company's meetings into one data repository? Because it's a really big data set. It's really hard to move. Historically, it's never been captured, certainly not structured. But if you get all of that into one place, we're finding the modern LLMs can actually do really interesting things. We did a prototype the other day where we said, hey, Fathom, tell us the history of transcription engines at Fathom.
And it went back through every all-hands, every engineering meeting for four years, and it wrote a six-page article about everything we've ever done. Think about that for knowledge management, right? Yeah.
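The "query four years of meetings" idea Richard describes boils down to retrieval over a transcript store plus an LLM to synthesize an answer. A toy sketch; the `Meeting` type, the keyword-based `retrieve`, and the `llm_summarize` stub are illustrative stand-ins (a real system would use embedding search and an actual model call, and this is not Fathom's implementation):

```python
from dataclasses import dataclass

@dataclass
class Meeting:
    date: str
    title: str
    transcript: str

# Toy transcript store; a real repository would hold years of meetings.
store = [
    Meeting("2022-03-01", "Eng all-hands", "we swapped the transcription engine to model v2"),
    Meeting("2023-07-12", "Eng weekly", "transcription latency dropped 40% after the rewrite"),
    Meeting("2023-07-19", "Sales sync", "pricing update for the enterprise tier"),
]

def retrieve(query: str, meetings: list[Meeting]) -> list[Meeting]:
    """Naive keyword retrieval; real systems use embeddings + vector search."""
    terms = query.lower().split()
    return [m for m in meetings if any(t in m.transcript.lower() for t in terms)]

def llm_summarize(question: str, context: list[Meeting]) -> str:
    """Stub for the LLM call that would write the actual article."""
    dates = ", ".join(m.date for m in context)
    return f"Answer to {question!r} drawn from meetings on: {dates}"

hits = retrieve("transcription engine history", store)
print(llm_summarize("history of transcription engines", hits))
```

The interesting property is the one Richard names: the answer is grounded in only the relevant meetings, pulled from a corpus no human would reread.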
A
You can also see where your loopholes are and where your vulnerabilities are. Say, hey, you've listened to four years of my conversations; I don't remember what I had for dinner last night, let alone anything else. So being able to sit there and analyze: okay, where are the holes in our things? What have we missed that was mission-critical? That's something that... and again, I love picking on Fathom, because it shows up and annoys me all the time asking for permission, and I'm like, bugger off. But the ability to do that, and then query everything down the road, that data set is invaluable, right?
B
Once you get to a point where... look, everyone hates meetings, but we love having great conversations, right? And I think we're moving towards a world where you can have meetings and just kind of speak things into existence. We talk about it, and when we get done with the meeting, it's done. The SOW is written, the email's drafted, the Gamma PowerPoint is already queued up, sort of thing, right? And we get to a world with this really interesting dissemination of knowledge across the org, in a fun way. One of the things we're experimenting with is... everyone hates sitting in all these meetings where, like, I didn't need to hear most of this. How do we start building everyone a customized podcast that listens to every meeting adjacent to your function and gives you kind of the highlights from across the org today? There are just so many fun things you can do now that you literally couldn't do even six months ago with the LLMs we had then. So I still wake up every day feeling pretty optimistic. Yeah, I look outside my window and feel less optimistic, but I feel like we'll get there. Humans always solve things at the absolute last possible minute, but...
A
But we usually get there. So, yeah, Churchill said it really well: Americans always do the right thing after they've tried everything else.
B
Exactly.
A
So that's kind of where we are on this. And I'm like, oh God, here we go. All right, just survive and hold your breath long enough. Moving away from AI a bit: you've created a very successful brand and a very successful company, and it's all remote. A lot of founders, a lot of owners of companies, have problems with that. How do I keep my team motivated? How do I keep them honest? How do I keep them unified? How do I build a cohesive culture? How have you survived and thrived in that environment?
B
So one of the reasons I have this goal around 100 million with less than 150 employees is that I've had a lot of very successful friends who go IPO, get to really big companies, and all of them, when I tell them we're like 80, 90 people, say, oh, I miss that. That was so much fun. And I always ask them, when did it stop being fun? The answers vary: 100, 150, 200, but it's all in that range. And I hypothesized from talking to them that there's some point at which you switch from a high-trust environment to a low-trust environment. I picked 150 for our goal because that's the Dunbar number, which is this theoretical limit on how many real friends you can have. So I think once you get above that number, it's impossible for everyone in the org to be friends, and you're almost inherently going to be a low-trust environment. And it's interesting, I see all the same stuff where it's like, ah, I let my employees work from home and they're not really working that hard, and da da da. Oh, that's because you have a low-trust environment. And I don't exactly know what creates high trust versus low trust. I think it's a cultural thing, right? I think it's a lot about how we lead, how we communicate, and how we motivate folks. But I do know you should be aware of which environment you have. And if you have a low-trust environment with your employees: one, maybe you should get curious about how that happened. And two, yeah, maybe you do need to get people back in the office, because if you can't trust that they're going to put in the work, right, incentive structures might need to be reevaluated. But I think we've been very fortunate in that we have an amazing team that loves the work they do.
They're each given enough autonomy and trust. I think high-trust environments happen because, when we hire people, I tell our team, tell our execs: you should trust by default. If you didn't want to trust them by default, you shouldn't have hired them. And you should give them room to run by default. It's kind of a gamble, but you shouldn't be prescriptive about it, like, the deck needs to look exactly like this. Is it 80% what you thought it was, but 100% what it needed to be? Then...
A
Yes.
B
Right.
A
And I think that's an important factor: 80% of what you thought it was, 100% of what you needed it to be. And on hiring people, some of the best advice I ever heard was: would you trust this person to feed your children? In other words, if you got in an accident and couldn't provide for your family, would you trust that these people could do it for you? And if you can't say yes to that, then you have failed in the hiring process. So I guess my next question is: you've built this high-trust environment, which takes time and takes very specific personalities. How quick are you on getting rid of someone who does not fit into that environment?
B
Our goal is usually 90 days. You generally know by 45, 60 days, and then, just out of an abundance of caution, I think you can go as long as 90 days. You really can't go any longer than that. But that's our goal, right? I mean, it's generally pretty clear. The nice thing is, once you have a high-trust organism, the organism will reject any organs that don't seem to fit, as long as you've got a good set of listening posts. That's what gets harder as we get bigger: how do people trust that they can tell me, hey, this new executive we brought in is not our DNA, sort of thing. But the organism knows, if you can find a way to observe it.
A
It's interesting that you do 45 days. I'm much faster on that. Yeah, we're very quick. I mean, my grandmother said it really well: when you're dating someone, you will know within three weeks, and if you don't know, you know. She was just bulletproof with that, and I miss her greatly; she's no longer with us. But when it comes to hiring someone, normally within the first 48 hours, and we don't pull the trigger that quickly, but within the first 48 hours you've got enough of an ick. You've got enough of, okay, I don't know if I'd want a second date. This might not have been a good fit. So I love that you have a big heart and high empathy. That must be tough for you and your people.
B
Well, actually, I'd say that number used to be lower. But then, every time we looked at it, we said: any time we find someone's not a fit in the first week, that is a real indictment of our hiring process.
A
A thousand percent.
B
And so now we're generally getting to: okay, we think our hiring process is pretty good, which means no one should be failing inside of three or four weeks. Right? It shouldn't be anything that obvious. But you can't test for everything in the hiring process, right? That's where I think, okay, even with the best hiring process, those issues will show up a month in. That's the, oh, they were on their best behavior in the hiring process, and we got unlucky with our references and stuff like that.
A
Right? Yeah, we normally give people tests. We're like, hey, I need you to do this, need you to do that. We go through that process, hey, do these things, and we have people do actual tests of what they'd need to do. And that helps us out with what we're doing. So as things change as an organization, and as you've had a level of success you never thought you'd have, doing something you never thought you'd do, what's next? What's the next big thing where you're like, hey, I really want to accomplish this?
B
You know, I think one of my superpowers as an entrepreneur is I have these built-in blinders, sort of thing, right? I get so passionate about what I'm working on. I actually think one of my superpowers is just getting passionate about things. I always say I like to hire passionate people, because passionate people get passionate about anything. They could get passionate about plumbing. I think if you told me, hey, Rich, go be a plumber, I would get so excited about fittings and stuff like that. And right now, it's the most fun time to build. It's the most volatile time to build, and it's also the most fun time to build. I do, on a personal level, get really passionate about what I see happening in public discourse and what I'll hesitate to call politics. I met with another entrepreneur yesterday who told me he's running for city council. And I think he expected me to be disappointed, or kind of confused by that.
A
No, I think that's amazing.
B
I was like, that's amazing. Not enough people of high character and good judgment go into politics, because they judge it to be EV-negative. And it is EV-negative. But that's not why you do it, right? You do it after you've gotten so much from society that you feel like you should give back. And I think there's a lot of stuff that I would love to do in that sphere in the future, because I think our country could use some help. I think it could use some high-judgment people that are not out for themselves.
A
A thousand percent. And it's interesting, because it's a similar conversation I had over the weekend. We were talking about, hey, we've all been very blessed, we've all been very successful, maybe it's time to give back and offset and maybe course-correct some of the things that have been going on, not just in this administration, but for many, many administrations. We're going back double digits. It's like, oh my gosh, we have to pivot this, and it's time to have these people take over and do something different. So, other than you running for president in the next 27 minutes: if someone wants to track you down, learn more about you, and connect, because I'm just super grateful that you shared this stuff, what's the best way? How do they get a hold of you? How do they get a hold of Fathom?
B
Yeah, check out Fathom, Fathom AI. It's free to use, so please give it a shot. And then you can find me on the only social media that I use, which is LinkedIn. So find me on the stodgiest of the social medias; LinkedIn messages, that's where I'll be.
A
I really appreciate you coming on. Thank you so very much.
B
This is awesome. Thanks for having me, Charles.
A
Absolutely. All right, guys, that wraps up our episode with Richard. I want to thank him for sharing his insights on where things are going, and the unforgiving truth of what's next with AI: how it has two very specific paths, and how it's within our ability to dictate where that goes. All right, guys, I'll see you in the next one.
Podcast: Proven Podcast
Host: Charles Schwartz
Guest: Richard White (Founder & CEO, Fathom)
Episode: The Surprising Future of AI
Date: October 15, 2025
This episode explores the realities and future trajectory of artificial intelligence with Richard White, CEO of Fathom. White shares candid insights drawn from his experience building one of the leading AI companies in the productivity space, providing a grounded perspective on AI's promise—and pitfalls—for business, society, and individual workers. The conversation cuts through hype to examine both the existential hopes and practical headaches facing AI practitioners and end-users, offering strategic advice for entrepreneurs, employees, and leaders navigating rapid technological upheaval.
Richard's background: Founder/CEO of Fathom, market-leading AI note-taking tool.
Harsh realities: Building with AI is fundamentally less predictable than traditional software.
Judging AI output ("Always Incorrect"): with old software, correctness was often binary; now subjective quality is central, and that's hard to measure.
Vivid analogy: Comparing a 60% AI failure rate to boarding a plane with such odds: "I want to get on a plane that had a 60% failure...I mean, I would get married because that's a 62% failure ratio." (02:34)
AI's impact: "This is the greatest technological shift of my lifetime. Bigger than mobile, bigger than social." (12:12)
The path to AGI: Split among experts—will we plateau, or see another leap? Latest models show diminishing returns.
Market volatility: Some companies skyrocket to $1B, then vanish—opportunities are immense, but ephemeral.
Wealth and organizational transformation: The prospect of a billion-dollar company run by a single person is now plausible.
"Sam Altman talks about the first billion dollar company with a single person. I think that's highly possible." (15:50)
Rapid disruption: Boardrooms are bracing for existential questions—AGI, regulatory risks, global competition.
Societal adaptation: Reference to "Manna" by Marshall Brain—a parable on possible AI-driven futures.
National and geopolitical competition: The AI arms race between the US and China is a factor in regulatory politics.
Societal fear: Paths are open—could go dystopian or utopian; vigilance and education are key.
"I think we would be foolish...This feels like the critical moment in human civilization." (25:54)
Medical breakthroughs: AI makes preventive healthcare affordable, enables earlier detection, and accelerates research.
Automation in public safety: Self-driving cars could reshape cities, reduce fatalities, and free urban space.
The new application of tools: Success now depends on adaptability; expectations and operational models must change.
“Having that adaptability was vitally important.” (32:16)
Workflow examples: Tools like Gamma can generate hundreds of slides in a week, changing the nature of presentations.
"Are they gonna be exactly what I had before? No, no. But...you have to be flexible." (33:05)
90-day planning cycles: Anything beyond 90 days is guesswork due to rapid industry shifts.
Leading remote and high-trust organizations: Optimize not for size, but for trust and autonomy (Dunbar’s Number—~150).
“You should trust by default. If you didn’t want to trust them by default, you shouldn’t have hired them.” (42:14)
AI’s role in knowledge management: AI tools can query years of meeting data, surfacing insights no human could process alone.
"It's like, it's not as predictable as what we had before this in SaaS." (08:45, Richard White)
"Companies go from zero to a hundred million, a billion in revenue in two years...and then go back down to zero two years later." (12:12, Richard White)
"You do not have the luxury to sit on the sidelines. Either you're going to be panhandling or you're going to embrace AI because this is just, this is what it is. This is electricity." (16:08, Charles Schwartz)
"If you can be an expert in replacing your own job with AI, that gives you a job over the next couple years." (22:36, Richard White)
"It used to be you build a software company and usually at least 10 years before someone really disrupted you, and then now it's like five years. Pretty soon it'll be two years." (23:09, Richard White)
"There's an arms race between us and China around AI." (24:11, Richard White)
"When you pick a new tool, you have to understand what you used to do, how you used to operate, will also have to change." (32:16, Charles Schwartz)
"I still wake up every day feeling pretty optimistic. Yeah, I look outside my window, feel less optimistic, but like, I feel like we'll get there." (41:36, Richard White)
“You should trust by default. If you didn’t want to trust them by default, you shouldn’t have hired them. And you should give them room to run by default.” (42:14, Richard White)
Richard White’s frank, nuanced outlook underlines that AI is both existentially disruptive and full of opportunity for those able and willing to adapt at every level, from individuals to boardrooms. The conversation leaves listeners with a sense of urgency and agency: the future isn't guaranteed, but it isn't doomed if we act, adapt, and keep questioning the path forward.