
Roelof Botha
Support for the show comes from Crucible Moments, a podcast from Sequoia Capital. Every exceptional company story is defined by those high-stakes moments that risk the business but can lead to greatness. That's what Crucible Moments is all about. Hosted by Sequoia Capital's managing partner, Roelof Botha, Crucible Moments is returning for a brand new season, and they're kicking things off with episodes on Zipline and Bolt, two companies with surprising paths to success. Crucible Moments is out now and available everywhere you get your podcasts and at CrucibleMoments.com. Listen to Crucible Moments today.
Matan Grinberg
Every story you love, every invention that moves you, every idea you wish was yours, all began as nothing, just a blank page with a blinking cursor asking a simple question: what do you see? Great ideas start on Mac. Find out more at apple.com/mac.
Roelof Botha
Avoiding your unfinished home projects because you're not sure where to start? Thumbtack knows homes, so you don't have to. Don't know the difference between matte paint finish and satin, or what that clunking sound from your dryer is? With Thumbtack, you don't have to be a home pro, you just have to hire one. You can hire top-rated pros, see price estimates, and read reviews, all on the app. Download today.
Ed Elson
Welcome to First Time Founders. I'm Ed Elson. Seven and a half billion dollars. That is how much money has poured into AI coding startups in just the past three months. And it's not that hard to see why. Across the industry, developers are embracing generative AI to speed up their work. It's efficient, it's impressive, but it's still under the careful watch of human engineers. Well, my next guest wondered if AI could do more. What if it could handle routine tasks like debugging or migrations on its own? What if it could be autonomous? To turn that idea into reality, he launched an AI startup which uses agents to handle the mundane work that developers would rather skip. With a $50 million investment from Sequoia, J.P. Morgan, and Nvidia, his company is reshaping the future of software development. This is my conversation with Matan Grinberg, co-founder and CEO of Factory. All right, Matan Grinberg, thank you for joining me.
Matan Grinberg
Thank you for having me. How are you?
Ed Elson
I'm good. We should probably start off by saying we go way back. We do indeed.
Matan Grinberg
We're friends from college.
Ed Elson
I knew you back in college. I knew you when you were studying physics. You were a budding physicist. I mean, just for those listening, Matan was basically the smartest guy I knew in college. And then you go on, and I know you were getting your PhD in physics, and then eventually you tell me, no, I'm actually starting an AI company. And now here you are, running one of these top AI agentic startups, figuring out how to automate coding. Let's just start with, like, how did we get here? How did we go from Princeton physics, going to be a physicist, to now you're an AI person?
Matan Grinberg
Yeah, so obviously that was not the arc that I was expecting either. It probably goes back to eighth grade, which is why I got into physics in the first place. Spite is a very big motivator for me. And in eighth grade, my geometry teacher told me to retake geometry in high school. And I was like, screw that. What? I'm good at math, I don't need to do that. And so in the summer between 8th and 9th grade, my first order on Amazon ever was textbooks for algebra 2, trigonometry, precalculus, calculus 1, 2, 3, and differential equations.
Ed Elson
A true nerd.
Matan Grinberg
Yeah, exactly. And so I spent the whole summer studying those textbooks. And going into freshman year of high school, I took an exam to pass out of every single one of those classes. So I had credit for all of them. And then I went to my dad and I was like, what's the hardest math? And he said, string theory, which is actually physics, it's not math. And I was like, okay, I'm gonna be a string theorist. And then basically for the next, like, 10 years of my life, that was all I really cared about. I didn't really pay attention much to anything about, like, finance, entrepreneurship, anything like that. Went to Princeton because it was great for physics, then did a master's in the UK, came to Berkeley to do the PhD. And at Berkeley, it finally dawned on me: wait a minute, I was just studying, for 10 years, like, 10-dimensional black holes and quantum field theory and all this stuff, originally because of this, like, spite. And obviously I came to love it. But I realized that I didn't really want to spend my entire life doing that. Taking 10 years to realize that is a little bit slow. But I had a bit of an existential crisis of, you know, like, what should I do? Almost joined Jane Street, in a classic ex-physicist move. Decided not to because I feel like that's the thing, like, you know, once you go there, you kind of don't move on from that. So I ended up taking some classes at Berkeley in AI, and really fell in love in particular with what was called program synthesis. Now they call it code generation. And the math from physics made it such that jumping into the AI side was relatively straightforward. Did that for about a year, and then realized that the best way to pursue code generation was not through academic research, but through starting a company. And so then the question was like, okay, well, I know nothing about entrepreneurship. I've been a physicist for 10 years. What should I do?
And this was just after Covid. But I remember on YouTube, in my recommended algorithm, I saw a podcast on Zoom with this guy whose name I remembered from a paper that I wrote at Princeton. This guy used to be a string theorist, but it was a podcast, and it was like, a Sequoia investor talks about, you know, everything from, like, crypto to physics. And I was like, what the hell is this? And I remember watching the interview, and the guy seemed relatively normal, like, had social skills, which is rare for someone who had published in string theory.
Ed Elson
That was the other interesting thing about you is you're kind of a social person who's also this physics genius, which again is quite rare. But. So you found someone in common.
Matan Grinberg
Yeah, so found someone who was like, okay, you know, maybe there was someone else who has this similar background. And I remembered the name correctly. And so I looked him up and saw that he was a string theorist who ended up, you know, getting his degree, then joining Google Ventures, being one of the first checks into Stripe, then one of the first checks into, like, SpaceX. Along the way he had built and sold a company for $1 billion to Palo Alto Networks. And I was just like, this is an insane trajectory. So I sent him a cold email. And I was just like, hey, I'm Matan. I studied physics at Princeton, wrote a paper with this guy named Juan Maldacena, who's, like, a very famous string theorist. And I was like, would love to talk. And that day he immediately replied and was like, hey, come down to our office in Menlo Park, let's chat. What was supposed to be a 30-minute meeting ends up being a three-hour walk. And we walk from Sand Hill all the way to Stanford campus and then back. And funny enough, on the walk, we realized that we had a lot of very similar reasons for getting into physics in the first place, and similar reasons for wanting to leave as well. And this was in April of 2023, so just after the Silicon Valley Bank crisis, and also very soon after the Elon Twitter acquisition. And after the conversation, he was basically like, Matan, you should 100% drop out of your PhD, and you should either join Twitter right now, because if you voluntarily go to Twitter of all times now, that's just badass, it looks great, you know, on your resume, or you should start a company. And I knew what the answer was, but didn't want to, like, corrupt what was an incredible meeting. So I said, okay, thank you so much, I'm going to go think about it.
Ed Elson
Good advice for meetings. Don't give your answer right away.
Matan Grinberg
Yeah, yeah.
Ed Elson
Take some time. Come back.
Matan Grinberg
Yeah. And so, crazy thing, the next day I go to a hackathon in San Francisco. At this hackathon, I run into Eno. We recognized each other and we're like, oh hey, you know, I remember you. We ended up chatting and realizing that we were both really obsessed with AI for coding. And that day, we started working on what would become Factory. He had a job at the time; I was a PhD student, so I could spend whatever time I wanted on it. And over the next 48 hours, we built the demo for what would become Factory. Called up Sean and I was like, hey, I was thinking about what you said. I have a demo I want to show you. And so we got on a Zoom. I showed it to him. He was like, this is all right. And I was like, all right, like, I think this is pretty sick, I don't know. He's like, okay, would you work on it full time? And I was like, yeah, 100%. And he was like, okay, drop out of your PhD and send me a screenshot. And I was just like, fuck it, okay. So I go to the Berkeley portal and fully withdraw. Didn't tell my parents, obviously. Send him a screenshot. And he's like, okay, you have a meeting with the Sequoia partnership tomorrow morning. Like, be ready to present.
Ed Elson
Wow. So now, backed by Sequoia, you just raised your Series B. You are one of the top AI coding startups. But there are a lot of AI coding companies. We spoke with one a while ago, Codeium, which eventually became Windsurf and got folded into Google in this kind of controversial situation. Point being, there are people who are doing this. What makes Factory different? What made it different from the get-go, and what makes it different now?
Matan Grinberg
Our mission from when we first started is actually the exact same as it is today, which is to bring autonomy to software engineering. I think when we first started, in April of 2023, we were very early. And what I've come to realize, and this is kind of a little bit of a trite statement, is that being early is the same as being wrong. And we were wrong early on in that the foundation models were not good enough to fully have autonomous software development agents. And so in the early days, I think the important things we were doing were building out an absolutely killer team, which we do have (everyone that we started with is still here, which has been incredible), and developing a deeper sense of how developers are going to adopt these tools. So that was the early days. And I think something that we learned, that to this day I don't really see any other companies focus on, is the fact that coding is not the most important part of software development. In fact, as a company gets larger and as the number of engineers in a company grows, the amount of time that any given engineer spends on coding goes down, because there's all this organizational molasses of needing to do documentation and design reviews and meetings and approvals and code review and testing. And so the stuff that developers actually enjoy doing, namely the coding, is actually what you get to spend less time on. And then these companies emerge saying, hey, we're going to automate that one little thing that you sometimes get to do and enjoy, so you don't get to do that anymore. So your life as a developer is just going to be reviewing code or documenting code, which I think really misses the mark on what developers in the enterprise actually care about.
And I think the reason why this happens is because a lot of these companies have, in their composition, the best engineers in the world, graduating from, you know, the greatest schools, and they join startups. And at a startup, if you're an engineer, all you do is code. And so there's kind of this mismatch in terms of empathy for what the life of a developer is. Because, you know, if you're a developer at one of these hot startups, yes, coding, speed that up, great. But if you're a developer at some 50,000-engineer org, coding is not your bottleneck. Your bottlenecks are all these other things. And with us focusing on that full, end-to-end spectrum of software development, we end up hitting more closely to what developers actually want.
Ed Elson
I know Satya Nadella said something like 30% of code at Microsoft is being written by AI right now. I think Zuckerberg said that he's shooting for, I think, half of the code at Meta to be written by AI. You're basically saying what software developers want is not for someone to be doing the creative part, but they want someone, or an agent, or an AI, to be doing the boring drudge work. What does that drudge work actually look like? You said sort of reviewing code, documenting code. In what sense is Factory addressing that issue?
Matan Grinberg
Even the idea that 30% of code is AI-written, I think, is a very non-trivial metric to calculate. Because if you have AI generate, like, 10 lines and you manually go adjust two of them, do those two count as AI-generated or not? So there's some gray area there, but.
Ed Elson
You think that they're kind of just throwing numbers out there a little bit.
Matan Grinberg
It's just very hard to calculate. And so even if you were trying to be as rigorous as possible, I don't know how you come up with a very concrete number there. But regardless, I think that directionally it's correct that the number of lines of code that's AI-generated is strictly increasing. As for the way that Factory helps: so, generally, the software development life cycle at a very high level looks like, first, understanding, right? You're trying to figure out what is the lay of the land of our current code base, let's say, or our current product. Then you're going to have some planning, whether it's, like, a migration that we want to do, or a feature, or some customer issue that we want to fix. You're going to plan it out, create some design doc, and then you're going to go and implement it, so you're going to write the code for it. Then you're going to generate some tests to verify that it, you know, is passing some criteria that you have. There's going to be some human review, so they're going to check to make sure that this looks good, and then you might update your documentation, and then you kind of push it into production and, you know, monitor to make sure that it doesn't break. In an enterprise, all of those steps take a really, really long time, because, you know, the larger your org, if it's 30 years old, there are all these different interdependencies. Like, imagine you're a bank and you want to ship a new feature to your mobile app. There are so many different interdependencies that any given change will affect. So then you need to have meetings, and you need approvals from this person, and this person needs to find the subject matter expert for this part of the code base. And it ends up taking months and months. And so where Factory helps is that a lot of the things that don't seem like the highest leverage are what they spend a lot of time on.
So, like that testing part or the review process, or the documentation, or even the initial understanding. I cannot tell you how many customers of ours have a situation where there was like one expert who's been there for 30 years who just retired. And so now there's like literally no one who understands a certain part of their code base.
Ed Elson
Yeah.
Matan Grinberg
And so getting some new engineer to go in and do that, there's no documentation. So now that engineer has to spend six months writing out docs for this, like, legacy code base. And, you know, engineers spend years of their lives becoming experts; the highest leverage use of their time is not writing documentation on existing parts of the code base. In a world where an org has Factory fully deployed, that engineer can just send off an agent (our agents are called droids), send off a droid to go and generate those docs, ask it questions, get the insight as if it was a subject matter expert that's been there for 20 years, so that they can go and say, okay, here's how we're going to design a solution, here's how we're going to fix whatever issue is at hand.
Ed Elson
These droids, your agents that you call droids, I think one of the big differentiators that I've seen is that they are fully autonomous. They're doing basically everything on their own, in contrast to something like Copilot, which is by definition working alongside you to help you figure things out. You guys are saying, no, these things can be completely on their own, totally autonomous. Literally, you've got robots just doing the work for you. Why is that the way to go?
Matan Grinberg
At a high level, with AI, this is true for code, but I would also say for knowledge work more broadly. But for code in particular, we're going from a world where developers wrote 100% of their code to a world where developers will eventually write 0% of it. And we're basically changing the primitive of software development from, like, writing lines of code, writing functions, writing files, to the new primitive being a delegation: delegating a task to an agent. And so the new important thing to consider is, you know, you can delegate a task, but if it's very poorly scoped, the agent will probably not satisfy whatever criteria you had in your head. And so if this new primitive is delegation, your job as a developer is to get good at: how can I very clearly define what success looks like, what I need to get done, what testing it should do, what the organization's contributing guidelines are, let's say. And so with this as the new primitive, the job of the developer is now: okay, if I set up the right guidelines and I tell this agent to go, it now has the information it needs to succeed on its own. And this is very similar to human engineer onboarding. When you onboard a human engineer into your organization, what do you do? You don't just throw them into the code base. You'll say, hey, here's what we've built so far, here's how we build things going forward, here's our process for deciding on what features to build, here's our coding standards. So you have a long onboarding process. Then you give them a laptop so they can actually go and write the code and test it and run it and mess around with it before they actually submit it. And so we need to do similar things with agents, where we give them this thorough onboarding process and an environment where they can actually test the code and, you know, mess around with the code to see if it's working.
And having that laptop, it now has this autonomous loop that it can go through, where it tries out some code, runs it, oh, that failed, let me iterate based on that. Now, what we do have are not, like, fully autonomous droids, but the point is that, giving people access to this, they can set up droids to fully generate all of their docs for them. So now, as an engineer, that's just something you don't need to worry about, because that's not the highest leverage use of your time. Thinking instead about this behavior change towards delegation, that's, like, the biggest thing that we work with enterprises on.
Ed Elson
I think delegation is the right word, but it's also kind of a scary word, because delegation implies, I mean, the way that we work today, you delegate to other people whose jobs are to do all of the things that you're describing. There are some companies that say AI is going to be your partner and work alongside you. You're saying, actually, no, this is just going to do the work. That is, it would replace people. And this is obviously a big debate in AI, the automation debate. What happens to the four and a half million software engineers? What is your viewpoint on this automation debate and the idea that AI is going to take your job?
Matan Grinberg
At a high level, I will say AI will not replace human engineers. Human engineers who know how to use AI will replace human engineers who don't. And I think the reason AI will not replace human engineers is because there's basically a bar for how big a problem needs to be in order for it to be economically viable for someone to implement a software solution to it. Suppose it used to be a billion dollars, and then slowly it's gone down to $100 million, or $10 million. These are, like, the TAMs of the problems that make it economically worthwhile to build up a team of software engineers to work on a problem. What AI does is lower that bar. So now, in a world where before you could only viably solve a problem that's worth $10 million, maybe now it's $100,000. Now maybe large enterprises can actually make a lot of custom software for any given customer of theirs. It means that the leverage of each software developer goes up. It does not mean that the number of software engineers goes down. It would only mean that if there were one company in the world that had access to AI, because then they could use AI while no one else does, and now they have way more leverage, so they can beat their competitors while having fewer humans. Right? But the reality now is, if there are two companies and they're competing, one has a thousand engineers, the other has a thousand engineers, and they both get AI, so now they have the equivalent output of 100,000 engineers. They're not going to start firing engineers, because now one company is going to be way more productive than the other. They'll deliver a better product, a better solution, lower cost to their customers, and then they're going to succeed. So then this other company is going to be incentivized to have more engineers, right?
Ed Elson
Yes.
Matan Grinberg
So I think that's one side of it. I think the other is, we're really bad at predicting what we can do with these tools. Because right now, humanity has only seen what, loosely, a hundred thousand software engineers working together can build. That might be, let's say, the cloud providers; those are some of the largest engineering orgs, something that took, let's say, 100,000 engineers to build. We don't even know what the equivalent of 10 million human software engineers could build. Like, we can't even conceive of what software is so intricate and complicated that it would take that many engineers to build. And I kind of refuse to believe that 100,000 is the limit, that there's no interesting software after that.
Ed Elson
Yeah, I'm really glad you brought up the point that the danger here is one company owning all of the AI. Like, the problem isn't value creation. I mean, what we're describing is technology bringing the costs down and therefore creating more incentives to build, more value creation, which can only be a good thing unless it is in some way hijacked, and you don't have a system of capitalism where companies are really competing with each other and forcing each other to iterate, and which includes many different players who can participate in that value creation. And when I look at the AI space right now, just as an example: when we interviewed what is now Windsurf, I asked the founders this question of, how do you compete with big tech? And they explained how they were going to do it and how they were going to take big tech on. And then, what do you know, Google buys them. And I look at the same thing with Scale AI, which was one of the biggest AI startups. Alexandr Wang was this incredible thought leader, and then, what do you know, they get an investment which turns into: he's now an employee at Meta, and now he's building, you know, Meta social media AI. And it all seems as though AI is being kind of overridden or taken over by big tech through these investments, which then turn into these sort of acqui-hires. And it does make me concerned that all of the power and all of the value is accruing in one place, and it's the same players that we've had over the past 20 years. So how do you think about that? Do you think about the possibility that maybe big tech comes in and says, we need your software, we need your people, we're going to acquire you? And do you worry about that concentration of power in AI?
Matan Grinberg
I think it's a very top-of-mind thing for people, even from the investor side: is it going to be the incumbents that win, or will it be the insurgents, however you want to put it, the startups that can come and kind of claw their way into surviving without acquisition? I think the answer is always founder- and company-dependent. Some examples that come to mind are Airbnb and Uber. These are companies where there wasn't a very obvious gap in the market such that anyone could start a company like Airbnb or Uber and just, you know, succeed. I think it took a lot of very intentional and very relentless work, in the face of tons of adversity, to actually make those companies viable and successful. And in a lot of these cases, it is the choice of the founders or the companies to either continue or, you know, proceed to joining big tech. And at the end of the day, it really does depend on how relentless you are willing to be to actually fight that fight. Because I think both of those acquisitions were optional.
Ed Elson
Yes.
Matan Grinberg
Like, I don't think they were backs-against-the-wall, had no other choice. I think it was, for whatever reason, and I don't know the exact details of either of these situations, but it was like, you know what, based on the journey so far, let's elect to do this.
Ed Elson
Presumably because they were offered so much money. I mean, when I look at Meta, hiring all of these AI geniuses, and I assume this is probably a concern for Factory and many other AI startups, what if Meta just hires our people? And I wonder if it's because these companies are so dominant, they have so much money that they're like, here, here's a billion dollars. And it's hard to say no to. Totally.
Matan Grinberg
Yeah. But I think if you, like, went back in time and you offered, let's say, Travis Kalanick a lot of money.
Ed Elson
He'd say no.
Matan Grinberg
Yeah, because he was like, that was the mission. And I think similarly at Factory, we are super focused on people that are very mission-driven. If you want to make a ridiculous amount of money, you can go to Meta, you can go to, you know, one of those places. The people who have joined our team have chosen this mission, with this team in particular, for that reason. And I think that's what it takes ultimately, at the end of the day, because we do not want to be acquired, we do not want to be part of big tech, because I think they don't have the tools to solve the problem in the way that we want to solve it.
Ed Elson
Yeah, it sounds like what AI needs, in order for there to be real competition, is a founder who wants to go to bat and who wants to fight, essentially, who doesn't want to get, I guess, in bed with big tech. But, I mean, one of the big themes that we've been seeing with AI recently is, of course, this circular financing stuff, where these companies are investing and then the money comes back to them when the recipients buy their products. And it's hard to see the competition actually happening when you see everyone kind of collaborating with each other. How do you think about that? And how do people in Silicon Valley view that right now? I mean, you're very tapped into Silicon Valley; Sequoia, one of the top firms, is one of your investors. Are they concerned about it?
Matan Grinberg
People definitely make a lot of jokes about, like, the circular investing and that sort of thing. I mean, on one hand, I get it, because there is a lot of interdependency among all these companies and there is a lot that they can do together, which I think on one hand is a good thing.
Ed Elson
Yeah.
Matan Grinberg
On the other hand, it's a little bit inflationary to some valuations, or revenue numbers, or these types of things. I think on net, AI will be so productive that it won't matter that much. But short term, it is a little bit eyebrow-raising, I guess. But at the end of the day, if you're, let's say, a foundation model company, you need to get the direct deal with Nvidia because you want the GPUs. So it's just one of those things that you kind of have to do, and I guess I'm not sure what an alternative would look like in a dynamic where you have four or five foundation model companies (let's ignore Google, because they can make their own stuff) who are really competing over the GPUs in order to make the next best models.
Ed Elson
We'll be right back.
Shopify Ad
Support for the show comes from Shopify. If you run a small business, you know there's nothing small about it. As a business owner myself, I get it. Every day there's a new decision to make, and even the smallest decisions can feel massive. What can help the most is a platform with all the tools you need to be successful. A platform like Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US, from household names including Mattel and Gymshark to those brands just getting started. That's why they've developed an array of tools to help make running your small business easier. Their point of sale system is a unified command center for your retail business. It gives your staff the tools they need to close the sale every time, and it lets your customers shop however they want, whether that's online, in store, or a combination of the two. Shopify's first-party data tools give you insights that you can use to keep your marketing sharp and give your customers personalized experiences. And at the end of the day, that's the goal of any small business owner: keeping your customers happy. Get all the big stuff for your small business right with Shopify. Sign up for your $1 per month trial and start selling today at shopify.com/propg. Go to shopify.com/propg. That's shopify.com/propg.

Support for the show comes from Gruns. Even when you do your best to eat right, it's tough to get all the nutrition you need from diet alone. That's why you need to know about Gruns. Gruns isn't a multivitamin, a greens gummy, or a prebiotic. It's all of those things and then some, at a fraction of the price. And bonus, it tastes great. All Gruns daily gummy snack packs are vegan; nut, gluten, and dairy free; with no artificial flavors or colors. And they're packed with more than 20 vitamins and minerals, made with more than 60 nutrient-dense ingredients and whole foods. Gruns ingredients are backed by over 35,000 research publications, and the flavor tastes just like sweet tart green apple candy. And for a limited time, you can try their Grunny Smith apple flavor, just in time for fall. It's got all the same snackable, packable, full-body benefits you've come to expect, but this time it tastes like you're walking through an apple orchard in a cable-knit sweater, warm apple cider in hand. Grab your limited edition Grunny Smith Apple Gruns, available only through October. Stock up, because they will sell out. Get up to 52% off when you go to gruns.co and use the code propg.

Support for the show comes from Betterment. Nobody knows what's going to happen in the markets tomorrow. That's why, when it comes to saving and investing, it helps to have a long-term approach and a plan you can stick to. Because if you don't, it's easy to make hasty decisions that could potentially impact performance. Betterment is a saving and investing platform with a suite of tools designed to prepare you for whatever is around the corner. Their automated investing feature helps keep you on track for your goals. Their globally diversified portfolios can smooth out the bumps of investing and prepare you to take advantage of long-term trends. And their tax-smart tools can potentially help you save money on taxes. In short, Betterment helps you save and invest like the experts, without having to be an expert yourself. And while you go about your day, Betterment's team of experts are working hard behind the scenes to make sure you have everything you need to reach your financial goals. So be invested in yourself. Be invested in your business. Be invested with Betterment. Go to betterment.com to learn more. That's B-E-T-T-E-R-M-E-N-T dot com. Investing involves risk. Performance not guaranteed.
Ed Elson
We're back with First Time Founders. In terms of AI legislation, there seems to be a lot of debate right now on how you regulate AI, and California is trying to be a leader in regulating it. What are your views on AI regulation? Are people going over the top trying to regulate? Is it warranted? How do you think about that?
Matan Grinberg
Maybe just to draw some parallels: in my mind, I view things like climate regulation, nuclear regulation, and AI regulation to be similar in that they are global problems, and local regulation doesn't really matter. Pick any one of those three. If you make rules that in California you can't have a gas car, or you can't build nuclear weapons, or, in the extreme, you can't build AI in California, that doesn't really matter, because it says nothing about the rest of the world. And if the rest of the world does it, it affects what happens in California, for climate, for nuclear, for AI. So I think for AI in particular, honestly, regulating AI state by state just doesn't matter, at least at the macro level. Maybe in terms of usage for interpersonal things, sure. But in terms of training models, the relevant stage in my mind is the global stage: how does it affect US regulation versus European regulation versus China's? From what I've seen thus far, the time spent on state regulation is kind of wasted, at least as it relates to foundation models.
Ed Elson
I think there is a concern, probably, in Silicon Valley that everyone's so afraid of AI. I mean, I've seen these surveys showing more than half of Americans are more worried about AI than they are excited. I guess that's something to philosophically tackle on your end, because you're building it. But then I would imagine that in Silicon Valley there's this feeling that everyone's just too scared because they've watched all these movies, they've watched Terminator, and so people are getting too worried about it, to the point that we are regulating in a way that actually doesn't make sense.
Matan Grinberg
It's pretty interesting. Two things come to mind. One, there's the classic phenomenon where you're a startup, you want no regulation; then you become big, and suddenly you want regulation. And we've seen that happen with basically every foundation model company, which is always a shame to see. And the second, and this is more just a comment on Silicon Valley and some of the culture there: I know so many people who work at the foundation model labs who don't have savings. They just do not believe in putting any money in their 401(k). They spend it all because of this vision that something's coming. Which is very weird. But then there are equally as many who work at these labs who are like, you know, these guys kind of drank too much of the Kool-Aid. It's really important to have these conversations and think about these things. It reminds me a lot of theoretical physics, thinking about the Big Bang and black holes and the universe. The first time you think about it, it's kind of a scary existential crisis: what is everything? We're in such a large universe, nothing has meaning, whatever. Thinking about AI getting exponentially better leads to similar existential questions: what are we? What value do humans have if there's going to be something that's smarter than any one of us? And then you have the maturity of: wait, intelligence is not why humans have value. That's not the source of intrinsic value. We don't think someone's more valuable because they're smarter. So having these conversations and thought processes is, I think, very important for both people working in AI and people who aren't. But yeah, there are some pretty weird people who are really, really in the bubble, inundated in it, and who get these interesting worldviews of, you know, the singularity's coming.
So I want to, you know, spend everything that I have now. Yet at the same time, if they think it's not going to be good, they remain working on it.
Ed Elson
So these AI engineers who are not saving any money, they're doing it because they think the end of the world is coming, or because they think there's going to be some transformative event that will make them really rich? Is it more of a doomer perspective or...
Matan Grinberg
It's a pretty big mix. Some people think we will just be in a world where we're post-economic and money will be irrelevant, and for anyone there will be some base level, whether it's some UBI-type thing. And some have the doomer perspective. It's pretty bizarre.
Ed Elson
It sounds irrational to me.
Matan Grinberg
Yes, I would agree.
Ed Elson
Okay, you'd agree?
Matan Grinberg
Yes.
Ed Elson
Yeah. And I think it brings up an interesting thing in AI, which is: there's this incredibly transformative, once-in-a-generation technology that has come along, and when that happens, it causes humans to act strangely. That behavior, not saving while you're building AI because you think it's going to mean some event that could either end the world or dismantle the system; maybe they're onto something, but to me it seems irrational. And I also think it says something about the potential of a bubble that is emerging, which a lot of people in the last few weeks have been getting more and more concerned about, and that more and more people seem to believe in. I think Sam Altman himself said the word bubble. There have been other tech leaders saying that. As someone who is building in this space, how do you think about that? Does it concern you, or is it something that you're not too worried about?
Matan Grinberg
Obviously, just to be a responsible CEO, I need to have priors that there is some chance that something like that happens in the broader economy, where there are some corrections. But my priors are very low, in particular because the ground-truth utilization of GPUs is just fully, fully saturated now. It would be one thing if we were building out all these data centers for the dream of, okay, we're going to saturate this compute someday. But we are doing that today, and people are still hungry for more of that compute. Now, I think there's a good argument that a lot of compute is subsidized. Nvidia might subsidize the foundation model companies, the foundation model companies subsidize companies like us and maybe give us discounts on their inference, and we might subsidize new growth users. And there's a little bit of that; that's the part where there's a concern, actually drawing a comparison to Uber. I don't know if you remember when Uber first came out, rides were super cheap because they were very much subsidized.
Ed Elson
VCs were paying for us.
Matan Grinberg
So the LPs, all the pension funds, were basically subsidizing people's Ubers in a very indirect way. And people sometimes make jokes about that even as it relates to LLMs. The reason I'm less concerned is that the ROI is just so massive, and the productivity gains, in particular from coding. The fact that we have built Factory with basically fewer than 20 engineers, that is something that, pre-AI, we just would not have been able to do. And so I think the leverage that people are getting is what makes me less concerned. And also the speed of adoption. I think even some of these enormous enterprises that we're speaking with, they missed mobile by like five years.
Ed Elson
Wow.
Matan Grinberg
But for AI, they're on it because they know if we have 50,000 engineers, we need to get them AI tools for engineering because of how existential it is.
Ed Elson
If there is a correction, and the way I see it is there will be a correction that won't wipe out AI like some people seem to think, but it'll be similar to the Internet. There's a correction, valuations come down, there is some pain, and then long term, you will see massive adoption and massive value creation. That's just my perspective. Say there is a correction. Who wins in that scenario? What happens to OpenAI? What happens to startups like yourself, like, who are going to be the winners and losers in that scenario where we do see some sort of pullback?
Matan Grinberg
So one core principle is: Jensen always wins. For the next few years, Jensen's going to stay winning. So that's, I think, not going to change.
Ed Elson
And why do you say that? Because he's just at the very base of the value chain?
Matan Grinberg
Yes. And at the end of the day, all of these circular deals come back to Nvidia. Anytime anyone announces, hey, we're doing free inference, that's free, but someone's paying Jensen at the end of the day. So I think that's one baseline there. Another, and this actually relates to what we were talking about earlier about these companies and the acquisitions, is how many startups there are. There was a period, which I think has been dying down at least a little bit in San Francisco, where if you were an engineer who worked in AI for a month, you basically just got a term sheet stapled onto your forehead the second you left and showed up to a VC. Which I think is not good, because you don't get the Travis Kalanicks or the Brian Cheskys in a world where you're encouraged to do things like that. Anytime anyone asks me, hey, Matan, I'm thinking about starting a company, I will always say no. Always. Because if me saying no discourages you from starting a company, then you absolutely should not have done it. And I think there's almost too much help and too much, yeah, you know, go do it, go start it. Because then it leads to some of these things we were talking about, where the second the going gets tough, it's like, all right, acquisition time. And this is maybe my localized view, because I live in San Francisco and that's what I see more day to day than some of the more macro trends. But I think the first place we would see a correction like that is in, I mean, coding, for example. There are like a hundred startups in the coding space. Perhaps fewer will be funded, because it's like, hey, at this point maybe it's not as relevant. Or the nth AI personal CRM; that's another one where there have been like a million companies.
The correction might look like, at least at that level, funding being a little more difficult, let's say. And the way that relates to the foundation model companies is, I think eventually you'll get to a point where they can subsidize inference less, which just means growth probably slows. Like OpenAI and Anthropic: their revenue has been ridiculously large, but the margin on that has been pretty negative. And so it's basically, how long can you subsidize and deal with that negative margin? There are obviously legendary examples. Uber is a great example, Amazon's an example, where you can operate at a loss for a period of time in order to build an absolute monster of a company and then just turn on margin whenever you're ready. The question is, how long can you sustain that? And so if there were a correction, I think that would affect that.
Ed Elson
Yeah, it does feel increasingly that the danger of AI isn't adoption or technology, it's a timing and financing problem. And I look at OpenAI and the amount that they're spending, and I'm starting to believe that the AI companies who are going to win are the ones who manage their balance sheets the best. And it's really going to be a question of financial management, because of the thing that you say there, where all of this money is being plowed in. And it is a question of how long you can go at an operating loss, which Uber crushed, Amazon crushed, and there were many other companies that died that did not crush it from that perspective. So it will be really interesting to see how that plays out. As someone who is building in Silicon Valley, in San Francisco, you've built this incredible company that's generated a ton of heat and press. You are in AI. What does that feel like? What does it feel like to be one of the AI people? Does it feel like you're in some special moment in time? What is it like?
Matan Grinberg
It feels very much like we are still in the trenches, because there is a ton that we want to do and that we need to get done. I think for me, the most surreal thing is the team that we've assembled, every day coming in person to our office in San Francisco. It is such a privilege working with, now, 40 of the smartest people that I've ever met in my life. You know, we're in New York right now; we're starting to open up an office here. I think that's where it's a little bit like, whoa, we now have two offices on opposite sides of the country. It's more just, I think it's really cool to see over the last two and a half years how dedicated effort can actually build something that is concrete and meaningful. And with some of the largest enterprises that we're working with, it's just kind of crazy to sometimes stop for a bit, when it's not the non-stop grind, and think: this organization now doesn't have to deal with these problems because of something that we built, because of this random cold email, because of this random hackathon where I met someone. I think it's a very cool, visceral reminder that you can do things that affect things.
Ed Elson
Yes.
Matan Grinberg
And if you are really driven by a good mission, you can make people's lives better in relatively short order. And I think that's a really empowering thought.
Ed Elson
What is something that you think the American population sort of gets wrong about AI, and also about AI founders and the people building this technology?
Matan Grinberg
Most of the world only knows ChatGPT; very few people know about anything else. In San Francisco, everyone's like, oh, which model is better? OpenAI, Anthropic, Google Gemini. For the rest of the world, it's basically just ChatGPT, which I think on one hand is interesting.
Ed Elson
Wow.
Matan Grinberg
I think on the other hand, it is really important for basically every profession to rethink your entire workflow. In fact, I would say it's almost an obligation to basically take a sledgehammer to everything that you've set up as your routine and how you do work, and rethink it with AI. For me, this is actually something that's really important, because I'm like the most habit-oriented, routine person, and every few months I'm like, let me try and see how I could do this differently with AI. In a way that's not, oh, technology's taking over, but more just that it makes things more efficient and faster and more convenient. So I think that's one thing: there is so much time that can be saved by spending a little bit of time trying out these different tools, whether it's something like ChatGPT or, if you're an engineer, trying out something like Factory. I think regarding AI founders, it's hard to say, because there are so many tropes that unfortunately can be really true sometimes. And sometimes it's even frustrating to me, because I grew up in Palo Alto and hated startups. Hated it. In middle school we would spend time walking around downtown Palo Alto, and I have a very concrete memory of when Palantir moved into downtown Palo Alto, and there were all these people in their Patagonias with the Palantir logo. And I remember looking so scornfully at all these people walking by with these Patagonias. But yeah, I think the thing is maybe less about what the rest of the world gets wrong about AI founders, and more about some of these AI companies: it's really important to leave San Francisco, exit the bubble. It's a cliché, but touch grass, go see the real world.
Because while San Francisco is very much in the future, you know, I've taken a Waymo to work for the last two years, the rest of the world is still kind of how it was in San Francisco five years ago. And I think it's important to have that grounding, because if you don't leave and you don't have that grounding, you could do things like not put money in your 401(k), right? Not that you need to put it in your 401(k), but you kind of get these little warped perspectives sometimes.
Ed Elson
That is really interesting. I mean, this idea that there is a bubble connotes the wrong thing, but there is an echo chamber. And the fact that you're building and you're saying, you know, we're building these offices in New York, and the thing that is important for AI, and I think is probably really true, is to go out into the world and understand what are some real use cases where this is really going to provide value for people, not just in your enterprise SaaS startup in San Francisco, but anywhere else throughout America. Does that worry you? As you go further up the chain of power and command in Silicon Valley, does it worry you perhaps that people at the very top aren't doing that enough? They're not getting out there and understanding what this technology really needs to be and do for America?
Matan Grinberg
I would say yes. And I think that's also just a very common problem generally: as organizations scale or get more powerful, the people running those organizations inherently get separated from the ground truth of, let's say, the individual engineers or the individual people who are going and delivering that product to people. And I think similarly they lose touch with their customers as well. I think the best leaders have really good communication lines towards the bottom, towards the people, the customers they're serving, and the people who are in the trenches, hands on, doing the work. And I think you probably end up seeing this in the results of a lot of these companies, because it's hard to be a successful company if you don't have some of that ground truth. Any good leader, I think, should be concerned about that and should always be paranoid: am I surrounded by yes-men? Am I in an echo chamber where I'm not getting the real ground truth? So that is something that's on my mind.
Ed Elson
We'll be right back.
Adobe Express Ad
Support for the show comes from Adobe Express. With social media, email, and a growing variety of online ads, there are more touch points than ever between your business and its customers. Adobe Express is here to make sure your smallest touch point is as polished, impactful, and on-brand as the biggest. The brand kits in Express make following design rules a breeze. Templates for flyers, banners, emails, social posts, and more have all the professional quality Adobe is known for. And generative AI that's safe for business gives everyone the ability to make images, rewrite text, and produce effects using simple text prompts. You can create campaigns, resize ads with a click, and even translate content automatically. Work that used to take weeks now takes minutes or even seconds. Adobe Express also makes collaboration, approval, and sharing easier, so any team can become a well-oiled content machine. And if you're leading your team, you can monitor it all from your admin console. That means you have one centralized place where you can ensure that every asset is right and that everyone is synced. Go from fragmented to business-friendly: switch to the quick and easy app to create on-brand content, Adobe Express. Learn more at adobe.com/express. True story: I've actually used Adobe Express, and I was genuinely impressed with how easy it is to create professional content that you can immediately push out. Support for the show comes from Crucible Moments, a podcast from Sequoia Capital. We've all had pivotal decision points in our lives that, whether we know it or not at the time, changed everything. This is especially true in business. Did you know that autonomous drone delivery company Zipline originally produced a robotic toy? Or that Bolt went from an Estonian transportation company to one of the largest ride-share and food-delivery platforms in the world? That's what Crucible Moments is all about.
Deep diving into the make-or-break moments that set the course for some of the most important tech companies of our time, with interviews from some of the key players who made these companies a success. Hosted by Sequoia Capital's managing partner, Roelof Botha, Crucible Moments is back for a new season with stories of companies as they navigated the most consequential crossroads in their journeys. Hear conversations with the leaders at Zipline, Stripe, Palo Alto Networks, Klarna, Supercell, and more. Subscribe to Season 3 of Crucible Moments and catch up on Seasons 1 and 2 at CrucibleMoments.com, on YouTube, or wherever you get your podcasts. Listen to Crucible Moments today. Support for the show comes from Hims. Hair loss isn't just about hair, it's about how you feel when you look in the mirror. Hims helps you take back that confidence with access to simple, personalized care that fits your life. Hims offers convenient access to a range of prescription hair loss treatments with ingredients that work. Choose from oral medications, serums, and sprays. You shouldn't have to go out of your way to feel like yourself. That's why Hims brings expert care straight to you, with 100% online access to personalized treatment plans that put your goals first. No hidden fees, no surprise costs, just real personalized care on your schedule. For simple online access to personalized and affordable care for hair loss, ED, weight loss, and more, visit hims.com/profg. That's hims.com/profg for your free online visit. hims.com/profg. Individual results may vary, based on studies of topical and oral minoxidil and finasteride. Featured products include compounded drug products, which the FDA does not approve or verify for safety, effectiveness, or quality. Prescription required. See website for details, sales restrictions, and important safety information.
Ed Elson
We're back with First Time Founders. Who is, like, AI Jesus right now? Is it Jensen? Is it Sam Altman? Is it Mark Zuckerberg? In San Francisco, who's the guy people revere?
Matan Grinberg
I mean, it's gotta be Jensen. Sam: a lot of wins, some losses. Zuck: a lot of wins, a lot of losses. Jensen, that guy just grinded for 30 years. I remember when I built a computer at home to play video games on a PC, I bought an Nvidia chip, and in my mind it was like, Nvidia, they're the video game graphics card company. Now they are the most valuable company in human history with no signs of stopping. And he just grinded it out for 30 years. It is the most respectable thing. He's also the nicest dude, and he has no, like, he doesn't have enemies.
Ed Elson
So you've met with him, right?
Matan Grinberg
I have. He's extremely generous with his time. He also, this guy knows every little detail about Factory. I don't know how he has the time to do these things, but he is a killer. He's really good.
Ed Elson
When you think about the long-term future of AI: for many years it was, AGI is coming, and think about all the things it can do. Think about how it could solve diseases, think about how it could cure cancer. And then I see Erotica GPT, and I see the Sora AI TikTok feed, and I'm sort of like, what happened to the big vision? We're back to sort of Pornhub meets TikTok, but it's got AI. How do we expand the vision of AI? What is the grand vision for AI? And do you think it's gonna really come true?
Matan Grinberg
Well, so on one hand there's the pure slop that is these AI feeds, Sora or the one that Meta announced, Vibes. On one hand it's very, in a certain weird sense, it is beautiful, in that it is just pure human nature. What do we do when we have really good technology? Let's make porn. That's the first thought. And in a certain sense it's like, okay, I'm glad that even though we're generating all this technology, we're still humans at our core.
Ed Elson
We overestimated ourselves when we thought...
Matan Grinberg
But then on the other hand, there are still people who are doing really great work. One of my friends, Patrick Hsu, who runs the Arc Institute, they're doing AI for biotech research and biology, and I think they're doing a lot of really cool work. And maybe this actually relates to something we were talking about earlier, which is, people at first glance might have a little bit of an existential crisis: intelligence is now commoditized. Some people are saying, you know, we both live in a world where, if we have children at some point, our children will never be smarter than AI. Right? We both grew up in a world where we were smarter than computers, for at least a period of time, and our kids will never know that world. Which is a little bit crazy, because a huge part of growing up is going to college, becoming really smart in some certain area. And so I think now we're having a little bit of a decoupling of human value being attributed to intelligence. But then there's a natural question of, okay, well, we were sold this vision, let's say even the American dream, of: if you work really hard and get really good at this one thing, then you'll have a better life. But now it's like, you're never going to beat the intelligence of this computer. So what is the thing to strive for? And I think this actually relates to the AI porn versus the AI curing cancer, which is: in my mind, the new primitive, or maybe the new North Star for humans, is agency, in which humans have the will instead. Yes, you can hit the hedonism and just watch AI porn and play video games all day. But who has the agency to say, no, I'm going to work on this hard problem that doesn't give me as much dopamine, but, because of the will and agency that I have, I'm choosing to work on this instead?
And I think that might be the new valuable thing: if you have that in large quantities, maybe that's what brings you more meaning.
Ed Elson
Why do you go to agency versus many other things? For example, you mentioned you have a friend who's working on issues in biotech. Maybe that is a question of having the right values. I mean, not to get mushy, but maybe a value would be kindness, or a value would be creativity. There are lots of things out there that you could pick and choose from. Why is it agency in your mind?
Matan Grinberg
I guess the way that I think about it, it's the agency to go against maybe the easiest path for dopamine, or natural human nature: just give me the good-tasting food, the porn, the video games, the easy fun stuff. And I think maybe part of agency has to do with values. If you value creativity and if you value kindness, that is something that might motivate more agency. But agency, at least the way I think about it, is the will to endure something that is more difficult for maybe a longer-term reward, whether that's the satisfaction of bringing better healthcare to people or satisfying that curiosity.
Ed Elson
It's interesting because you say the word agency and you are building agents and there's like a parallel there. And it's almost as if the people who are really going to win are the people who can have some level of command and directive agency over these AI agents. It's the person who isn't just gonna do what they're told by the guy who controls the AI agent and says, okay, create this code. It's the person who can actually tell the agents what to do.
Matan Grinberg
Yes.
Ed Elson
And that's the direction that you believe humanity and work should be headed?
Matan Grinberg
A hundred percent. And I think, if you think back to the people that you've met in your life who come across as particularly intelligent or remarkable in whatever capacity, oftentimes it's not raw IQ horsepower. You'll note that when you meet someone with a high IQ, it's pretty easy to tell. But growing up in the Bay, there are so many who are very high-IQ but aren't that high-agency or independent-minded. And the people who really leave a mark when you remember them, it's like, oh, that person, maybe they weren't even that high-IQ, but they were very independent, high-agency. And I think that now is going to be much more important, because, great, you might be born with a high IQ; everyone has access to the AI models that have this intelligence, so it's not really a differentiator anymore. The differentiator is: do you have the will to use those in a way that no one has thought of before, or in a way that's difficult, to get some longer-term task done?
Ed Elson
It's really interesting, because what you're describing is: what can I do that AI cannot do? And what you're saying is, AI cannot think for itself. It cannot be an independent, creative-minded creature. It can be a math genius, it can solve problems within seconds, but it can't have the willpower to decide: this is what I want to do, this is what is important to me, this is what has value. Which I think is definitely right. We have to wrap up here. I just want to note, I saw a tweet, I think from yesterday, that you put out, and it shows this competition of all the different coding agents. So you've got Cursor, and you've got Gemini, and you've got OpenAI's coding agent. You are number one.
Matan Grinberg
That's right.
Ed Elson
In agent performance.
Matan Grinberg
That's right.
Ed Elson
What does that mean? What does it mean to be number one and how are you going to take that moving forward?
Matan Grinberg
This is a benchmark that basically does head-to-heads of coding agents, and they use an Elo rating system. So it's like chess, where, at a high level, let's say you have 100 losses against someone of equal skill to you, but then you beat Magnus Carlsen; you can end up with an incredibly high chess rating. So this is an Elo rating system where it gives two agents the same task, and then it has humans go and vote which solution they liked better, say the one from Factory versus OpenAI's or Anthropic's. And we have the highest Elo rating. So in these head-to-heads, we beat them, which is pretty exciting. I think it's exciting on a couple of fronts. One, we've raised obviously very little money compared to a lot of the competitors on there. And I think that goes to show that in a lot of these cases, being too focused on the fancy stuff, train the model, let's do the RL, let's do the fancy fine-tuning, sometimes doesn't give you the best ground truth of what is the best-performing thing for an engineer's given task. Benchmarks are very flawed; they're not fully comprehensive of everything an agent can do. But I think it's helpful when developers have a lot of choices out there, to try and say, okay, which one should I use? This one is nice because it's pretty empirical: developers see two options and pick one, and consistently our Droids win, which is pretty fun.
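The Elo mechanics he describes can be sketched in a few lines of Python. To be clear, this is an illustrative toy, not the benchmark's actual implementation; the starting ratings and the K-factor of 32 are assumptions for the example.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32) -> tuple[float, float]:
    """Return updated (r_a, r_b) ratings after one head-to-head vote."""
    e_a = expected_score(r_a, r_b)   # expected score for A
    s_a = 1.0 if a_won else 0.0      # actual score for A
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two agents start equal; agent A wins the human vote.
ra, rb = update(1500, 1500, a_won=True)  # -> 1516.0, 1484.0

# An upset against a far higher-rated opponent gains nearly the full K points,
# which is the Magnus Carlsen point: one big upset can outweigh many peer losses.
ra2, _ = update(1000, 2800, a_won=True)
```

The design choice that makes this work for agent benchmarks is that each human vote is just one game result, so rankings emerge from many noisy pairwise comparisons rather than a fixed test set.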
Ed Elson
Final question, what does the future of Factory look like? What do you think about when you look at the next 10 years?
Matan Grinberg
Ten years is very hard, because AI is pretty crazy and I think humans are bad at reasoning around exponentials. I would say in the next few years, bringing about that mission of a world where developers are able to delegate very easily and just have a lot more leverage. Developers not needing to spend hours of their time on code reviews or documentation. And I think, more broadly, that turns software developers into more like cultivators or orchestrators and allows them to use what they have trained up for so many years, which is their systems thinking. That's what makes engineers so good: they're really good at reasoning around systems, reasoning around constraints from their customers, from the business, from the underlying technology, and synthesizing those together to come up with some optimal solution. And with Factory they get to use that to its fullest extent much more frequently in their day to day. And I think that is a net good for the world, because that means there will be more software and better software created, which means we can solve more problems and solve problems that weren't solved before, which I think on net is just better for the world.
Ed Elson
Matan Grinberg is the founder and CEO of Factory. This was awesome. Thank you.
Matan Grinberg
Thank you, Ed.
Ed Elson
This episode was produced by Alison Weiss and engineered by Benjamin Spencer. Our research associates are Dan Shalon and Kristen O'Donoghue, and our senior producer is Claire Miller. Thank you for listening to First Time Founders from Prof G Media. We'll see you next month with another founder story.
Mercury Ad
Mercury knows that, to an entrepreneur, every financial move means more. An international wire means working with the best contractors on any continent. A credit card on day one means creating an ad campaign on day two. And a business loan means loading up on inventory for Black Friday. That's why Mercury offers banking that does more, all in one place, so that doing just about anything with your money feels effortless. Visit mercury.com to learn more. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust, Members FDIC.
Adobe Acrobat Studio Ad
Adobe Acrobat Studio, so brand new. Show me all the things PDFs can do. Do your work with ease and speed, PDF Spaces is all you need. Do hours of research in an instant with key insights from an AI assistant. Take a template, with a click now your prezo looks super slick. Close that deal, yeah, you want to do that. Doing that, that, did that, done. Now you can do that, do that with Acrobat. Now you can do that, do that with the all-new Acrobat. It's time to do your best work with the all-new Adobe Acrobat Studio.
Adobe Express Ad
Nobody knows your customers better than your team, so give them the power to make standout content with Adobe Express. Brand kits make following design rules a breeze, and Adobe-quality templates make it easy to create pro-looking flyers, social posts, presentations and more. You don't have to be a designer to edit campaigns, resize ads and translate content. Anyone can, in a click, and collaboration tools put feedback right where you need it. See how you can turn your team into a content machine with Adobe Express, the quick and easy app to create on-brand content. Learn more at adobe.com Express Business.
Date: November 2, 2025
Host: Ed Elson
Guest: Matan Grinberg (Co-founder and CEO, Factory)
This episode of "First Time Founders" features Matan Grinberg, a former physicist turned entrepreneur who co-founded Factory, an AI startup building fully autonomous coding "droids" (agents) now backed by Sequoia, JP Morgan, and Nvidia. Ed and Matan, old college friends, delve into the challenges, ethics, and economics of automating software development, the realities of Silicon Valley's AI boom, and what autonomy, agency, and AI mean for the future of work.
[02:57–09:28]
"That day... I showed [the demo] to him. He was like, 'This is all right.'... He's like, 'OK, drop out of your PhD and send me a screenshot.' And I was just like, fuck it." — Matan Grinberg (09:25)
[09:29–12:57]
"As a company gets larger... the amount of time any given engineer spends on coding goes down. Your bottleneck is all these other things. With us focusing on that full, end-to-end spectrum, we end up hitting more closely to what developers actually want." — Matan Grinberg (10:49)
[15:42–18:25]
"We're changing the primitive of software development from writing code to the new primitive being a delegation: delegating a task to an agent." — Matan Grinberg (16:18)
[18:25–21:24]
"AI will not replace human engineers. Human engineers who know how to use AI will replace human engineers who don't." — Matan Grinberg (19:09)
[21:24–27:24]
"We are super focused on people that are very mission-driven. If you want to make a ridiculous amount of money, you can go to Meta... [but] the people who have joined our team have chosen this mission." — Matan Grinberg (25:03)
[31:04–33:23]
"I know so many people who work at the foundation model labs who don't have savings... because of this like vision of, like, something's coming." — Matan Grinberg (33:23)
[35:36–41:46]
"Jensen's going to stay winning... all of these circular deals, they all come back to Nvidia... someone's paying Jensen at the end of the day." — Matan Grinberg (39:16)
[43:01–44:13]
[44:13–46:44]
[52:25–62:51]
"Now we're having a little bit of a decoupling of human value being attributed to intelligence... the new primitive for humans is agency." — Matan Grinberg (54:43)
"In these head-to-heads, we beat them, which is pretty exciting... Sometimes [fancy new AI methods] don’t give you the best ground truth... developers seeing two options and picking them, and then consistently our droids win.” — Matan Grinberg (61:35)
On Founders Selling Too Soon:
"Both of those acquisitions [by Google, Meta] were optional." — Matan Grinberg (24:23)
On AI Hype and Ground Truth:
"If you’re an engineer who worked in AI for a month, you basically just get stapled a term sheet onto your forehead the second you leave." — Matan Grinberg (39:29)
On True Differentiators in the Age of AI:
"Oftentimes it’s not raw IQ horsepower… The differentiator is, do you have the will to use those [AI tools] in a way no one has thought of before..." — Matan Grinberg (58:21)
Matan Grinberg’s perspective, shaped by both deep academic rigor and Silicon Valley scrappiness, provides both a grounded and visionary take on the AI revolution.
The industry, he believes, is at a tipping point: not about engineers being replaced, but about how they will multiply their leverage—if they’re willing to adapt, redefine their workflow, and, above all, retain the agency that sets humans apart from the coming tide of AI droids.
“If you are really driven by a good mission, you can make people's lives better in relatively short order. I think that's a really empowering thought.” — Matan Grinberg (44:05)
For anyone wondering whether AI will steal their job, destroy the economy, or just make more memes and porn, Grinberg’s advice is clear:
Rethink your routine, build your agency, and get out of the bubble—because the winning move isn’t fear, but adaptation.