
A
Hey there agile adventurer, just a quick question. What if, for the price of a fancy coffee or half a pizza, you could unlock over 700 hours of the best agile content on the planet? That's audio, video, e-courses, books, presentations, everything you can think of. You can also join live calls with world-class practitioners and hang out in a flame-war-free, AI-slop-free Slack with the sharpest minds in the game. Oh, and yes, you get direct access to me, Vasco, your Scrum Master Toolbox podcast host. No, this is not a drill. It's the Scrum Master Toolbox membership, and it's your unfair advantage in the agile world. So if you want to know more, go check out scrummastertoolbox.org/membership. That's scrummastertoolbox.org/membership. Check out all the goodies we have for you. Do it now. But if you're not doing it now, let's listen to the podcast. Hello everybody, welcome to this very special bonus episode on AI-assisted coding with today's guest, Lou Franco. Hey Lou, welcome to the show.
B
Thanks, thanks Vasco. Thanks for inviting me.
A
So Lou is a veteran software engineer and also the author of the book Swimming in Tech Debt. Look in the show notes for a direct link to the episode with Lou about the book, if you want to know more, and of course to the book itself if you want to buy it. He has decades of experience at startups as well as at more well-known brands like Trello and Atlassian. He's seen both sides of tech debt, as a coder and as a leader. Today he advises teams on engineering practices, helping them turn messy code bases into momentum. And today we're going to talk about AI-assisted coding. And yes, you guessed it, we're also going to talk about how AI may or may not impact tech debt. But before that, Lou, let's start with an understanding of how you look at AI-assisted coding. How do you define vibe coding today, and how is it different from other types of AI-assisted coding?
B
Sure. The origin of the vibe coding concept was focused on not reading the code: only prompting, generating code, looking at the output, what it looked like or what it did, and then prompting again to change things. When I say vibe coding, that's generally what I mean. The more you're not reading the code, the more I'm going to think of it as vibe coding. That's not the only way to use AI assistance; it's a subset of AI-assisted coding, in my view. The way I use it is more a mode where I'm in the loop: I'm reading all of the code, I'm doing code reviews. I don't tend to do vibe coding, except for very simple things where I don't care about the thing beyond how I'm going to use it immediately, like a simple tool. I understand the term has expanded beyond that, and I'm okay with that. But when I use it, I mean you're not reading the code.
A
So that's a very important distinction, and I want to emphasize this. We've done this in every episode, but just to make it explicit: there are different types of coding with AI, right? Vibe coding is one type, as you defined it: just not reading the code. The way I define it is slightly different, but everybody looks at it slightly differently because it's such a new thing. We don't have established practice beyond that original definition. I think it was even a tweet by Karpathy.
B
Yeah, I think so.
A
Yeah, who talked about a weekend project that he had done with AI and about how great it could be. And of course it can be great, and we're also going to talk about that. So let's dive into that, Lou. Can you share a moment or a project where you first felt that, hey, wait, AI really has the potential to change how we look at code?
B
Yeah. So I trialed GitHub Copilot at the very end of 2022. Back then, the main thing it offered was the autocomplete that gave you three or four lines of code. I went through my free trial and then I bought it immediately, because I was basically using it all the time. Back then it was good enough that it was saving me so much typing. I knew what I was going to code; the coding I was doing wasn't beyond my ability. It was just getting me through it faster and keeping me in the zone: never having to look up what a function's arguments were, or exactly which function to use if I was in an unfamiliar framework. It saved so much time compared to going to Google, looking something up, trying to find the right documentation and trying to understand it. It was immediate, and that alone was enough for me. Since then, I've moved to Cursor, essentially, and I am doing more of the chat- and prompt-led development, especially inline prompting. But the autocomplete is still the bread and butter for me, because I mostly am just coding things myself, just at a much faster pace.
A
So a couple of things come up with this. First, you started with GitHub Copilot, which is quite different from what we have today, and we'll dive into what we have today in a minute. But you also said that these days you use Cursor, with inline prompting. I'll put the link to Cursor in the show notes so people can easily find it. So explain inline prompting for us.
B
Yeah. So you can autocomplete, where you type the code you want; you can do a chat, which will do multi-file work; or, what I like to do, there's a keystroke, I think it's Command+K in Cursor. I'm typing, typing, typing, and maybe it's not getting what I want, or I don't even want to type anything, and it pops up a prompt window right where you are. It's very localized, and what I like about it is that it's fast. It's saying: right here, don't go look at the rest of my files, I'm not referring to anything else, everything you need is right here, the context is right here, this is what I want you to do. I need you to write a function that does this: takes this input, produces this output, does it this way. It's really great as a happy medium between "I'm not going to type any code for you to complete" and "I don't want you to go into a deep thinking pass over all my files and all the context, because you don't know exactly where I want it." It's right here; I'm on the line of code. I really like that mode. The downside is that if it does need something else, it often won't go out to get more context or do anything outside that window. So what you're planning to do has to be very localized.
A
One of the things that I really like about this inline prompting, as you define it in Cursor, is that context is very important for the success of using AI. Right? If for some reason the AI gets the wrong context, it will generate the wrong code. So what I hear you describe with inline prompting is that it gives you a tool to define the context of what you want created as really, really small, and to exclude a lot of other things that might confuse the LLM when it generates that particular function.
B
Yes. And because of that, it's fast. So if you're in a mode where you're more pairing with the AI, it keeps you in that mode. One of the problems is that if you go out to the chat window, you may for whatever reason have put it into a deeper thinking mode, where it might not give you an answer for a few minutes. And that's enough for you to get distracted and then not be totally paying attention to what it's doing. So I tend to like this mode when I think of myself as more of a pair programmer.
A
And this is really useful, because of course we're always discovering how things work and how to make them work better. This inline prompting, which narrows the context, is a technique that you have found very useful. What other things have you learned about what makes AI-assisted coding really work for you?
B
Okay, and I really want to stress "for me," right, because I know people have had success other ways. I tend to think of prompts commit by commit. That's the size of the work I'm trying to do in a prompt, and not, for example, a one-shot "make me a whole website that does this and this and has these features," which I know people do, and which tools like Claude Code seem to be really amazing at. The reason is that I am going to literally read all of the code that is output, and it's really hard to read code all at once when it's a large amount and not in commit order. I'm building a commit history that I think is the story of how this code gets built, step by step. For me to do that, I need to be prompting commit by commit, because I am a developer and I do know exactly what I'm trying to build and how I'm trying to build it. I understand that some people might be doing something where they actually don't know how to do the thing; when I don't know how to do something, I don't do it this way, because obviously it wouldn't work. But for most of the work I'm doing, I do know how to do it, and prompting is just making it much faster. I want my commit history to look exactly how it would look if I made it myself. So I build it that way to begin with, and I'm able to understand the code in that order, take it in, approve it, and do the commit. Therefore, at the end of the process, I have 100% reviewed every single line of code and I believe that it's correct.
A
So it's almost like you're using the LLM to do some of the grunt work, to figure out some of the nitty-gritty, but you give it a narrow contextual assignment, which you call the commit, right? One step towards the final vision or final solution. It also sounds to me that when you work with an AI-assisted coding tool like Cursor or Claude Code, you're making the architectural decisions yourself. You're stepping towards the final solution. Maybe it deviates from your original direction, but you will know when that happens, right?
B
I might refactor it, and then I'm probably going to do that myself. It depends, but often I'm going to do it myself. Also, it might not have noticed that it's doing a little bit of repetition where I would prefer some common functionality to be extracted. Again, I might prompt it to do that or I might do it myself; it's very much an organic back and forth. If I think the prompt will do it faster, I'll let the prompt do it. If I'm working out in my head what I even want, sometimes I work that out in a prompt: I think through what I want, and after I have written that prompt, well, I'll try it and see what it does. It's not that different from how I was programming before there was any AI. Often I would make myself a little list; I called it making a progress bar. I would list the things I was going to try to do in that coding session over the next two or three hours: do this, do this, do this. It was a little to-do list, and I would work it out in a little more detail before I did it. Now, often, those items are good prompts.
A
So have you found that working with the AI has made you significantly faster, if you are basically using the same process as before?
B
Oh yeah, it's a lot faster, just from the speed at which it can generate the code. I type pretty fast, but it's more than typing; it's also navigation. Right? So I have a code base, there are a lot of files, and yeah, I know the basic structure, I know where to find things, but not like the AI does. It can pop around the files super fast, get to the exact position, and find the line of code it has to change. I can do that like a normal human can, but if I have to make a multi-file change, it's just way faster at navigating all that and keeping the whole thing straight.
A
One of the things you were describing is how you guide it in the direction that you want. Maybe you deviate, but more or less you're guiding it through, commit by commit. And this raises another topic. You're the author of Swimming in Tech Debt, so you're intimately familiar with what tech debt is, why it happens, and some good practices for getting rid of it or avoiding it altogether. What's your feeling? First for how you use it, and then for vibe coding, but starting with how you use AI-assisted coding: is it likely to introduce a ton more tech debt than you did yourself before?
B
Okay. Based on the way I've defined how I do it, it's exactly the same amount of tech debt that I would have created on my own, right? Which is not zero. I sometimes intentionally don't do things exactly the way I would want them done, because I might be experimenting and playing, and that code might end up in the code base because I don't go back. That could still happen. It's still on me; I still did that; I read all the code. The fact that I'm faster and can make more code means there will probably be more of everything, including some tech debt. But honestly, I invest some of that savings back into cleaning things up. Okay, I got things done much faster than I expected, so let's do some refactoring. I can do that fast too now. So that messy-code style of tech debt, which is not the only kind, I probably have just about as much as I would have, and maybe honestly a little less, because I have more time to deal with it. But there's another kind of tech debt which is big and architectural: the whole thing is just wrong, it doesn't fit, I learned something along the way that invalidates my ideas. That's still on me. I'm the one in charge of the architecture, and I'm the one causing that, so I don't think of the LLM as being responsible for it. Now, for vibe coders, I think of tech debt in a way that's mechanically different, but the mindset is the same. My definition of tech debt is something like this: you're trying to make a change to the code base that you need for your roadmap, and you're getting resistance in doing that. The old way of experiencing that would be: the architecture is wrong, the code is messy, my idea of what the correct object structure is turns out to be totally wrong.
Those are the kinds of things that cause tech debt for someone who's in the code base. Now, if you're vibe coding and you don't read the code, what is tech debt then? I think a lot of vibe coders are seeing this: when you start asking the AI to do things and it can't do them, or it undoes other things while doing them, or it can't completely do what you asked, you're experiencing tech debt in a different way. It's still the same mindset: you're trying to make changes that are on your roadmap and you're getting resistance. And if you know how to read the code, and I know this because I work with a lot of vibe coders and look at their code, you can see what is happening in the code that is causing the AI to not understand it anymore. What you see is a lot of code repetition. The AI doesn't understand that if it changes one thing, that same thing exists in other places, because it didn't write it in a way that was DRY, following the don't-repeat-yourself principle. Another thing you often see is very tight coupling between different areas of the code, and because of that, when you change something, you break something else, because you don't completely understand how those two things work together. Of course that happens in regular programming too. But it seems to happen more when people are not able to impose an architecture onto the code that the vibe coding is generating. You see a lot more violations of a clean architecture.
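A minimal Python sketch of the repetition problem Lou describes (the function names here are hypothetical, invented for illustration): when the same rule is pasted into two places, a prompt that changes one copy silently misses the other, while the DRY version gives the rule a single home.

```python
# Duplicated validation logic, the pattern Lou warns about: a change
# to one copy (say, allowing a new email form) silently misses the other.
def register_user(email: str) -> dict:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    return {"email": email.lower()}

def invite_user(email: str) -> dict:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    return {"email": email.lower(), "invited": True}

# DRY version: one function owns the rule, and both callers share it,
# so there is exactly one place to change.
def normalize_email(email: str) -> str:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    return email.lower()

def register_user_dry(email: str) -> dict:
    return {"email": normalize_email(email)}

def invite_user_dry(email: str) -> dict:
    return {"email": normalize_email(email), "invited": True}
```

The tight coupling Lou mentions is the same failure at a larger scale: two modules that each assume the other's internals, so neither the AI nor the human can change one safely without reading both.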
A
Okay, so this is one of the aspects, right? When we don't read the code, there are going to be things in the code we didn't even know about. And then maybe, when prompting, we get into these never-ending loops where nothing works, and maybe that's a symptom of the resistance, as you call it: you can't make the change that you want. But there's also the other aspect. You were just saying that even though you basically use the same process you used before, you now feel you are much faster. And that means there's more of everything: more good code, but also more tech debt. I remember, I think it was a social media post, where somebody at Google claimed that we will have 100 times more code. And I'm thinking, okay, if you have 100 times more code, then first, if nothing else changes, you have 100 times more tech debt, just keeping the ratio: more code, more tech debt. But the other thing I thought was that if you have 100 times more code, there's not going to be 100 times more capacity to review and improve that code. That's never going to happen, because the AI will not do that for you. Maybe in the future, but at least for now, the AI will not do that. So it also stands to reason that if we're going to have 100 times more code, we'll probably need 100 times more developers to go in and fix the tech debt, assuming that code matters to anyone, that it actually generates some value for the entity that produced it. What do you think about that?
B
Directionally, I think you're probably right. And again, always with the caveat: this thing is moving fast, and we don't know. Personally, I take responsibility for reading, understanding, and approving all the code. There's a famous IBM quote from early computing, something like: a computer can't be held accountable, so a computer must never make a management decision. A human always has to make the decisions. That is timeless to me. As long as I'm involved in this, I'm making the decisions about what goes in the repository. So you're right, I cannot multiply myself. AI can help me, but honestly, I'm always going to want to read it and approve it. For me, that is true no matter how you multiply these things: there's going to be some bottleneck of me wanting to approve it, or, if I had to, I would get other humans involved to approve it. With assistance, of course.
A
I'm just thinking about all of those developers I've met in my career who hated code reviews, and now that's all they can do: code reviews. I wonder how that will work. Yeah, probably not very well.
B
Especially if people aren't reading it beforehand. I'm a big proponent of the idea that you should code review your own code before you let anybody else review it. So often I'll say: you don't hate code reviews, you hate your colleague's code. With the LLM being a big part of this, you're supposed to have some acceptance criteria on the code before it should be considered reviewable. And like I was saying, I am creating these commits very intentionally to tell a story, a linear story of how to understand the entire change I'm making. If the AI is not doing that, if it's just giving you a lump of code all at once, without any reading order, how am I supposed to understand it? And if humans are creating code that way too, it's also not understandable.
A
And that's of course a big acceptance criterion on your part here: that a human, in this case yourself, will read the code. Because there are, as you said, also people who do vibe coding and don't read the code. For them, it doesn't matter what the code looks like, as long as the LLM can work with it, right? Because you're prompting the LLM into a solution that you've designed in.
B
Your mind. Which, again, if you have no ability to read it or write it and you're able to make something this other way, I'm all for it. That's great; you were not going to be able to do it any other way, so you might as well do that. You are probably taking some risks that you don't completely understand, though. So I meet every week, for an hour, with a vibe coding build hour group. It's public; if you follow me on LinkedIn you can see it. It's 12 noon on Tuesdays, Eastern US time, and we show each other projects. Almost everybody in the group is a vibe coder, not a coder. But I go to give the perspective of what it's like to understand the code, and I show them some of the reasons why they're running into problems and some of the things to think about: some of the issues, security or otherwise, that might be something they want to learn about before they put their code out in public.
A
And that gets us to the next level, which is: we all know these things are only getting better, right? This is the worst they'll ever be. They'll always be improving from now on, not by chance, but because there's a lot of money and a lot of people working on them.
B
Yeah, yeah.
A
So then the other question is: okay, but will AI in the future maybe help us fix the tech debt that we introduced ourselves, in some cases?
B
Yeah. In the short term it can do it with guidance, right? I definitely can help the LLM understand the direction I want to take a code base when it's purely refactoring and testing, the things that you would do to remediate tech debt, and I can definitely get it to do that. Although right now, for me, there are better tools that are not AI: plain refactoring tools, or just cut and paste. The AI tries to read and regenerate code, which is a little lossy, whereas cut and paste is not lossy. So sometimes, if it's something a little complex, I'll just do it myself, and it's a little faster; maybe I'll have the AI finish it at the end. But it's really awesome at generating tests under your control. I don't see any reason why it couldn't start to do these things on its own. I will just say that right now, if you have a very convoluted code base where even humans are having a lot of trouble with it, AI seems to have a lot of trouble with it too. A good first step is to do some of this remediation before you bring in AI, because that will get it going in the right direction. The other thing I wonder about: I don't know exactly how these things are built, but I suspect they're not trained on repository changes so much as on the end-result code. There's no reason why you couldn't train LLMs on good repository practices, and therefore get them to understand not just the end result of the code, but how it evolves from a place where it wasn't good to a place where it was. It doesn't seem to have picked that up very well, so I don't think it's been trained on that as much. But we have so much of that information in all these repositories. If we know of good remediation projects in public code bases, maybe we can use those as a basis for training.
A
Well, one thing that I thought about: many years ago, I don't code anymore, but when I did, I used TDD. And one of the things that I realized is that the refactor step in the TDD cycle was the most important one, right? Because if you just coded it, it was going to look like crap. But if you did the steps, write the test first, run it, it's red; write the implementation, run the test, it's green (not all the time, but sometimes it will be green, great, and if not, you fix it); then refactor and run the tests again. That refactor step was always very important for me, because it was the only way that, I mean, I suck at writing code, but when I did TDD with the refactor step, I wrote code that even I would be proud of, right? So when I was thinking about this, and I've tried to use these tools as well, I realized that we can have that same step in the code that AI generates. We can say: hey, there's some repetition here, let's extract this into a method or a function call; or: hey, this is too complicated, break this long method into smaller methods; whatever the refactoring is, or rename classes, rename variables. Is that something that you've played with, trying to build in that step, in terms of handling, managing, limiting the technical debt that you build when you're coding much faster?
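The red-green-refactor loop described above can be sketched in a few lines of Python (the `slugify` function is a hypothetical example, not from the episode): write the failing test first, make it pass crudely, then clean up while the test guards the behavior.

```python
# Step 1 (red): write the test first. Run before slugify exists and
# it fails, which is the "red" part of the cycle.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Tech  Debt ") == "tech-debt"

# Step 2 (green): the simplest thing that passes. It works, but it
# builds the string by hand.
def slugify_first_pass(text: str) -> str:
    result = ""
    for word in text.lower().split():
        if result:
            result += "-"
        result += word
    return result

# Step 3 (refactor): same behavior, clearer code. The test is what
# makes this cleanup safe, whether a human or an AI performs it.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

test_slugify()  # green before and after the refactor
```

The point of the sketch is the third step: because the test pins the behavior, the rewrite from the loop to `"-".join(...)` can be done, or prompted, without fear of changing what the code does.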
B
You're describing it exactly. Now, I wouldn't say I always do TDD. Sometimes I do and sometimes I don't. Certainly when I have a really good understanding of what I want, but not how to do it, that's where TDD just shines. The LLM is actually also good at this, because it does it with prompting; that's when I tend to build tests first, when I totally know the outputs I want but I don't know exactly how to get them. But that refactoring step is much more opinion and feel. There's so much I can't describe very well about what I want. I might want to try three or four things really quickly, so I tend to do it myself. This is not an area where the typing speed is going to help as much, because I'm more thinking and moving things around, and that is pretty quick already; I don't necessarily need it to be quicker. I might be thinking of a couple of different ways, and writing out a prompt for that is just not as fast as me trying something, trying something else, and converging on what I want.
A
So what I was going towards with this is that we can actually tell it: look, I want you to do a refactor every time the tests pass for the first time. Right? I just did an experiment with that, and I was pretty happy with the code that it wrote. I mean, it's definitely more complicated than I would write. But then again, I don't know as much about code as the LLM does, because it has read all the code available on the Internet, and I haven't, exactly. But I realized that that refactor step may be a kind of line in the sand, a soft boundary that keeps it from going into a mode that generates technical debt on top of technical debt on top of technical debt.
B
Oh, definitely. Again, because of the way I'm doing it, I'm going to notice it immediately, because I'm doing small changes at a time. So whether I have it fix it or I fix it myself, either way, the point is to do it. And you are getting so much productivity out of it that investing a little bit of that productivity back is extremely good for another kind of productivity. I will get it done quicker, and I'll use some of that gain for myself, but my whole point is to get it through code review faster. The extra half hour or hour I might take to do this refactor step and get the code really nice is going to pay off a couple of times, really quickly. If I'm on a team and someone's going to code review this, that review is going to go much easier, because it's going to be a much better PR to review. Then, hopefully, because I built tests all around it, I'm going to get through QA faster, because I hopefully found all the problems already.
A
I was going to say that writing PRs is actually where this thing shines. You change the code, or it changes the code, and it writes PR descriptions like I've never seen in my life. I've never seen any developer write PRs as clear and as well written as this thing does. It's just perfect. So if you're working with other people, you'll get some credit and some kudos if you have it write the text for the PR.
B
Yeah.
A
Lou, this is of course an intro; we're just at the start. Let's caveat this episode by saying it's September 29, 2025. Things might have changed by December; we don't know, because it's evolving so fast. But at this point in time, if people want to dive into this a bit deeper and understand how to use AI-assisted coding tools better for their purposes, where would you tell them to go to learn?
B
There is a book coming out that I'm looking forward to, because the authors have been on podcasts talking about their way of doing things. It's by Steve Yegge, Y-E-G-G-E. People might know him; he had a very popular blog ten years ago, and he worked for Amazon and for Google. He has co-authored a book, I think it's just called Vibe Coding. So Steve Yegge; you can find it. If you want to hear what he has to say now, search for him on podcasts and you're going to see that he's talking in depth about his way of doing things. The one that comes to mind is the Pragmatic Engineer podcast. He was on there, and they did a deep dive into the way he does, let's say, vibe coding. But it's not vibe coding the way I've defined it, because he absolutely does read all the code; he's a coder. When I heard it, I was like: okay, good, other people who've thought about this are doing it and thinking about it in a similar way to me. So I'm looking forward to it. I already preordered the book, and I'm assuming it's going to go into more detail about what he's been talking about.
A
Yeah, and I guess that's the point, really, because we, as a community, are all discovering this thing. Right? It's not yet defined; we are discovering it together, and of course that's why we had this series as well. Lou, it's been a pleasure. We'll ask everybody to check out Swimming in Tech Debt, Lou's book; the link will be in the show notes, as well as a link to an episode with Lou about the same book, if you want to know more. So make sure you check that out. And where can people go to find out more about you, Lou?
B
Yeah, so I've been blogging for over 20 years at loufranco.com, and there you can see various things I've said on tech debt, or on AI as I was adopting it in real time; I talked about some of the things I was seeing and learning, if anyone has interest in that. And if you see me on LinkedIn, I'm just Lou Franco. Tell me you've seen this show and I'd be happy to connect with you. If you have any questions about what I said, please ask me there.
A
Absolutely. And why not share your experiences and ask Lou for his own. Lou, it's been a pleasure. Thank you very much for your generosity with your time and your knowledge.
B
Thank you so much. I really appreciate you having me on.
A
All right, I hope you liked this episode. But before you hit next episode, here's the deal. This podcast is powered by people like you, the members who wanted more than just inspiration. They wanted real tools and real connection to people who are practicing Agile every day. We're talking access to over 700 hours of agile gold: CTO-level strategy talks, summit keynotes, live workshops, e-courses, deep-dive interviews, books. And if you're into No Estimates, we've got the pioneers of No Estimates in those deep-dive interviews as well. Agile business intelligence, creating product visions, coaching-your-Product-Owner courses, you name it. You'll get invites to monthly live Q&As with agile pioneers and practitioners, plus a private Slack community which is free of all of that AI slop you see everywhere. And of course, without the flame wars, it's a community of practitioners that want to learn and thrive together. It's the best place to connect with the community and learn together. So if this podcast has helped you before, imagine what you will get from the podcast membership. Head on over to scrummastertoolbox.org membership and join the community that's shaping the future of Agile. We have so much for you, so check out all the details at scrummastertoolbox.org membership. Because listening is great, it's important, but doing it together, that's next level. I'll see you in the community Slack. We really hope you liked our show. And if you did, why not rate this podcast on Stitcher or iTunes? Share this podcast and let other Scrum Masters know about this valuable resource for their work. Remember that sharing is caring.
Podcast: Scrum Master Toolbox Podcast
Host: Vasco Duarte
Guest: Lou Franco
Date: November 25, 2025
In this engaging bonus episode, host Vasco Duarte sits down with veteran software engineer and author Lou Franco (Swimming in Tech Debt) to discuss the practical realities, potential, and pitfalls of AI-assisted coding—particularly how it’s changing the game for developer productivity, coding workflows, and technical debt management. Lou and Vasco dive deep into how tools like GitHub Copilot and Cursor are actually used in day-to-day development, how different approaches to "vibe coding" (prompt-only, often code-blind) compare to more traditional, reviewer-driven practices, and what all this means for the future of code quality, maintainability, and collaboration in fast-evolving codebases.
[02:28] Lou Franco:
Memorable Quote:
"The more you're not reading it, the more I'm going to think it's vibe coding … I don't tend to do Vibe coding, except for very simple things where I don't care about the thing beyond how I'm going to use it immediately." — Lou Franco [02:28]
[04:28] Lou Franco:
Memorable Quote:
"It was just saving me so much typing … I knew what I was going to code … and it was just, like, getting me through it faster and keeping me in the zone …" — Lou Franco [04:28]
[06:30] Lou Franco:
Memorable Quote:
"What I like about it is that it's fast. So right here, don't go look at the rest of my files. ... The context is right here. This is what I want you to do." — Lou Franco [06:30]
[09:38] Lou Franco:
[15:10] Lou Franco:
Memorable Quote:
"If you're vibe coding and you don't read the code, what is tech debt then? ... You're kind of experiencing the tech debt a different way. ... It seems to happen when people are not able to impose an architecture onto the code that the vibe coding is generating." — Lou Franco [15:10]
[18:46] Vasco Duarte & Lou Franco:
Memorable Quote:
"A computer can never make decisions. A human always has to make decisions. I think that is timeless to me. ... There's going to be some bottleneck of me wanting to approve it..." — Lou Franco [20:16]
[24:11] Lou Franco:
[26:23] Lou Franco:
[28:01] Lou Franco & Vasco Duarte:
Memorable Quote:
"You are getting so much... productivity out of it that investing a little bit of that productivity... for that refactor step is extremely good for another kind of productivity." — Lou Franco [30:05]
[32:23] Lou Franco:
AI-assisted coding is accelerating developer productivity and changing workflows, but it’s also introducing new risks and responsibilities—especially around technical debt and code maintainability. Lou Franco’s approach of tightly reviewing, structuring, and consciously steering AI output stands in contrast to vibe-coding’s “prompt and pray” model, but both are evolving rapidly. Investing time in small-step prompts, review, and deliberate refactoring is crucial for maintaining code quality in an age of code abundance. Listeners are encouraged to experiment, share practices, and keep learning as this domain continues to develop at breakneck speed.