Transcript
Adam Gordon Bell (0:00)
It's 6:07am. My only company is a warm mug of strong and sweet coffee, Starbucks dark roast with hazelnut creamer poured in. And I'm looking at a messy code base, several thousand lines of Python scattered across, you know, a dozen files. Today's goal is simple. I want to take this brittle, hand-wired collection of data transformation pipelines and turn it into a clean, modular pipeline system. I can actually describe the changes fairly specifically, but it's brutal to pull everything off because every file needs to change; every transition, every step needs its own file. The logic of the stages, you know, from transcribing to video formatting, will stay the same, but it needs to be extracted cleanly into its own files, into its own functions. And the existing tests, as far as I can tell, except for a couple of end-to-end ones, are probably a write-off because I'm totally changing the structure of how everything calls everything else. So I type claude --dangerously-skip-permissions into my terminal and I start dictating, describing the refactor: what the end state is going to look like, how everything should be organized. Then I copy over a type signature and some key files I want it to look at, and I hit enter, and I'm off scrolling Twitter, learning about some controversy about some jeans ad that I don't really understand. But 19 minutes later, every test is green. I mean, there are very few of them left, but the end-to-end ones are working. The folder structure is completely transformed. Claude ran the linter, it cleaned up the code, everything is working. I mean, it didn't update the readme, it left behind outdated instructions, and it left a bit of dead code because I had this section that called Whisper locally that I was no longer using, so it missed it. But otherwise the change was flawless.
I had been worried about this refactoring. I knew I had to do it, but it felt so big, with so many changes, that I just didn't want to get into it. But here it was, done. I was in the new state, and I thought to myself, did I just automate myself out of a job? Are we all automating ourselves out of jobs? That's what I was thinking. And so right then I was excited but scared. And I made a snap decision about how I work. And I'll get to that in a bit. But what I did is I opened Roam, my note-taking app, and I jotted down a single line, a very important thought that has been guiding a lot of my work ever since. Hello, this is CoRecursive, and I'm Adam Gordon Bell. Last episode, I shared how excited I was about coding agents, and I'm still excited. I really am. But I'd be lying if I didn't mention how, you know, I do have some fear as well. And there's this Scott Alexander story that I read years ago, and it kind of nails exactly what I'm worried about. Here's how I remember it. There's this magical topaz earring, and it's locked away in a museum vault to keep everyone safe from it. If you put this earring in your ear, it whispers to you: you'd be better off not wearing me. But if you ignore that advice, that's when things get interesting. Because whenever you wear the earring and you have a choice to make, it whispers in your ear, telling you exactly what to do. And it's always the best decision, every time. If you're stuck choosing between two jobs, the earring just tells you which one to take. And it's never wrong. Over time, people found that the earring was always right, and so it makes the wearer's life better in every possible way. And the more you wear it, the more you get used to it. And the earring doesn't just stop at big choices. It chimes in about breakfast, and what to say to your friends, and how to give a speech, and even, you know, when and how to move your arm. It's always there, working to give you exactly the advice that you need.
Except then, when you die after an abnormally happy and successful life, you, the earring wearer, are found to have no prefrontal cortex left. You have long since let your brain atrophy away as the earring makes every decision for you. You have long since become its puppet. You got everything you wanted, and you made the decisions that gave you the most happiness. But were you still you? That's why the earring stays locked up and out of reach, to keep us all safe. Sound familiar? The earring is what life looks like if you relinquish everything to AI. It's a life without integrity standards. Integrity standards are those lines I jotted down in Roam that I'll get to later. But, I mean, it's crazy that this story was written in 2012, because it could not be more relevant today. That's what we're digging into today, because after the last episode about my excitement about coding agents, I got a lot of questions and comments from people, like: how do I keep learning if an agent does all the work? What skills are actually worth building now in this new world, right? Has the value of skills changed? And honestly, like, will I even have a job in the future if all my job is now is hitting accept-all on some generated code, like I was doing with Claude that morning? These are great and honestly kind of scary questions, and we'll dig into all of them, and I'll share how my experience working with Claude Code pushed me to set some rules and to change the way that I work. But first, let's talk about why just moving faster, letting coding agents do the heavy lifting and going as fast as you can, can actually be a trap. Because it can be like running on a treadmill. Everyone's sprinting, but no one's actually getting anywhere. That treadmill has a name. It's called the Red Queen's race. And the clearest example I've heard of it starts with a two-income family who thought they found a way to improve their lives.
So back in 1999, the Parkers could support their whole family on just the husband Mark's income. His wife Joan stayed at home, so they had no daycare bills, and their mortgage was manageable. If Mark lost his job or they needed some spare cash, Joan could just pick up some shifts, because she had working experience. She just wasn't working right now. So they had this safety net. But even in '99, this setup was getting kind of rare in their neighborhood, but it still worked for them. Two-income families were becoming more common. And so in 2007, they followed the advice of everyone else, and Joan went back to work, and suddenly their income doubled. Back in 1999, their $1,000 mortgage felt fine. But in 2007, with more money, they got a $3,000-a-month mortgage that allowed them to stay in a good school zip code. And they still had a little bit more money, so they could go on a few more vacations, but they could also pay for daycare. But then a few years go by, and their finances start feeling more and more tight. They need both incomes just to keep up with what they used to afford. On paper, their paychecks had doubled, but their extra cash on hand was actually shrinking. Everything felt more expensive. And it wasn't just them, because the world was changing. And then one of the Parkers gets laid off, or has health issues, or things go bad. And the Visa balance hits $23,000 at 19.9% APR, and the bills pile up, and bankruptcy becomes a real possibility. This is an example from Elizabeth Warren's book The Two-Income Trap: Why Middle-Class Mothers and Fathers Are Going Broke. It's not a new book, and it's not a book about politics. It's a book about how chasing efficiency and status can make the whole system shakier for everyone. But it's a real thing that played out. It went like this. Families went from one income to two, and suddenly there was more money coming in. So you'd think life would get easier.
But when everyone's income rises, certain things, like housing, just get more expensive to soak up that extra cash. Back in the 70s, most households had a single income. Now most have two, especially if there are kids. But that extra money didn't mean bigger savings or more comfort. It mostly went to paying for a place to live if you wanted to stay in a good neighborhood. There were only so many houses, and so prices just rose to match what people could pay. Housing is what economists call a positional good. There's only so much of it, so the price goes up as people compete. Elizabeth Warren also points out that higher education is another positional good. Parents want their kids in good schools, but there are only so many spots. So when families have more income, schools raise tuition, and families are able to pay. So all that extra money from having two income earners, it eventually got eaten up by housing and by education costs. So after 30 years of this, you end up needing two incomes just to cover the basics that used to be covered by one. Housing and school for your kids ate it all up, and you're just right back where you started. Except now there's less slack. You both need to work just to get by. That's not to say that life hasn't improved. We have better tech, we have iPhones, we have air conditioning. But those upgrades are a different story. They're unrelated to the switch to two incomes and the subsequent ballooning of positional-good costs. I'm bringing this up, of course, because of coding agents. We're in this moment where writing simple code is easier than ever. Maybe not in every tech stack right now, but it's coming. And suddenly we're in a spot like the families who went from one income to two. We can produce more. Writing code isn't the bottleneck it used to be. Code is cheaper to produce, and that gives us extra capacity, just like the Parkers had when they both started working and they had extra cash, extra slack.
So what do you do with that extra capacity? If you just use it to crank out features as fast as you can, to pile up code, you end up in a Red Queen's race: everybody running faster just to stay where they are. People will love you for a while. But I think there's a trap here, and so it's worth thinking twice before you fall into it. If you've ever worked in enterprise software, you might know what I'm going to talk about. Sales often hinges on RFPs, and to win one, you need to check off a long list of features. Every new feature becomes ammo for the sales team to close another deal. So there's always this push for more features. And the people buying the software aren't necessarily the ones that have to use it every day. There's a couple layers of indirection there, so you end up with these weird incentives. There's always pressure to add new features, but making them fit together, or building something that people can actually use, things that people actually need, it doesn't always matter as much as getting the new features in. So you get a spreadsheet of checkboxes, and someone has to crank out a button to do an export to XML, and they need to get it done in 48 hours so that we can get the deal. And the button barely works and the customer doesn't even use XML exports. But the competitor has that feature, and they aren't going to choose you if you're not at feature parity, even though your product's better. So you need to add it. You need to win the RFPs, and your competitors need to win them as well. So you compete on who can add the most features, who can add the most checkboxes for the least amount of money. The derogatory term for this, right, is a feature factory. You're just cranking out features. Most engineers are just picking up tickets and moving quickly through them, one after another. That's the core job. Just keep the queue moving, just keep adding features. We need to keep up, we need to catch up. So, say you bring coding agents into the mix.
If they really do help you work faster, any extra breathing room they give will quickly disappear, because the pace will just pick up. You might be thinking for a while, wow, I'm getting so much done and barely breaking a sweat. But turning out features faster will just become the new baseline. The pace that everybody expects from you has just increased, because your company is now racing against every other feature factory out there, and they've got the coding agents too. So suddenly, just keeping up means shipping features way faster than you did before, right? That's the Red Queen's race. You can now run faster, but so can everybody else. So now you need to run faster just to stay in the same place and win those contracts. If you're a software developer and your job is mainly just cranking through tickets as fast as you can, that's a problem. If anything about the feature factory story sounds familiar, I think it's a dangerous time. You need to be vigilant. If you spend most of your time working on tasks in a big code base that don't really make a lot of sense to you, or you're not super clear on how the product works and how it's improving people's lives, it's a bad spot to be in to just focus on speed, asking how can I blast through these tickets as fast as possible with a dozen Claude Code terminals open. But yeah, it ties back to what I jotted down in my Roam notebook, which centers around this idea of careful thought. Brian Kernighan once said that the best debugging tool is still careful thought and a few well-placed print statements. This is a controversial quote, often used to criticize the use of debuggers, but I have found it to be true. Back when I was working in C, I used debuggers all the time. If something broke, my first move was to set a breakpoint and step through the code. You know, F10, F10, F10, until I got to the state where the bug happened, and it worked.
But it could get kind of mindless. Sometimes the debugger was the only way to solve a problem, right? But many times I could have solved it without the debugger if I had spent some time thinking. Debuggers are powerful, no question, right? But when a tool does all the heavy lifting, it's easy to stop thinking deeply about what's going on in your code. I would just jump to the breakpoint and look at the state. Who wants to do the hard work of thinking through the code execution? That's why Brian's point, I think, is less about the print statements and more about taking the time to really think through what's happening in your code. He's saying you should pause and consider what's going on. Exercise that mental muscle by working out the problems in your head. That's why you sometimes see really skilled programmers who completely avoid debuggers. Debuggers are super helpful at helping you solve problems faster, but if you force yourself to reason through issues without them, it can build a long-term skill. Similarly, I've found myself at the command line before, like, mindlessly hammering the up key trying to find an old command. Sometimes 20 presses up just to get to git commit --amend. I could have typed that out myself, but I wanted to be efficient, and I was being lazy. But that's when skills start to slip, and I see it happening to myself with coding agents too. I type in the description for a quick code change, change this to that, and let it edit, even though it would have been just as easy for me to do it myself. The easier it gets to automate things, to look things up, the less I have to think. And it's always tempting to not think. But this act of retrieval, this act of thinking, is where real learning happens. It's harder, but if we skip it too often, we risk going on autopilot. Whenever something you're doing is becoming mindless, I feel like there's danger there. I have a strong sense of this danger because of my current job working in developer relations.
Developer relations is not always easy. I work at Pulumi right now, and if you haven't heard of Pulumi, it's a tool that lets you spin up cloud servers using Python or whatever programming language. But I don't work on the Pulumi product itself. I show people how to use it. There's this criticism about developer relations, whether it's called, you know, DevRel or devangelist or Builder in Residence or Community Engineer, which is my current title. The criticism is that the people in this job are just shills. And honestly, that's not wrong. My job right now is to get people to use Pulumi, and I'm okay with that. I like the product. And yeah, I am a shill, I guess, right? I think it's a fair trade. I don't think that's the real issue. The real issue is this: being in developer relations feels like playing a developer on TV. I can make a video where I walk through building something with Pulumi, but my day-to-day isn't the same as someone who's actually coding up infrastructure all day. Making those videos takes a ton of time, but a lot of the time is, like, making a thumbnail and doing descriptions and picking the right feature to show off. And then I'm in some marketing meeting and they're going over partnership enablement with AWS, and I realize I don't even know what partnership enablement means. The event is $5,000, but are they paying us or are we paying them? And what is the event even for? I'm so confused. That's why I say it feels a little bit like acting, right? Like, George Clooney isn't a real doctor. And sometimes I'm really not the kind of engineer I'm presenting myself as. In my last job at Earthly, we were a tiny team. And, like, I knew I wasn't shipping big features. I wasn't really shipping any features at all, but I talked to the folks who were every day, and so I still felt connected to the work, right? But now I'm at a bigger company. The company's great, but I'm in marketing, and there are just two of us engineers in marketing.
And sometimes it feels like we're really on the edge of things, right? We're really in this no-man's-land between the people building the software and the people using the software. So just like mindlessly vibe coding, you can lose your way pretending to do something without actually doing it, and it can take you to a bad place. And so I had to establish some ground rules for myself. I had to find ways to stay grounded to the work. Last month I skipped a marketing and sales bowling event, and I went to this infrastructure-as-code team's food crawl at an off-site. Sure, part of it, maybe the main part, was because I wanted to try this Chicago beef sandwich from The Bear. But more than that, you know, I wanted to talk to the real engineers, the people building things, and I sometimes feel a little bit cut off from them, right? So I got to hear some of the real challenges of adding more languages to our policy-as-code feature. I got to hear it unvarnished from people working on it. Conversations like that give me a lot more depth. They make me feel a lot more confident that the content I'm putting out, the tutorials or examples or whatever I'm doing, is more grounded in real-world experience. I'm excited about making videos. Like, honestly, I'm pretty excited about figuring out how to effectively make videos. But I know that if I'm not actually building things, if I'm not coding, if I'm not using the products, I lose touch with what matters, right? So I'm trying to make myself go deeper. I'm trying to answer questions in the community, in Slack. I'm trying to use Pulumi for my own side projects, not just for demos. I haven't always been great at doing this, right? It's a work in progress. But I have to force myself to stay grounded. I have to force myself to not just mindlessly do the marketing things and let my distance from the actual engineering eat my brain. I mean, that sounds pretty derogatory. The people in marketing are great.
But the distance can be a challenge. And I feel the same way with the coding agents. That's why I'm trying to set up these rules for myself, my integrity standards. The first set of rules I set up like this were around writing. My podcast, whether you've noticed or not, is built on writing. Writing up my thoughts, shaping them, sharing them. Blogging has worked a lot for me, too, in the past. I don't do it as much now. But I write things down, I polish them, I try to communicate something real. When LLMs came out, right, they found a place in this process. They help me edit, they help me tighten up my drafts. But it's important to me that I always start with my own words, with my own first draft, before I get any AI help involved. When I use an LLM, I treat it as a tool to improve my writing, not to do the writing for me. I go through my drafts and I cut out clutter and I simplify my wording and I tighten up sentences. And LLMs help with that, giving me tips on cleaning up my writing. Things like: you don't need to say "really" or "very," these aren't adding to your writing, let's make it tighter. I often ask for their feedback on a whole piece of writing. It's like having an editor on call. I find it super helpful. But to me, the AI is helping me polish, and I'm doing the writing. Other people I've seen use LLMs to generate tutorials by, you know, feeding in a set of writing rules and a prompt. You know, something like: write a guide on how to use C and connect to Postgres. And they describe what they want, and the model spits out the content. And maybe they iterate on it a bit, and they publish it under their own name, you know, 20 minutes later. That's not something I'm comfortable with. That is outside of my integrity standard. I'm not saying it's not useful. Maybe some great, useful content is made that way. But to me, that's not something I'm going to put my name on. That was one of the first lines I drew in the sand.
And that's why it was natural for me to write down a standard for coding agents. The integrity standard I wrote down in my notebook that morning was: I design, the agent assists. It's just five words, but since then, they've guided every coding session. And I'll show you how those words affect how I do things, how I code, how I delegate, how I review. But this is an answer to one of the big questions I got after last episode: how do I keep learning and getting better when the agents are doing the coding? For me, it comes down to this integrity standard and to focusing on system design, because that's what I want to get better at. That's my approach. Yours will be different, right? If learning is your goal, you have to be explicit about what you want to learn and carve out a time and a structure for that. If you want to get better at the actual syntax and the muscle of typing out solutions, you won't get there by letting an agent do all the work. I mean, maybe that shouldn't be your goal anyway, but we'll get to that. But you need to be clear about the mode you're in. If you're in shipping mode, go ahead, fire up Cursor or whatever you're using and power through those backlog tickets. But if you're aiming to master hands-on coding, you need to protect these sessions where the agent stays off. If you want to get better at writing tests or at code comprehension, one way an LLM can help you get better is by acting as a sparring partner. Say you're grinding out LeetCode problems in Zig. You know, try to write out your solution and then ask ChatGPT for its take and compare who's better. These models can actually be really good at understanding a language's idioms, and I feel like that back-and-forth can help sharpen your own approach. But the key is to be intentional, right? You want to use the tool, but don't let it set the standard for you. Be explicit about your standards for accepting code.
Your boundaries are probably different than mine, but one I've also set is: I have to understand every change before I merge it. I have strong opinions about where things should go and the data structures that should be used, and I'm not handing that off. Take that big refactoring I mentioned earlier. I let Claude Code handle it, yes, and it nailed it, better than I could have done. But I'm fine with that trade-off. I'm going to lose a little bit of Python typing-at-the-keyboard muscle memory over time if I outsource that. But I'm not giving up the habits of reading and evaluating and deciding how the code should be structured, because I actually dictated all that structure. I'm still working on the best way to not get lazy with things like handing off little chores to the agents when it would be faster to just do it myself. It's that same problem as leaning on the debugger. But, yeah, I'm trying to keep my code reading skills sharp, because I feel like that's getting even more valuable. But, yeah, you'll probably have to set and reset these boundaries for yourself. Sometimes you'll slip, and then you'll remember why you drew that line where you did. But as long as you're thinking and challenging yourself, that's where real growth happens, and you're not just coasting. The standards will probably change. The lines I drew in the sand for writing changed over time. My coding ones probably will as well. That's the shift that I'm kind of working through now, trying to talk out right here with you. And I want to explain my approach to design skills. But mastery matters as well. But it's not just pure mastery. What matters for a successful career is mastering rare and valuable skills. That's the foundation, right? And we're living in a time where some skills that were rare are now priced in pennies via an API call. And I feel like that's why one of the hardest questions I've got to answer is: what skills are even valuable to learn in this new world?
Because I think it's changing. Some things are tough, but that doesn't make them valuable. When I was a kid, my dad loved chess. He liked to have his friends over and beat them at chess. And he had this specific chess set slash table that he had built for himself, with these little drawers with all the chess pieces in that were, like, marble. And he always said, like, chess would teach me how to think. And I kind of resisted playing it. But a year or two after I had left university, I did pick up chess for a while. I downloaded Chessmaster, and I ground through a bunch of matches and a bunch of lessons, and I eked my rating in Chessmaster up to 1400. And when I would go visit my parents at home, me and my dad would have a chess game. And occasionally he would let me win. But it was hard, right? The effort felt heroic just to get to 1400. And then I realized, like, 1400 in that game is something like probably 1100 in the real world. And that's barely mediocre at chess. But all that learning was hard. I mean, fun, but hard. And far less marketable than the couple of evenings I had spent after university learning C#, which very quickly landed me a dev job. So the real question isn't, like, what hard skill can I master, but what is rare and valuable enough that people will pay you money for? And it's an interesting question lately, because I'm doing videos, right? I've been working on my video skills. I've been editing and making thumbnails, the whole package. And thumbnails especially, they feel like a pretty commoditized thing to do. For my job, I need to get better at them. But I can't help but notice that it would be very tough for me to make a good living as a video editor or generic content creator compared to the community engineer job that I do right now. Right? So I need to stay grounded in my engineering skills.
My point is, if we're talking about a successful career, you have to be careful not to pour your energy into skills that are hard to master but are quickly losing value. Some things that used to be core parts of a developer's learning plan just aren't as valuable anymore, I think. I don't know for sure, but, like, take CSS. I spent years thinking, like, I should really learn how CSS works inside and out, because every time I touch it, it feels confusing and messy and I hate it, I guess. But for a long time I thought, I should just master this. It's not going anywhere. But it feels like these days I can just let the AI handle it for me, and it's not something I'm excited to learn, and maybe I don't have to. And that's great. But, you know, I've written deep-dive tutorials on how to learn AWK and how to learn jq, and I thought they were really good. I worked hard on them, and I thought these skills were must-haves, tools that are not going anywhere, that are worth learning, and you kind of amortize that learning over the course of your career. That makes you more powerful, more successful. Regex and various CLI tools, I feel like they all fit in that groove. But maybe they aren't as valuable and essential now. It's easy to outsource command-line fu to an LLM. So it's up to you to decide what's worth learning and what isn't. But I can tell you that, well, while I'm sad that, you know, my jq skills might be lost, I'm not sad to relinquish the Bash scripting skills that I had built over time. Some things that I spent a hard time learning have absolutely been devalued. And you can already see the shift happening in how companies are hiring developers. Meta lets candidates use AI tools in interviews. Other companies want to actually see how you use a coding agent. That's part of the interview process. So if you've been grinding LeetCode problems to land your next job, I think that path is starting to close. The focus is changing.
Soon you might be asked to debug code written by an AI, or to show how you'd manage several AI agents in parallel to ship features fast. I'm not really sure, but I do think system design questions are going to become more important. These are skills that are going to matter more if you're aiming for a traditional software engineering job at a big tech company. I mean, if that's your goal, though. That's not my goal, but I do think we all need to think a little bit differently about learning goals. I don't actually have an explicit answer to the question of what skills will be valuable in this new world, but I have a couple hunches, and one for sure relates to design. That's why what I wrote down was: I design, the agent assists. Because the night before my 6:07am refactoring, I was doing a lot of design work. Not visual design, conceptual design. You see, I've been learning to make videos, and especially edit and script them. And my process is a bit unconventional and involves this messy Python program. One thing this program does is pull apart existing videos that I like, that I want to imitate, and break them down scene by scene. And then I use that scene-by-scene breakdown just for learning. It's been an interesting little side project for me, but there's a bunch of steps. Download the video, extract the audio, run Whisper and get a transcript. Use Gemini to scan the video for on-screen details and key visualizations and, you know, changes. Sometimes I just want a transcript. Other times I want a shot-by-shot breakdown with, like, animated GIFs of each key visual, so that I can pass that information on to my editor. I've been learning there's different types of visuals you can use: supplemental, reinforcing, entertaining. And then this pipeline has a whole bunch of other steps that I won't get into here. But this is where the main idea clicked. If I could find a cleaner design for this messy, imperative, vibe-coded script, then the agent could do the restructuring.
Because really, all that happens is you start with an input type, whether that's a video URL or an MP3 or a text document, and then, depending upon the task, you do some sort of transforms. And as I added new transforms, right, the LLM was happy to build each step as a one-off, but it was slowly becoming a mess, because it had no problem just churning out a new step exactly like the last one and not seeing any overall structure, right? But I kept coming back to this idea: there is a real structure underneath it. I have a handful of document types, you know: a transcript, a scene-by-scene breakdown, an MP4, animated GIFs, a YouTube URL, visualization types, a Twitter thread, et cetera. And there are clear transitions between them, sometimes with forks and multiple paths. You know, if you're taking an MP3 and you're turning it into an audio transcript, that is just a transition, right? In a graph, from MP3 to transcript. One that takes a transcript and a video file and runs them through Gemini and spits out a scene-by-scene breakdown, you know, that's a transition that has two inputs, the transcript and the MP4, and goes to a specific markdown file. The coding agent couldn't see the structure, but I could. So I had this design idea, and I ended up talking it through with ChatGPT, with the o3 model: how might this pipeline work, right? I went back and forth. There's the straight imperative approach. But, you know, what if we do it this way? What if we have some sort of pathfinding algorithm? You know, a classic LeetCode-type solution, where you have a queue and you use breadth-first search to find a way to navigate through the various transitions to your end state. So the idea that I came up with was to have a method for each transition, and to annotate each with input and output types, and then register that as a transition. And then the search algorithm could just pathfind, and it gives you a totally new way to structure the program.
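To make that design concrete, here's a minimal sketch of the transition-registry-plus-breadth-first-search idea, not the actual project code. Every name here (the string type tags, the transition decorator, the stand-in transform functions) is hypothetical, and for simplicity it only handles single-input transitions, not the two-input transcript-plus-MP4 step.

```python
# Sketch: register transforms as typed transitions, then BFS from the
# type you have to the type you want. Hypothetical names throughout.
from collections import deque

TRANSITIONS = []  # registry of (input_type, output_type, function)

def transition(src, dst):
    """Decorator: register fn as a transition from src type to dst type."""
    def register(fn):
        TRANSITIONS.append((src, dst, fn))
        return fn
    return register

@transition("mp3", "transcript")
def transcribe(audio):
    return f"transcript({audio})"  # stand-in for a Whisper call

@transition("transcript", "scene_breakdown")
def break_down(transcript):
    return f"scenes({transcript})"  # stand-in for a Gemini call

def find_path(src, dst):
    """Breadth-first search over types: shortest chain of transitions."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        current, path = queue.popleft()
        if current == dst:
            return path
        for s, d, fn in TRANSITIONS:
            if s == current and d not in seen:
                seen.add(d)
                queue.append((d, path + [fn]))
    return None  # no route between these types

def run(value, src, dst):
    """Find a path of transitions and apply them in order."""
    path = find_path(src, dst)
    if path is None:
        raise ValueError(f"no path from {src} to {dst}")
    for fn in path:
        value = fn(value)
    return value
```

With something like this, adding a new document type is just one more decorated function, and the pipeline never has to be spelled out by hand: run("episode.mp3", "mp3", "scene_breakdown") discovers the transcribe-then-break-down chain on its own.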
Instead of explicitly outlining pipelines, you pass in an input type and ask for an output type, and it finds the path through. It just seemed like a very big change, but conceptually it made sense. So the next morning, right, coffee in hand, hazelnut creamer, which is delicious, I fire up Claude Code and I start describing the change that I'd like. I give it the type signature for my annotation, for registering a transition, and how I think this might work, and I send it off, and it rewrote the whole project to fit this new model. And I actually didn't expect it to be this easy. I thought it was going to be a mess, and that's why I put it off to the morning. But really, it was easy to describe this end state goal, and then there was just a ton of details to get right. And it could run the program and figure out and use the types, and it just worked. And that's where things got interesting. Because the newly structured code was so much easier to work with, suddenly I could imagine easier ways to add more things: to add caching, to get logging in a centralized place, et cetera. Because it was all now more organized, because I had offloaded this grunt work, but I'd come up with a good design. So it wasn't really just a click-accept-all moment, right? Claude had done all these repetitive parts, but the real value was in this planning and designing process. If I'd stayed in this pure vibe coding place, I could have continued adding features, but it would have slowly become more and more of a mess and harder to add things to. The LLM would have done it, but everything would have been rote and repetitive, and I would have had my brain turned off. So I think this kind of thoughtful problem solving, planning, system design, refactoring, ongoing maintenance, I think these are valuable skills. So this is one of my working answers to what skills matter most in the future. Because in the morning, when the refactor ran without errors, the real win wasn't about the speed, right?
It was about the choice to rethink the whole structure and to hand off the grunt work to get that structure in place. The people who will struggle the most with the changes that are happening now are, yeah, those who are deeply suspicious of AI and won't bring it into their workflows, but also the people who will just turn off their brains and try to crank things out as quickly as they can. I feel like both of those people risk being left behind as the work evolves. So, yeah, let's not settle for just moving faster. Let's take on problems that actually make us think. Let's tackle the tricky stuff. Let's use these new tools to level up and not just automate. That's how we keep our skills sharp. That's how we make sure what we do feels valuable even as the landscape keeps shifting. I got into computers because I loved playing video games as a kid, and then that led me to programming. At first it was to make my own games, but pretty soon I realized I liked programming for its own sake, and kind of the instant feedback when something works. And yeah, and then I moved into enterprise software, with the difficulties I mentioned, and then security. And then somewhere along the way, talking to people became part of my job. And Shane Hooverson sent me this article about this idea of unfolding, where instead of planning, one thing leads to the next in ways that you can never predict, but if you guide them the right way, you end up in a great place and everything makes sense looking back. We don't really know how software development or the world is changing right now. Maybe things will stay mostly the same. Maybe everything will get turned upside down. You can dig in your heels and refuse to adapt, but I don't think that's a good plan. Or, you know, you could panic and assume the worst: everything's changing, we're doomed, I need a new career. Or then there's also the risk of just going through the motions.
You know, doing the Red Queen's race thing: using the new tools to get an edge and not really noticing that you're handing over all your thinking to the machine. But yeah, these aren't the only options. The bigger and simpler option is to just notice what's happening and adapt. I can't tell you exactly where we'll end up, but I can say this. If you pay attention to what you're doing, to what you're enjoying, to where the challenges are, you don't need to map everything out. I never did. You just have some preferences and let that guide you. Sometimes the world is very foggy and all you can see is a step or two ahead, but that's enough, right? You just use what little vision you have to guide you, and you keep adapting, and you let yourself be surprised by what you end up loving, and lean into that. It's an interesting time, right? In 2007, Steve Jobs announced the first iPhone on stage, and about a year and a half later, he opened the App Store. And overnight there was this fertile ground for tens of thousands of tiny businesses. And eventually Uber and Lyft and so much more arrived. It's the concept of the adjacent possible, right? It's this idea that when something new appears, like that iPhone, it unlocks a new set of opportunities, things that weren't possible before but now suddenly are. It's just that not everybody has found them yet, but they're now adjacently possible to what we have. That's why you see things popping up in parallel, like both Uber and Lyft coming out around the same time. Large language models are like that 2008 App Store moment. Suddenly all sorts of new projects and ideas are within reach. I don't know where things are going, right? If you're curious and willing to explore, this is the perfect moment, because everybody is like me, just figuring things out. I've noticed some engineers moving more towards product design. They're moving away from the feature-factory mindset and becoming more interested in building products and how the things they build actually fit people's needs.
So that's one path, right? People letting go of a little bit of technical depth to focus on product, to focus on how things fit with the users. Syntax and language quirks maybe start to matter less to these people. What matters is understanding what they're building and why it matters. This is kind of the product engineer path, where you blend your engineering knowledge with product thinking. That feels like an exciting direction that some people are going in, if you want to have a bigger impact, if you want to embrace the changes that are happening. But that's just one direction, right? You can follow the threads that interest you. So see what grabs your attention and keep moving forward. Don't hand over your curiosity to the machine. Use it to push your ambitions further. Maybe you can dive into something totally new, like contributing to projects that felt out of reach. I have no idea how you add a new data type to Postgres, but I could use an LLM as a teaching tool, maybe to help me understand the code base, to get up to speed faster. If there was something I really wanted to get in there, it might help me get through it where previously I would get stuck. Lots of options, right? These tools are here to stay, but you don't have to become obsessed with coding agents or chase every newest trend. I think you should just use the moment to be more ambitious in what you can build. Treat these tools as just tools, but tools that are now letting people push the boundaries. And from there, you just follow what interests you. Maybe it's hand coding a ray tracer, maybe it's diving into Lean and advanced mathematics. But where careers are concerned, and getting well paid, I would focus on learning skills that are rare and valuable. But also, interest matters the most, right? So pay attention to what actually excites you, what feels like real growth, and lean into that. Can you master a large code base that you actually care about?
Can you get better at making real structural changes to that code? Can you push yourself to design systems that are cleaner and more reliable and easier to understand than you ever could before? When you start to burn out or lose energy in one area, I always feel like you've got to pay attention to what grabs you, because sometimes there's a new area that will spark something interesting and take you somewhere completely new. And for me, those instincts have always been worth following. So where does all this leave us, right? When I started this episode, it was 6:07am and I had that coffee in hand, which I really would like right now. I was wondering if I'd automated myself out of a job. But now I see the real risk wasn't the agent. It was turning off my brain. It was losing that part of me that wants to build and to understand and to keep pushing. That's why the real move is to get ambitious, not scared. To chase what excites you and see how far you can go past your old limits. The tools will keep changing, right? The pace will keep picking up. But if you keep asking what's possible, what's worth building, what excites me, if I keep designing instead of just hitting accept all, then I'm not falling behind, right? I'm choosing my own path. So as you look at your messy code, or that empty file waiting for your next idea, just ask yourself: am I letting the tools call the shots, or am I using the tools to build the type of person that I actually want to be? That's the show. Thanks to Michael Lynch, thanks to Shane Hooverson, thanks to Chris Lewis for the questions, and many more. Shout-outs to Alex and Kevin and Jason and Chris and Cedric and everyone in the AI LLMs channel in the CoRecursive Slack for pushing this conversation further. And thank you to all the supporters. Special thanks also to Malcolm Clark, who told me, you know, he's too busy to worry about all this AI agent stuff.
He's got a lot of work to do, but he listened to me drone on and on about my excitement about them on our weekend runs, and kind of helped underline the fact that to me it feels like everything's changing, but for some devs, whatever, they've got work to do and they're not too concerned. But yeah, whether this episode hit home for you or left you unconvinced, I'd love to hear more. Send me an email, join the Slack, and until next time, thank you so much for listening.
