
A
Hey there, agile adventurer, just a quick question. What if, for the price of a fancy coffee or half a pizza, you could unlock over 700 hours of the best agile content on the planet? That's audio, video, e-courses, books, presentations, all that you can think of. But you can also join live calls with world-class practitioners and hang out in a flame-war-free, AI-slop-free Slack with the sharpest minds in the game. Oh, and yes, you get direct access to me, Vasco, your Scrum Master Toolbox podcast host. No, this is not a drill. It's the Scrum Master Toolbox membership, and it's your unfair advantage in the agile world. So if you want to know more, go check out scrummastertoolbox.org membership, that's scrummastertoolbox.org membership, and check out all the goodies we have for you. Do it now. But if you're not doing it now, let's listen to the podcast. Hello everybody. Welcome to a very special bonus episode. For this bonus episode, we're exploring the idea of thinking like an architect in the age of AI-assisted coding. And this week we're joined by Brian Childress. Hey, Brian, welcome to the show.
B
Hey, thanks for having me.
A
Absolutely. So Brian is a CTO and software architect with over 15 years of experience, and he shares hard-won lessons from teams using AI coding tools daily, and why the real challenge isn't writing the code, it's designing systems that scale with users, features and teams. And for all of us wanting to be future-proof in our careers in building software, we need to learn to use AI for that. This episode will, I'm sure, unveil some secrets and clarify some mysteries about that. And Brian, for the listeners who may not know you yet, could you share a bit about your background and what first got you excited about the intersection of software architecture and, of course, the role of AI in our industry?
B
Absolutely. So like you mentioned, I've been doing software engineering and architecture for well over 15 years now. In that time I've had the opportunity to work in a few different industries, mostly highly regulated industries, so in healthcare and in finance, and consumer SaaS products as well. And in that time I've noticed a trend, and that is that most engineering projects and software engineers themselves lean towards complexity. And I find that that complexity is really multiplied when we bring in the power of AI and its ability to write just tons and tons and tons of code; more complexity is created. And so right now, I think, is a real opportunity for us to lean in and better understand the problems that we're solving. Because we can solve them so quickly, we need to make sure that we're solving them in the right way. And that's where I think many of us really need to step much more into an architect role and less of a code authoring role.
A
Okay, so before we explore that proposition, let's break that down. When you talk about the complexity growing much faster with AI, what exactly do you mean? And do you have like a real life story to share around that?
B
Absolutely, I do. So when I ask an AI coding tool, any of them fit this bill, you know, hey, develop me a user interface that does this, has this functionality, the AI is taking that understanding of the problem, the thing that I asked it to do, and looking at a huge corpus of different ways that it could be implemented. And if we don't really guide the tool through that, we could end up with something that has many, many components. Say it's a deeply nested component; there's a lot of information that has to flow back and forth through the component. And while it works, it is not something that's easy to understand or to maintain. And I've run into this as we're using some of the very popular AI coding tools, developing a new production-ready platform, and engineers are running into that issue where it's starting to become a little too complex for us to understand: hey, what is actually going on here?
A
So I think that at this point, it's good to talk about these different approaches. We've done a series on the podcast on AI-assisted coding; I'll put the link in the show notes. One of the things that became clear from those interviews is how wide the spectrum is between what we call vibe coding and what many of us are discovering is necessary to effectively make use of these AI tools. In your own mind, Brian, how do you define those two, let's call it, extreme sides of the spectrum, where one would be something where the human is always in control, with all the practices we've learned through the decades of how we should be doing software development, and the other would be vibe coding? Where do you land, and what are the key signals that tell you if we're going more towards one side or the other?
B
I think that there's a place and an opportunity for all sides of that spectrum. You know, when we think about vibe coding as the industry has adopted the definition, there is a real use and purpose for that. I find that many designers and product owners really are gravitating to that more vibe-coding approach: I'm just going to prompt until I get something that works, that really demonstrates what I'm trying to do. That's an incredibly powerful tool set that we have now. If I compare it to what we've seen in the past, right, it would have been some sort of wireframe, and then it became a Figma document, and then we made it like a clickable Figma prototype. Now I can get to something that really does work, that does produce operating code, much more quickly, much more robust and high fidelity. Which is really amazing, because now I can start to do the thing that we wanted to do originally, which was: let's test. Does this actually have value if I get it in front of my customers? Is it something that they're going to use, that they want? Where before, to get to that place, it was a much, much longer timeline, it was much more expensive, many more people were involved. So I do think that there's a place for the vibe-coding aspect, and there's a lot in that that we can learn. There's a lot in that that can be created and reused or, you know, honed through interacting with the application that's created and the platform that's being built. And then on the other side, I do think that there's an opportunity for software engineers to really get in deep with not only the code that's being generated in this vibe-coding environment, but also really understanding the problems themselves. We're all learning these tools; it's not that you have more experience with these particular tools, we just might have more experience solving these problems.
And so how am I a better director and manager of things? How do I think more like an architect, with a higher-level understanding, where before that role looked a little bit different? And I think many of us now have to step into that role.
A
So let's break that down then. When you think about the role of an architect in the context of using these AI-assisted coding tools, what would you say is the definition of that? And also, what kind of mindset do you bring when you think like an architect using these tools?
B
So when I'm thinking more like an architect, I'm thinking more around, okay, how do bigger components, higher-level components, start to fit together? Whereas when we don't think like an architect, then we're just creating new components, and I just happen to be able to do that faster, or, you know, I have to actually author less code, I have to actually type less, but create more. And so it really requires us to shift our thinking to: what is the problem that I'm really solving? And not as much around: how do I work within a framework? How do I make sure it fits within the paradigms of the particular language or technology that I'm using? It's much more an opportunity to really look at the problem in a way that an architect does, where software engineers don't always think in that way.
A
Okay, so maybe you have a different experience thinking like an architect. So let's explore that a bit further. So when I think about an architecture role, I imagine a lot of meetings. In the worst case, a lot of PowerPoint slides. In the best case, perhaps some more appropriate tools for creating this, let's call it shared understanding of the overall architecture. Right, because it needs to be depicted in some way. Written text for us humans is not the best way to clarify architectural ideas. But then I kind of bump into this block, at least in my mind, which is, okay, wait, but that's what we do with humans. How do we do that with AI? Because I can quickly whip up a whiteboard architectural diagram to talk to a colleague and kind of try to convey some ideas about where I think the architecture should go. But how do you do that with AI?
B
So the way I personally use it, and have found it to be most beneficial, is as a thought partner. And so what it forces me to do is to explain the problem, the thing that I'm trying to solve for, and then also explain my thought process: how is it that I'm thinking about solving this? And then I can go back and forth and strategize with the given tool and try and come up with an understanding, really looking for questions that challenge my thinking. What in here does not follow our particular paradigms? What in here doesn't fit within our ecosystem? What can we do to further simplify what we're looking at? And so now I have the ability to collaborate just like I would with some other technology partner. I just have this other entity that has a huge corpus of knowledge and is able to see many more solutions than I or any of my teammates might have seen in our careers.
A
So you're using it like a thought partner, sparring partner, but also as a, let's call it much more contextual and faster Google search. Right. Like you're looking into what's out there that might help you think through the problem you are trying to solve right now.
B
Exactly. And you know, I think for many of us, because we've spent the last 10, 15, 20 years becoming just an expert Googler, right? That's kind of how we evaluate our skills.
A
Either that or stack overflow. Right?
B
Right. It's like, I am a ninja at knowing which of the first ten blue links I need to be clicking on. And that skill, how can I bring that into AI and into that conversation? And what it really forces us to do is to be able to explain ourselves better. I was having a conversation recently with some colleagues, and I was saying that I find most software engineers will hide behind complexity because they don't understand the problem. And so what most engineers will do, especially when talking with non-technical counterparts, is bury the conversation in technical jargon and acronyms and just really make it so confusing that the other person, the non-technical person, just says: I give up. And with this kind of AI back and forth, it really forces us to be able to explain what it is that we're working on in a way that gets to that good, solid end result.
A
So one of the things that you've used in this conversation repeatedly is this idea of understanding the problem we are trying to solve. Now, I wanted to clarify that, because when I think about software development, there are many layers of problems, right? Like, there's the business problem, there's the user problem, there's the user-and-the-business problem, like, how do we enable the user and still create a proper business for ourselves. And of course there are many problems that are technical in nature, right? When you think about the role of the architect, how do you define this concept of the problem that we should really be trying to define first, before we start generating code?
B
For me, it starts at the very top. It starts with the customer. What is the thing that we're trying to bring to the customer, or trying to remove from the customer, the things that they need to worry about? How can we simplify their engagement, the way they work, the way they operate within our platform? And when I really understand that, then that gives me kind of a barometer to know where I should be solving these particular problems. Right? A customer doesn't care about our microservice architecture. That might be the solution to the problem that a customer has: a certain portion of the platform needs to operate in a certain way, and that particular architecture solves that problem. But if I don't understand that problem, then I'm just using something that I think is either new and shiny, or really fancy, or that I want to have on my resume, to solve that particular problem, but I'm not actually solving it. So really, can I simplify it? Can I explain the thing that we're trying to solve back to someone else, either technical or non-technical? The better I can explain it, the better I understand it.
A
And when thinking about the architect role, one of the things that you talk about is that we need to step away from thinking that we're just writing code, and we need to think about overall systems. You even use this concept of designing ecosystems. What do you really mean by that, in the context specifically of understanding the problem and, of course, working with AI?
B
So it really comes down to this: I like to say that technology is the easiest part of what we do. I can Google my way to a solution; I can now generate my solution. But is that solution the thing that I needed? And really, again, it comes down to my understanding of what it is that we're trying to solve, and how I can do it based on our current context, our current understanding and our current set of resources. Right? How can I build something that fits what we need now? Not, you know, when we imagine that we're going to be at a Netflix or a Google level of scale. Right? We're not, we're just not. We don't have those challenges. And so let's solve for what we have right in front of us.
A
So when you think about designing ecosystems, how do you phrase that? Let's say you're helping a colleague step away from the writing-code mindset and start to think like an architect, as you suggest we should be doing. How do you phrase this "designing ecosystems" for them?
B
For me, it has to be just dead simple, and we have to continue to break something down until it's so dead simple that I can basically diagram it out if I'm communicating with a colleague, which I don't think will ever go away, right, in this AI ecosystem. I need to be able to define it, I need to be able to diagram it. And I basically use four shapes to be able to diagram anything: a square, a triangle, a circle and a line. If I can't do that, then we still have too much complexity. If I can diagram everything out, I can understand it. And we can do that everywhere, from the customer-facing problem down to the code component breakdown: how does data flow, what's the integration, all of that. And it really just continues to force us to think in simpler terms, because when we remove that complexity, then we have a much better understanding. And then you kind of feel that light-bulb moment happen for many people, and they're like, oh, okay, now I get it. Now I can easily translate it into code, because I have that understanding. And that's when we get there and you see the people in the flow state, where they just completely lose track of time because they understand it so well that they can now generate it.
A
So talking about generating code, I mean, eventually, no matter how much we simplify things, eventually somebody needs to write the code. And many of us are using AI to do that these days. So what have you learned personally, Brian, about making AI assisted coding work for you and for teams, both technically and culturally?
B
So I'd like to start with the cultural piece first. What I find is that many engineers that have some years of experience, you know, maybe call them mid-level, or they may even have a senior engineer title, can be more resistant to AI, right? That's when we hear about the AI slop. And what I see underlying that is a fear, a concern that this is going to take my job, that this is going to create something that's so terrible that now I have to either rewrite it or support it, whatever it may be. And so from a cultural standpoint, I like to frame it as: we're all going through this, we're all learning it, it's a new set of tools, the same way that Google was and Stack Overflow was for us many years ago. Now this is a new tool we have to operate in, and I try and challenge my team to learn it as a new technology versus something that is a threat to us. And for me, as a leader, as a CTO, I need to do that. I need to show my team: this is how I'm using it, these are the things I'm learning, this is where I'm messing up with it. Showing that, hey, it's okay, and we can laugh at the garbage code that may be generated, but we look at how we got there, because again, it's just another tool. It's just another technology that we get to learn. And then, from the other standpoint, what are the things that we can do as engineers to really guide and put the guardrails around the AI tools? It's the same as if I had any engineer, whether they were a junior or a senior: I'm going to make sure that we have the guardrails, that we're deploying, that we have the ability to test, and automation, and all of these things.
If we have all of that, then the AI just helps us to create the code in the right way, following our coding standards and all of those things that are important, really leaning into the things that software engineers feel are valuable but never actually get the chance to do, right? We don't ever get the chance to fully write a nice set of documentation or a nice set of automated tests. It just doesn't happen, right? We can all think that it does, but it doesn't. Now we have the opportunity to leverage these tools to get the thing that we always wanted.
A
Yeah, absolutely. One of the things that I find interesting is to explore these failure scenarios, or big whoops. So is there a particular story, in a project or a team or even your own personal experience, where you saw AI go wrong and it helped you learn something important about how to work with these new AI tools?
B
Yeah, I did. So when I think about architecting a new portion of a system, I create an implementation plan, right? It's basically a high-level overview, and then it starts to break down some of the infrastructure and the database and all these things that have to change as part of this. And then we review it as a team. And so I shared with the AI tool: here's some past implementation plans, here's what I'm trying to do. And then I just said, okay, go generate something. And the mistake I made was I took what it put out and I didn't really think through what it was suggesting as: this is how we're going to implement it. And it became a big point of tension within the team, because what it generated, it was like, this doesn't make any sense. My mistake was I only looked at it at a very high level; I didn't really read through it. I just said, okay, here's what you need, team, and checked that off.
A
Basically, you vibe-prompted the team.
B
I did, I did. And the team called me out on it and said, hey, we're going to throw this away, we're going to redo it. You know, we're going to take maybe a core concept, but we're going to rewrite this. And that was the first time that I trusted the AI to build and write and craft this thing, and it did not work. It was a big learning opportunity for me, and a learning opportunity for the team as well.
A
So, absolutely. As I say, mostly in software it pays off to have bad ideas quickly rather than to aim for the perfect idea in the long term, because we learn so fast. And one of the things that you said about setting up teams culturally for working with AI, I think that's a great example, right? Like, it's okay to fail, and it's also okay for the AI to fail. That's why we have people, so that we can review each other's work, including the AI's, and give feedback and learn from it and improve. When you look ahead, two to three years from now, how do you see AI changing what it means to be a software engineer, and also an architect?
B
What I see, if I look into my crystal ball, is more engineers acting like architects. More engineers are going to be thinking in terms of: how do I construct this system, how do I move data around, how do I scale, or think about scalability and maintainability? Because what the AI can empower us to do is actually remove the need to more deeply understand some of the frameworks and the nuances that exist, because we can generate something much more quickly in one of these frameworks. And so it does allow us to experiment a lot more. It does allow us to move and refactor a bit more quickly, again, as long as we have those guardrails in place. And so all of the things that we haven't had the time or opportunity to do as engineers, we now will have more opportunity to do. And we need to lean into that: putting in those safeguards, leveraging multiple agents to write our automated test suite and to document our API endpoints and so forth. I think that's where we're really going to lean into it. We're going to see more engineers lean into the AI tools, and I would also expect to see more resistance to some of the AI tools. I think we're going to hear more about the AI slop, and we're going to hear more about CEOs vibe coding production apps and then having a huge security incident. I think we're going to see it on both sides of the spectrum. It's not going away; I think we're just going to continue to see more and more of what we've already seen.
A
Yeah, I totally agree. It's not going away. And as we can see from the podcast series we did during 2025, there are a lot of us experimenting with it and learning what works and what doesn't work, and of course also sharing it, like through this podcast, which is how we learn as a community. Brian, we're getting close to the end, but is there a resource, could be a book, a video, a paper, or a tool, that you think every practitioner interested in AI-assisted coding should look into?
B
For me, I go personally to YouTube. There are some fantastic channels out there that are really following AI more generally, the models, how they're evolving, but also specific to coding as well. I'm seeing a lot of really good YouTube channels out there, so that tends to be where I'm trying to stay up to date with a lot of what's going on. Just like we did when we started in our careers, kind of just go and play with everything is really my suggestion. And for me, what I've always found helpful is to document what I'm learning as I'm learning it, not only as a resource for me, but so that I can potentially share it with others. That's really what I'm doing. But right now it's very much experimentation: being open to it, being willing to learn and to fail and to just keep pushing forward. The engineers that I want on my teams are the ones that are out there experimenting and learning.
A
Yeah, absolutely. Well, I assume you also share publicly, so if people want to hear more and read more about what you're experimenting with and learning, where should they go?
B
So I'm most active on LinkedIn; it's Brian-Childress on there.
A
Absolutely. We'll put the link to that in the show notes, and also to some of the YouTube channels, so that people can go and check them out. Brian, thank you very much for joining us, and thank you for being so generous with your time and your knowledge.
B
I appreciate the opportunity.
A
All right, I hope you liked this episode. But before you hit next episode, here's the deal. This podcast is powered by people like you, the members who wanted more than just inspiration. They wanted real tools and real connection to people who are practicing agile every day. We're talking access to over 700 hours of agile gold: CTO-level strategy talks, summit keynotes, live workshops, e-courses, deep-dive interviews, books, and if you're into NoEstimates, we've got the pioneers of NoEstimates in those deep-dive interviews as well. Agile business intelligence, creating product visions, coaching-your-product-owner courses, you name it. You'll get invites to monthly live Q&As with agile pioneers and practitioners, plus a private Slack community which is free of all of that AI slop you see everywhere, and of course without the flame wars. It's a community of practitioners that want to learn and thrive together. It's the best place to connect with the community and learn together. So if this podcast has helped you before, imagine what you will get from the podcast membership. So head on over to scrummastertoolbox.org membership and join the community that's shaping the future of agile. We have so much for you, so check out all the details at scrummastertoolbox.org membership, because listening is great, it's important, but doing it together, that's next level. I'll see you in the community Slack.
A
We really hope you liked our show. And if you did, why not rate this podcast on Stitcher or iTunes? Share this podcast and let other Scrum Masters know about this valuable resource for their work. Remember that sharing is caring.
Podcast: Scrum Master Toolbox Podcast – BONUS Episode
Host: Vasco Duarte
Guest: Brian Childress, CTO & Software Architect
Date: January 24, 2026
This bonus episode explores how the rise of AI-assisted coding tools is changing the software development landscape, especially the role of software architects. Vasco Duarte talks with Brian Childress—a CTO and architect with rich experience across regulated industries—about how engineers and leaders can adapt, avoid common pitfalls, foster a constructive team culture, and future-proof their roles by focusing on designing systems, not just generating code.
AI tools multiply complexity: Brian highlights that AI can produce copious amounts of code quickly, which, if unguided, can lead to overly complex, confusing systems.
Need for architectural thinking: The faster pace enabled by AI demands a higher-level approach to design and systems thinking, shifting engineers from pure "code authoring" to architectural roles.
Defining “vibe coding”: Prompting until something works. Brian sees designers and product owners gravitating to it because it produces working, high-fidelity prototypes far faster than wireframes or clickable Figma mockups, letting teams test value with customers much sooner.
Limits of vibe coding for production: Unguided generation tends toward deeply nested, hard-to-maintain components; engineers must go deeper, guiding the tools and truly understanding the problems being solved.
Role shift: Engineers become directors and managers of the work, thinking at a higher level like architects rather than authoring every line of code.
Collaboration with AI: Brian uses AI as a thought partner: he explains the problem and his thought process, then strategizes back and forth, looking for questions that challenge his thinking.
Explaining & simplifying problems: Engineers often hide behind complexity and jargon when they don't understand a problem; working with AI forces clearer explanations, and the better you can explain a problem, the better you understand it.
Diagrams over text: Brian diagrams everything with four shapes (square, triangle, circle, line), from customer-facing problems down to component breakdowns; if something can't be diagrammed, it is still too complex.
Adoption challenges: Mid-level and senior engineers can resist AI out of fear for their jobs; leaders should model open learning, sharing both their successes and their mistakes with the tools.
Guardrails and best practices: Deployment pipelines, automated tests, and coding standards act as guardrails so AI-generated code follows the team's way of working, the same guardrails you would put around any engineer.
Architectural thinking for all: Looking two to three years ahead, Brian expects more engineers to act like architects, focusing on system construction, data flow, scalability, and maintainability.
Experimentation & continuous learning: Brian shares a failure story: he handed the team an AI-generated implementation plan without reviewing it, the team rejected it, and both he and the team learned from failing fast.
Brian’s learning approach: Following YouTube channels on AI and coding, experimenting with everything, and documenting what he learns as he learns it.
Where to follow Brian: LinkedIn (Brian-Childress).
On the core AI challenge: “Because we can solve them so quickly, we need to make sure that we're solving them in the right way.”
On why simplification matters: “The better I can explain it, the better I understand it.”
On future-proofing your skills: “The engineers that I want on my teams are the ones that are out there experimenting and learning.”
The episode maintains a candid, practical tone—punctuated by humor, humility, and encouragement. Both participants speak as practitioners immersed in ongoing change, emphasizing learning, adaptability, and community-driven best practice sharing.