
A
One manufacturing company we've been working with, they have one person who has all of the knowledge in their head, and they're always this bottleneck. Whether it's clients or internal, everything has to go through them, because they have so much knowledge. And so this is a way to make knowledge more accessible within the team.
B
So how are you capturing that knowledge from that individual? Right, because this has always been a problem with large...

Welcome to Embracing Digital Transformation, where we explore how people, process, policy, and technology drive effective change. This is Dr. Darren, Chief Enterprise Architect, educator, author, and most importantly, your host. On this episode, "Breaking the Knowledge: Turning Tribal Expertise into AI-Driven ROI," with Sebastian Chandal, CEO of Fountain City. Sebastian, welcome to the show.
A
Thank you. Glad to be here.
B
Hey, before we dive into this subject, and this is a really important subject, because I see a lot of money being wasted in AI without really bringing ROI. But before we dive into all of that: everyone who listens to my show knows that I only have superheroes on, and every superhero has a background story. So, Sebastian, what's your background story?
A
You want to know my secret identity? Yeah.
B
Well, you can tell me. I won't tell anyone, I promise.
A
Okay, okay. We won't tell the listeners either. Well, let's see. I have a long, meandering story, but I've been in tech since... I mean, my first company, which is the current company still around, Fountain City, was in 1998. But Fountain City has changed a lot during that time. It was founded in Europe, in Amsterdam, and when I moved back to the States in 2008, the company came with me. 2016 is when it really started to grow, though, and became a proper agency with 12 people. We kind of peaked in 2022; that October was our largest. Over time, like I said, we've evolved, but we honed in on digital transformation as our main focus. At the end of 2024 we decided, based on a lot of internal interest, plus looking at the economy, the market, and what our clients are looking for, to move deeper specifically into AI transformation. And then last year we also narrowed in on manufacturing companies, the manufacturing industry. We do serve outside that vertical, but about 60% of our clients right now are in that space specifically.
B
That's a really good vertical to be in, because they tend to lag behind as far as technology adoption goes. But AI can have a huge impact on their return on investment and their productivity. So there's a lot of low-hanging fruit, I would guess, in that industry.
A
There is. It's interesting, because there's a lot of opportunity, but that opportunity is also created by a lot of companies that are behind, right? And then there's this culture of, you know, if it's not broke, don't fix it.
B
Yeah, don't fix it if it's not broke. Yeah, yeah, yeah.
A
But the interesting thing is, and I've seen this because I love studying things like history, looking at countries and how they evolve and change over time: if one country was really behind on building railroads, by the time they put railroads in, they're not going to start with the rinky-dink slow ones. They go straight to the fast ones, right?
B
Yeah, yeah. They get an advantage, a late...
A
A latecomer advantage, a little bit of a latecomer advantage. And so there is that opportunity here. One thing that we see a lot in manufacturing companies in particular, but it's true in other companies too, is that the data might not be very well organized, and processes are maybe just how people have always done it, or in people's heads. They're not necessarily documented in flowcharts and fancy diagrams. So in order to get the benefit of these AI or agentic systems, you have to go through that trajectory as well: getting better data, getting data organized or structured or converted to digital, plus documenting and defining both workflow processes and logical frameworks. And then you can put the AI on top of it. When I say AI, I mean automation or agentic systems; those could be a mix of traditional automation plus what you can now achieve with AI automation. So even though they're getting the advantage of a late start, because you get to take all the lessons learned and get to an endpoint that's at the cutting edge, you still have to go through these fundamental steps. And those steps are, I would say maybe even often, really important. Otherwise you end up with fluffware, where you put the AI system on top and you're either not getting really good output, or you get a high error rate or hallucination rate; name the problem. I could spend a while talking about all the different ways it could go wrong. So those fundamental things are important, and in that sense it is a bigger project than it would be for a company with really well-defined processes and all their data recorded and really well organized. Those companies are kind of unicorns. But if you had that situation, adding AI on top becomes a lot less effort and a lot less risky.
B
So really, this sounds like AI is exposing our lack of process maturity in these large corporations. I imagine most of these processes have grown ad hoc over time. They're probably not written down; they're just in the tacit knowledge base of the corporation, especially in manufacturing. So what's your first approach? Is it documenting what they have?
A
So the first thing that we do, or I do... it depends when we're getting engaged, but in the ideal scenario we're coming into companies when they're at the ideation and roadmap planning phase. They may have multiple ideas or initiatives they want to do, maybe things they've heard could be done or that they've heard we can do. We'll come in and first try to parse out things that might be just a mandate and turn them into visions or objectives, a North Star, so to speak. Once we've identified those, we look at how they can be broken down into tangible, modular components or pieces that build towards that larger outcome. And then we rank things using a matrix where you compare effort versus impact, so that we hit the things with the highest impact and lowest effort first. Ideally we try to get projects that show ROI benefit somewhere between three and nine months, or three- to six-month modules or projects within that. Of course things can be faster too, especially when we find some low-tech solutions we can do right away, before the AI solutions that come later. So that's the vision and roadmap planning. And then for the projects... I'm looking at my little post-it note here so I don't forget anything. The main things we look at: making sure there's a vision; making sure the project has a tight, narrow vertical focus, so the solution isn't trying to be broad and the AI can be focused on that area; making sure data and process are clearly defined for that implementation area; and then there needs to be testing incorporated into the AI system.
A lot of people are getting better about this, but especially last year we were seeing so many projects where maybe there's some general testing, but the AI itself, the knowledge, isn't being tested adequately. And then you can't forget change management and AI governance policies, things like that. Are people enthusiastic about this? Are they resistant? Is there enough AI education or knowledge?
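The effort-versus-impact ranking described above can be sketched in a few lines. This is a minimal, hypothetical illustration; the initiative names and 1-5 scores are invented, not from Fountain City's actual process.

```python
# Hypothetical sketch of an effort-vs-impact prioritization matrix.
# Initiative names and scores are illustrative only.

def rank_initiatives(initiatives):
    """Order initiatives so highest-impact, lowest-effort items come first."""
    # Negating impact sorts it descending; effort breaks ties ascending.
    return sorted(initiatives, key=lambda i: (-i["impact"], i["effort"]))

backlog = [
    {"name": "AI quoting assistant",      "impact": 5, "effort": 4},
    {"name": "FAQ knowledge bot",         "impact": 5, "effort": 2},
    {"name": "Digitize paper job sheets", "impact": 3, "effort": 2},
    {"name": "Full ERP replacement",      "impact": 4, "effort": 5},
]

for item in rank_initiatives(backlog):
    print(f'{item["name"]}: impact={item["impact"]}, effort={item["effort"]}')
```

With these sample scores, the FAQ knowledge bot (high impact, low effort) would be tackled first, matching the "highest impact, lowest effort" ordering described in the conversation.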
B
Yeah. So the things you're talking about seem like pretty straight-up process re-engineering. If you took AI out of it, the process would be the same, right? The process of re-engineering processes and capturing them. So have you seen anything unique that AI has brought to the table? Because you've been doing this for a long time, right? And you didn't have the AI tools we have today four or five years ago.
A
So I would say one of the big factors that's different is that it's new. So there are a lot of expectations that can be off, let's say from management or upper management: either they think AI can do too much or too little, and it can go either direction. And then in terms of implementation, for change management, one of the more unique factors is that there's a different kind of resistance to change with AI, since it's slightly novel, than with other change. That has more to do with the AI itself, which can feel, insert the blank, threatening, or "it's taking all the jobs away," or "it's coming to take my job." Those concerns are out there; I don't think I'm the first to say it. They exist, and each of them has solutions. But it's kind of like back when the Internet was first launching, and people might have gotten to the point of thinking, okay, we need websites. But the person in charge at that company has never made a website, or doesn't know what's supposed to be in a website. Then they talk to two companies to get quotes on what the website should be, and the quotes are off by a factor of 10, so they don't even know which is the right size of budget. We're in that space.
B
Oh, that's interesting.
A
We're in that space of education and all the risks. And then you have incentive problems too. Maybe the board of directors is reporting to their shareholders, and they want to show that they have AI initiatives and projects in place. So then the incentive could really just be to satisfy a checkbox, "here's AI in the project," and you usually get way less successful, or just performative, AI solutions put into place, rather than a business that's really trying to effect significant change or growth in the company through these agentic systems.
B
This is really interesting, because AI is almost like a two-edged sword. I'm sure AI increased the number of customers you have, because now everyone's like, well, I've got to get on this, so I've got to do digital transformation. So it's been a catalyst, but at the same time it's a catalyst with pricklies on it, right? It's a catalyst that's a cactus; I can't just go up and hug it, it's going to make me move. A "cactalyst," there you go. Because you had all these barriers to adopting AI, but AI is the thing that's pushing for process re-engineering. So it's a very strange time.
A
Yeah, yeah. And fundamentally, AI has several key areas of benefit. One of them is that you can now automate processes that you couldn't before: qualitative decision making, with all the pluses and minuses of that, because qualitative is not necessarily an exact science you're dealing with.
B
It's non-deterministic.
A
It's non-deterministic, which is amazing, because before it was really hard to automate those processes. You'd have to have someone in the middle of that process, maybe to look at a spreadsheet and make qualitative decisions on whether this should be approved or not. You can help augment that. The other main area we've seen is knowledge democratization; that's a big benefit of AI systems as well. That goes into different buckets. One could be, let's say, retirement planning: if you've got key staff members that are retiring, you have a better way to retain their knowledge. But also training, and decentralization of answers within the company. One manufacturing company we've been working with, they have one person who has all of the knowledge in their head, and they're always this bottleneck. Whether it's clients or internal, everything has to go through them, because they have so much knowledge. And so this is a way to make knowledge more accessible within the team.
B
So how are you capturing that knowledge from that individual? Right? Because this has always been a problem with large corporations, or even small ones: tacit knowledge is not written down; it's just known inside the organization. How is AI helping you capture that?
A
I would say data strategy has two sides to it, two directions. If we're doing it from a project perspective, we want to capture knowledge with a particular goal. Now, I spoke earlier in this podcast about creating modular projects. So let's say your first step was an internal tool that could help answer the most common questions that engineer gets constantly from their coworkers.
B
How?
A
So then we would look at how that engineer has been asked questions in the past. Maybe it was a lot over email, let's say, hypothetically. Or maybe it was over Teams or Slack or whatever. So data engineering comes in, and you look at: okay, where's the data? How can we get some data quickly? In that example, we can look at all the past emails, or past Slack or Teams conversations, and essentially use AI to categorize and organize those past Q&As into a mini database; data extraction, basically. And then you just need to meta-tag and organize the data so that it fits within certain categories of questions. Like, oh, this is a question coming from operations, or from programmers; different roles might get a different answer to the same question, for example. So those are some nuances, but that's one way to do it: data scraping and organizing. You can also synthesize data, so you create a bunch of data and then have someone who knows the subject go through and enhance it, critique it, push back on it, review it, basically.
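The mining-and-tagging flow described above can be sketched roughly as below. The categorization step would be an LLM call in a real pipeline; here it's stubbed with keyword rules so the flow is runnable, and the role tags, patterns, and sample threads are all hypothetical.

```python
# Minimal sketch of turning past Q&A threads into a tagged mini-database.
# `categorize` stands in for an LLM classification call.

import re
from collections import defaultdict

def categorize(question):
    """Assign a role/topic tag; would be an LLM call in a real pipeline."""
    rules = {
        "operations":  r"\b(schedule|shift|downtime)\b",
        "programming": r"\b(plc|code|api)\b",
    }
    for tag, pattern in rules.items():
        if re.search(pattern, question.lower()):
            return tag
    return "general"

def build_qa_database(threads):
    """threads: (question, answer) pairs scraped from email/Slack/Teams."""
    db = defaultdict(list)
    for question, answer in threads:
        db[categorize(question)].append({"q": question, "a": answer})
    return dict(db)

threads = [
    ("Why is the PLC code rejecting this input?", "Check the watchdog timer."),
    ("Can we move the downtime window?", "Only with plant manager sign-off."),
]
db = build_qa_database(threads)
```

The same structure supports the role-based nuance mentioned in the conversation: because answers are grouped by tag, an operations query and a programmer query can be routed to different answer sets.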
B
So you're capturing that interaction that they're having with the AI as a way of capturing that tacit knowledge.
A
You can do interviews, video or phone interviews, and record all of that and use it. You can take very large documents, specification documents; maybe this person, over a long period of time, has come to know the details of these 23 large documents in their head, essentially. That's a more direct way of getting the knowledge. And then there are logic decisions. One client I'm working with right now, we need to map out someone's thought process: how they arrive at certain solutions given certain criteria. So what we're doing is first building a template of different problem cases. Then we're including how they would normally solve those problem cases. Next, our goal is to generate a hundred different scenarios; we test it out, then we keep scaling it, using AI to generate those hundred scenarios. They're going to review that the scenarios are good and push back on some of them or not. Then they're going to answer some of them, and we're going to get the AI to answer them too, and make sure that out of a hundred, we're getting 99% accuracy in how the AI system answers. Behind the scenes we're also asking them to define what makes a good answer. It can be a very interesting process to get people to break down their thought process in a way they've never broken it down before. And in this case, that's the approach to capture it. This is about a three-week mini project, iterating through this until we get the data, the logical tree, mapped out from their process.
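The scenario-testing loop described above can be sketched as follows. Everything here is a stand-in: the scenarios, the expert's decision table, and `ai_answer`, which in a real project would call the agentic system being evaluated.

```python
# Hedged sketch of scenario-based accuracy testing for a captured
# decision process. All scenarios and answers are illustrative.

def accuracy(scenarios, expert_answers, ai_answer):
    """Fraction of scenarios where the AI matches the expert's decision."""
    hits = sum(1 for s in scenarios if ai_answer(s) == expert_answers[s])
    return hits / len(scenarios)

# Decision table reviewed and corrected by the expert (hypothetical).
expert_answers = {
    "late shipment, key account": "expedite",
    "late shipment, small order": "reschedule",
    "defect rate above 2%": "halt line",
}

def ai_answer(scenario):
    """Stand-in for the real AI system under test."""
    return expert_answers.get(scenario, "escalate")

score = accuracy(list(expert_answers), expert_answers, ai_answer)
# In the project described, the bar is 99% agreement across ~100 scenarios.
```

Iterating this loop (generate scenarios, have the expert correct them, re-measure) is what gradually surfaces the expert's logical tree.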
B
That's really cool. Because that's an important thing as you're re-engineering, and I've seen it happen time and time again: there's all this tribal knowledge inside the organization that you're trying to capture, and there's no easy way to do it. AI has made that much easier than it has been before. What would you say are some of the key success factors when you're going through a digital transformation with a company? If you see these sorts of things happen, you can say, guaranteed, I know this is going to be successful. What are those things? Do you see any common thread?
A
That's an interesting way to put it, yeah, because usually I get asked the opposite way around. But I like that. So I would say, first, that there's really a commitment for real change; it's not performative, like I was saying. I think that's important. And that means change management is taken really seriously, not just as something to put on the checklist, but that we really do want to understand how people are working. Let's say we're trying to increase performance of a certain department, because we know they're doing a lot of repetitive work that AI could help alleviate from their work week. Are we angling it so that we're really going into those teams and saying, hey, we're here to help you do less of the things that are so time-consuming, so you can do more of the things that are exciting or rewarding to you, or higher-level productivity goals? If we have that approach, so that people are really on board and we're not getting internal friction or incoherence between upper, middle, and lower levels of the organization, that's a really good sign. After that, I would say it's how initiatives are taken on. It's fine to have experimentation, and I think it's good for companies to have an experimentation ethos with AI, but you have to move beyond that to a methodical, structured approach, where you think through all the implications of the project you're doing; you have phases; and you structure it so you can measure the impact, take lessons learned both from the data and from people, put that back in, and iterate. Because if you have that methodical approach, even if you don't quite get it right the first or second time, you're going to hit the target as you iterate through.
And then one of the big benefits of all these AI or agentic systems is that once you do get that productivity boom or gain, or whatever it is you're trying to achieve, the returns you get on it are perpetual. They're not just one-time returns. A common baseline I've seen across the industry, for example in coding, is that 30% is kind of the minimum productivity gain you should get from AI-assisted coding. That's one example. With customer service, you should be getting like a 60 to 70% increase in productivity per person, using AI to assist you either internally or with the client. Those kinds of productivity gains are forever. So even if, worst case, you need a couple of iterations to get it right, each person on your team is now worth one and a half people. These are pretty major improvements in productivity.
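The "perpetual returns" point can be made concrete with back-of-envelope math: a one-time implementation cost against a recurring saving. The dollar figures below are invented for illustration; only the 30% gain comes from the conversation.

```python
# Back-of-envelope payback calculation for a recurring productivity gain.
# Dollar figures are hypothetical; the 30% gain is the coding baseline
# mentioned in the conversation.

def payback_months(one_time_cost, monthly_team_cost, productivity_gain):
    """Months until the recurring saving covers the implementation cost."""
    monthly_saving = monthly_team_cost * productivity_gain
    return one_time_cost / monthly_saving

# A $90k implementation against a $50k/month team at a 30% gain pays
# back in 6 months, and the saving keeps accruing after that.
months = payback_months(90_000, 50_000, 0.30)
```

After payback, every further month is pure return, which is why a gain of this kind compounds rather than being a one-off, even if it takes a couple of iterations to reach.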
B
So that's why boards are pushing for this so heavily.
A
Yes. Yeah. Because the potential, the upside, is way higher than the downside.
B
Than the downside.
A
Yeah, yeah.
B
But it seems to be stalled. I mean, MIT came out with their report that said 95% fail.
A
Um, I've seen 80% fail. I'm sure there are different statistics.
B
Yeah, but it's high, no matter what.
A
Basically the same: it's high. And there are a lot of studies that have been done that I've looked into, and we have our own track record as well. A lot of those projects suffer from a lack of AI expertise, a lack of the process and logical frameworks I was talking about, and giving AI too much agency. Like if you were to, let's say, give an engineer an AI, just sign them up with, I don't know, Gemini Pro or some other account, or a Copilot, and say, okay, you've got your AI system, now use this to do your engineering quotes. There's so much that's...
B
So broad, too broad.
A
And it's just too broad. But you get that within automations too: you might have a step within the AI automation where you give the AI too much power to make its own choices. I don't know how technical we want to go, but with things like MCP servers. You want to keep the AI, even within the automation sequences, very narrowband. And what we'll often do is chain together multiple AI agents rather than have just one.
B
One that handles everything.
A
Yeah, one that handles everything. So if agent A says it should go up on the flowchart, that's all it does. And then the next agent makes another kind of micro-decision, or has some kind of framework. That kind of structure really helps a lot. So too much agency is another risk. Yeah, go ahead.
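The chained-agent pattern described here can be sketched with plain functions standing in for separately prompted agents. The ticket-routing domain, branch names, and decision rules are all hypothetical; the point is that each agent makes exactly one narrow micro-decision.

```python
# Minimal sketch of chaining narrowband agents: each one makes exactly one
# micro-decision. In production each function would be its own model call
# with a tightly scoped prompt.

def route_agent(ticket):
    """Agent A: decide only which branch of the flowchart applies."""
    return "refund" if "refund" in ticket.lower() else "support"

def refund_agent(ticket):
    """Agent B: decide only whether a refund can auto-approve."""
    return "auto-approve" if "under $50" in ticket else "human-review"

def support_agent(ticket):
    """Agent C: decide only which support queue gets the ticket."""
    return "hardware" if "machine" in ticket.lower() else "general"

def pipeline(ticket):
    """Chain the micro-decisions; no single agent sees the whole problem."""
    branch = route_agent(ticket)
    if branch == "refund":
        return branch, refund_agent(ticket)
    return branch, support_agent(ticket)
```

Because each agent's scope is so narrow, an error in one stays localized and is easy to test in isolation, which is the risk-reduction argument made in the conversation.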
B
No, no, I like what you said there, because I think a lot of people are like, "but the AI can do all this stuff, and I don't need a human in there making that decision anymore." That's where they start going wrong. Because you and I both know these AIs still hallucinate. They don't understand your business completely, and if you put too much trust in them without checking, and without having a subject matter expert there guiding them and giving them more context so they can do a better job, things start going in some direction you don't want, and it can happen very quickly.
A
Yes. I think also understanding how the system is going to be used, and how it integrates with people, is really important, which is what you're bringing up there. So for some implementations, something I check really quickly on a project is: is this a system that's meant to augment the person who's working? In which case maybe we don't need it to be more than 80% accurate to start, as a good baseline, and that might already save the person a lot. I'm thinking of one project that saves like four to five hours a week just by getting things to 80%. That's already significant.
B
That's...
A
So it doesn't need to be perfect; the person goes through, and they're the subject matter expert, and they can push back where needed. That's very different from a system that needs to operate autonomously and independently and needs to be highly accurate. That could be, say, a customer-facing AI agent on your website. For those kinds of systems, if they're having problems with hallucination or saying something incorrect, I would say: okay, let's look at more testing, to cover all the different edge cases. Let's do repeat testing. Let's test in all different kinds of ways: angry customer, happy customer, 20 different ways of saying the same thing, to see if we can trick the AI into saying something else. And then, architecturally, does the system have control layers in place? One thing that can help is: you get your answer back from the AI, but then you have another AI that looks at the answer and makes sure it's compliant, that it doesn't break policy, that it fits. Having that second, policing AI agent can also dramatically reduce error rates. So the architecture is important for these systems too.
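The "policing agent" control layer can be sketched as a second pass over the first agent's draft. Both agents are stubs here; a real system would make two separate model calls, and the policy phrase list and canned draft are invented for illustration.

```python
# Sketch of a guardrail layer: a second check inspects the draft answer
# before it reaches the user. Policy phrases and the draft are illustrative.

BANNED_PHRASES = ("guaranteed refund", "legal advice")  # hypothetical policy

def draft_agent(question):
    """First agent: produce a draft answer (stubbed model call)."""
    return "You have a guaranteed refund on all orders."

def policy_agent(answer):
    """Second agent: return True only if the draft passes policy."""
    return not any(phrase in answer.lower() for phrase in BANNED_PHRASES)

def answer_with_guardrail(question, fallback="Let me connect you with a human."):
    """Serve the draft only when the policing check passes."""
    draft = draft_agent(question)
    return draft if policy_agent(draft) else fallback
```

Here the draft violates the (invented) policy, so the guardrail substitutes a safe fallback; in practice the failing draft might instead be regenerated with the violation fed back as context.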
B
I'm glad you brought that up, because I blame all the big gen AI providers; they're out there saying this will replace everything. The Anthropic CEO just came out and said we're not going to need programmers in three to four months. That's just irresponsible to say. Because we do need subject matter experts; we need software architects; we need people who understand the architecture of human systems. Right? Because software is there to help humans, not the other way around.
A
I do think, though, to use what you just said about Anthropic and coding, there's a big shift happening. Right now a lot of programmers are using AI akin to... okay, imagine you're on a car manufacturing plant workflow, and you're welding together the doors on the car by hand, and you're like, ooh, these new robots are cool. Let me bring in a robot. So you bring in a robot next to you, and everyone's looking at the robot, and you ask the robot to pass you the tools at the right moment. A lot of the coding right now is kind of like that: using the AI for targeted assistance. But the shift that I believe Anthropic is talking about, which is a big shift that's already here, is that there are ways now to set up coding frameworks where you have a collection of AI agents that all work together.
B
Yes.
A
Where you have one that's a project manager, one that's the coder, one that's the tester, and so forth. The shift is that you're now sitting in the control room observing robots on the floor, and you're managing them, and they're the ones building the car. But that still requires a lot of skill.
B
A lot of, in fact, more skill, actually.
A
Yes, because you have to think very high-level. It's very focused on documentation and process, being able to break down tasks, and being able to review incorrect logic, incorrect assumptions, or gaps in assumptions, and so forth. So it's actually very demanding for the person running it, but in a different way. And it's also different from cars, because you can only make so many cars before everyone has one. But with software, there's really no cap on that. There's no upper limit.
B
Right.
A
We can write... there's unlimited software that can and will be written. I know we're going off topic right now into programming, but it's a fascinating area, and that area is poised for a really big shift in how people approach coding as well.
B
No, I agree. And it's probably the first industry that will get targeted, right? Because the cost is high and it's already in the digital realm, so I don't have to convert.
A
Right.
B
Like you're trying to do in the manufacturing realm, where I've got the real, physical world and the digital world colliding. With software engineers, it's all the digital world.
A
Right. So here's the irony. If companies right now are trying to get AI to write code so that they don't need as many software engineers, what's really going to happen is you're going to have just as many software engineers, but they're all going to be writing two to five times more code per hour by supervising systems. The same kinds of thought processes are needed; they're just higher-level thinking. It's kind of like the big shift from way back before we had programming languages, when we had to write programs in machine code. If I were to say to you right now, no, no, no, you should keep it traditional, just write all your software in machine code, you'd look at me like I'm crazy. And the amount of software we were writing didn't go down once we no longer had to write machine code. It's not like we had an upper-bound limit on how much software...
B
No, in fact we write more software now because it's easier to write. No one wants to write machine code.
A
So the same thing is going to happen here. We might see a dip, also related to the economy at large, where people are trying to save costs more than they're trying to supercharge software output. But it's definitely coming.
B
Yeah, absolutely.
A
That's my little flag on the hill.
B
Yeah. So Sebastian, if people want to learn more from you or engage with you, how do they go about reaching out to you?
A
Yeah. So FountainCity Tech is my company website; there's a contact form, a newsletter you can sign up for, and a blog if you want to get info that way. You can also find me on LinkedIn, first name, last name, pretty easy there. You can follow me; I'm not super prolific on LinkedIn, but I do post a bit. And I also have a YouTube channel, which I post to maybe once a week or once every other week, depending on the tempo of life.
B
Yes, I know how that goes. All right, Sebastian, thank you so much for coming on the show. It's been insightful. I always like talking to people on the ground, in the trenches, to find out what's really going on, instead of hearing from CEOs of multibillion-dollar companies who aren't connected to reality.
A
Yeah. Different incentives. Also different individuals.
B
So thanks again for coming on.
A
Thank you so much. It's been a pleasure. Yeah, thank you so much.
B
Thanks for listening to Embracing Digital Transformation. If you enjoyed today's conversation, give us five stars on your favorite podcasting app or on YouTube; it really helps others discover the show. If you want to go deeper, join our exclusive community at patreon.com/embracingdigital, where we share bonus content and you can connect with other changemakers like yourself. You can always find more resources at embracingdigital.org. Until next time, keep embracing the digital transformation.
Host: Dr. Darren Pulsipher
Guest: Sebastian Chandal, CEO of Fountain City
Release Date: February 12, 2026
This episode examines how organizations can effectively capture and scale "tribal expertise"—the unwritten, experience-based knowledge often held by a few key individuals—using AI-driven solutions. Dr. Darren Pulsipher and his guest, Sebastian Chandal, discuss the realities of digital transformation in the public and manufacturing sectors and focus on creating real, measurable ROI through people, process, and technology alignment.
On Latecomer Advantage:
"They go straight to the fast ones, right? ...There is that opportunity here...you get to take all the lessons learned and get to an endpoint that's at the cutting edge." – Sebastian (03:34)
On AI Change Management:
"It can feel, you know, threatening or taking all jobs away...It's kind of like back when the Internet was first launching...It's—we're in that space of like education and all the risks." – Sebastian (09:25–10:51)
On AI's Role in Expertise:
"This is a way to kind of make knowledge more accessible as well within the team." – Sebastian (13:23)
On Compounding Returns:
"The returns you get on that are perpetual. They're not just one time returns." – Sebastian (20:48)
On AI Project Failure:
"You want to keep the AI—even within automation sequences—very narrowband." – Sebastian (22:59)
The discussion blends pragmatism and optimism. Both speakers respect the need for structure, continuous learning, and the irreplaceable value of human expertise and oversight, even as AI continues to drive step-changes in organizational capability and productivity. The episode demystifies the actual path to sustainable digital transformation and cautions against “AI for AI’s sake.”
For listeners seeking a practical, savvy roadmap to integrating AI with lasting impact—this episode provides rich, experience-based guidance anchored in real-world execution.