
A
What they had not done for years, actually, they found an opportunity to make it reality with the use of generative AI. When generative AI came in, it opened the Pandora box of unstructured data.
B
Welcome to the ThinkAI podcast. Each week we talk about the most exciting AI research, tools, case studies and more. I'm your host, Dev Goyer, and I've been working behind the scenes in data and AI for over 30 years. Whether you are an AI expert, a skeptic or something in between, this podcast is for you. Today I'm joined by someone who has been building enterprise technology for global manufacturers for over two decades: Naresh Mehta. He is the CTO at a large consulting organization, managing its manufacturing business. He is a Forbes Technology Council member, a published voice on AI and quantum computing, and a regular speaker at Hannover Messe and Google Cloud Next on what AI is actually delivering for manufacturers right now. He has worked across India, Mexico and the US, and he sits at the front of one of the most consequential AI transformations happening in the world today: the rebuild of how things get done. Naresh, welcome to the show.
A
Thanks Dave, and thanks for having me. Thanks for the generous introduction. Looking forward to an insightful conversation.
B
Great, so let's get started. I wanted to start with your AI journey. Like me, you've been 20-plus years in the industry and have seen several waves, from dot-com to blockchain to now AI. When did AI move the needle for you? When did it go from reading about AI to shipping AI? When was the inflection point?
A
Yeah, actually, when you say it, it makes me realize I've been through a lot of waves in manufacturing, you know, from Excel dashboards to visualization tools, from traditional analytics and simulations to digital twins, from RPA to agentic AI now, and from traditional robotics to physical AI now. But I am most excited about this phase we are living in today, because right now is when we are actually seeing the worlds of digital AI and physical AI converge, and manufacturing is getting its own recognition and a new lease of life, because the world of physical intelligence is beginning to show up. Talking about the second part of your question, the inflection point: if you honestly ask me, the last 24 to 30 months have been extremely riveting. Why do I say that? Because AI was already there. We have been solving problems with machine learning for long, and quality defect detection and scrap reduction are areas where machine learning was already there. But when generative AI came in, it opened the Pandora's box of unstructured data, and all the low-hanging fruits in manufacturing, cutting across design all the way to after-sales services, just opened up. Generative AI became the catalyst for manufacturers to get that leapfrog effect. What they had not done for years, they found an opportunity to make a reality with generative AI, and all the low-hanging use cases for service technicians, repair technicians, shop floor analysts, supply chain analysts and design engineers, use cases to simplify their day-to-day work, became real. So for me, that GenAI moment with ChatGPT opened the Pandora's box. That was the critical point when manufacturers said, you know what, this is the catalyst I'll drive this with.
And then agentic AI took it to the next level. November 2024 is when agentic AI became a rage, and in less than six months you saw so many agents come up in a flurry. Some made it to adoption, some did not, and there are reasons why they did not. But that, for me, is the inflection point.
B
Very well said. For me it also started in '95, '96, working on AI algorithms when hardware was a big issue. Even today it has come full circle, but back then it was a different challenge altogether. We used to call it data mining, trying to understand the algorithms behind AI. From that point to being able to ship a product within a week, that's amazing. And it requires more discipline than ever on how AI is implemented, because customers are buying faster, but they're not buying AI, they're really buying the solution that AI can build. Correct?
A
Yeah, yeah. You know, it's interesting, and I say this whenever someone mentions that we can build and ship a product in one week. I always say this: the pace of innovation in AI, the rate at which AI is growing, is not equal to the pace of adoption in business.
B
True.
A
And that gap is continuously increasing. At one end we are continuously decreasing the cost to build and ship a feature. At the other end are the operating model challenges, the constraints, the issues on the ground, with businesses actually saying, boss, I want to do this, but I don't know a method to the madness of making it real, because I have so many deterministic rules I need to apply. I am manufacturing, I am financial services, I am insurance; the world operates differently for me. So building the bridge between these two is where the real work lies today.
B
Very well said again. And that leads me to the next question. You've worked in manufacturing for a long time, among a lot of other things. Where do you see manufacturing companies specifically going wrong on AI? Like we were saying, AI is a buzzword and everyone wants to hop on it, but what are they getting right or wrong? Where do they stand today in terms of AI? This is mainly about the tier-one customers with billions of rows of plant data. Maybe they have a deck from McKinsey, but what is going wrong on the ground? You can probably assess that better than anybody, hence the question to you.
A
Yeah. To be very honest with you, Dave, a lot of manufacturing enterprises have done well in the last year, but each one of them has had their learnings. Personally, I have learned so much in the last 30 months. But I can tell you a few things that came as an experiential viewpoint from what I've observed, and I think they are worth sharing in today's conversation. One is this whole POC-and-pilot mindset. One thing I realized is that the moment any enterprise takes a POC-first or pilot-first approach, half the battle is lost right there. Why do I say that? Because the moment you think POC-first or pilot-first, you will always talk about the happy path. You won't talk about the concerns you would have. You won't talk about data integrity, availability and quality, because your mindset is to make the POC successful. You're always looking at the happy path, and you won't be deeply worried about integration; you'll say, okay, I'll take care of that later. A POC or pilot gets delivered with a lot of assumptions, and eventually, when you take it to production, all these issues creep up. So the shift from a POC-and-pilot mindset to a production-first mindset is extremely critical to make things land on the ground. If you ask me, one big area where a lot of enterprises went wrong in the initial days is that they took this POC-first or pilot-first approach, only to realize that more than 90% of those things never moved to production, because the tough questions were not answered on day zero.

The moment you go in with a production-first mindset, you ask all those tough questions on day zero, and you immediately know whether the use case qualifies or not, whether there is ROI or not, whether it is business-viable, technology-feasible and people-desirable. All those questions get answered on day one. That's number one. Number two, where some manufacturers went wrong in the initial days, and in a few cases even today, is that they underestimate the change that is required. Manufacturing is a different beast. I'm not saying any other industry isn't humongous in itself, but manufacturing is different because there is a lot of IT-OT convergence of data. Honestly, if you think about it, initially data was residing in silos, and orgs were residing in silos. Now, thanks to AI, intelligence is also residing in silos, because every department in a manufacturing organization is building its own set of agents. Data silos, to org silos, to now intelligence silos. It will just not work that way; you will not realize the outcomes that manufacturing demands. The manufacturing model is basically stitching one single thread that connects the data silos, org silos and intelligence silos into one unified thread. Any enterprise that does that has the biggest success recipe for gaining those business outcomes. And don't ignore the hard part: anybody can come and build a model, but the plumbing is more important than the model. I always say this: in any AI project, AI is 5% of the effort and 95% is core engineering. Putting the model on top is the easier part; the technology is there, the promise is there, we all know it can happen. The plumbing is the difficult part.

So again, three things. One, move from a POC approach to a production-first mindset. Two, do not underestimate the change: data silos to org silos to intelligence silos, and connect the thread between all of them. And lastly, don't ignore the hard part, which is the plumbing; the model is the easier part to build.
B
That's very well articulated. One of the things we also do at ThinkAI is a lot of strategy and roadmapping, and one phrase I always use is that strategy without execution is just a dream, and execution without strategy is really a nightmare. A lot of people just go for the POC, and we fail on a lot of POCs. When I say we fail, I mean the customer doesn't go further. The POC has been delivered successfully, but then they realize there are data challenges. Most projects start as an AI project and become a data issue, then a data quality issue, then a process issue, and then a people issue, and those need to be fixed before you can really leverage AI. Most of the people issues can be handled with AI, but AI exposes things at an exponential level, so you get scared when it starts to expose what you are seeing. What we have noticed works is this: if you have a mission and a purpose, great. We build the whole roadmap and strategy, maybe for one to three years, and then we pick up one workflow for 90 days. You know, I worked with Ralph Kimball on data warehousing, and he used to say the same thing: do one subject area in 90 days and see what happens. Then you improve upon it. You fail early, you iterate, and you create a solution worthy of either saving revenue or generating revenue. But it's very difficult for people who are only looking at AI as a magic wand. It's the electricity running through everything, not a magic wand that changes everything for you. So that leads me to the next thing. We talked about a lot of challenges; what are the real-world wins? What kind of customers have real outcomes, real KPIs, measured over the past 12 months, say, because in three to six months you can't measure anything?

And what made a CFO or a COO or a plant manager say, yes, this is real, this is what I get out of data plus AI?
A
Actually, a couple of use cases come to mind, initiatives where we worked very closely with some manufacturers. Let me talk about one industrial conglomerate. This was for their supplier risk mitigation. And because we are talking about AI, let me pick an agentic example first. We have all talked about agents, agents, agents, and in a multi-agent system you have multiple agents stitched together. I'll talk about an example where agentic AI has genuinely taken shape: a largely deterministic sequence with a non-deterministic agent, but still a very tangible agentic outcome. So here is the scenario. Earlier, there were about 40 people doing manual work on just dynamically updating supplier risk scorecards. This industrial conglomerate is the gold standard in supplier risk parameters, which means they measure and monitor 200-plus parameters of supplier risk across multiple categories: financial and credit viability, a human capital index, drug trafficking, on-time delivery performance and so on. And these 40 people in the back office would go day in and day out into multiple portals, some government websites, some enterprise systems, some ERPs, collect all of this data and dynamically update the scorecards. The day-in, day-out task is to make sure nothing falls through the cracks, that there is complete visibility of every single supplier risk in the entire ecosystem across all 200-plus parameters, and that wherever there is a flag, a report is generated, and wherever a notification needs to be sent, it is sent.

So here we are at the crux of a problem that involves some reasoning which is non-deterministic, but a sequence of operations which is largely deterministic. We said this is a fabulous place to bring in agentic AI, not just to bring in technology for its own sake, but on actual merit. Across those 200-plus parameters there are about 13 different categories, and we automated about eight to nine of them completely through agentic workflows. There are eight agentic workflows that we stitched between all these supplier risk operations. There is an agentic workflow that will automatically go into the government website, scrape a particular piece of information, bring it back, marry that information with the enterprise systems, summarize and synthesize it using the power of LLMs, and then dynamically update the scorecard. There is another workflow that does the same thing for the human capital index, or for on-time delivery: review all the notes from different people in enterprise systems like SAP or Oracle and generate insights from there. These multi-agent workflows automated roughly 60 to 65% of the tasks, with the existing data quality. And that's extremely important, because oftentimes we say you have to do heavy lifting with the data. True, but there are some low-hanging fruits within a larger setup where you don't have to do that heavy lifting; there are minimal data sets you can tweak or transform to achieve the outcome you want. So with this 65% automation, the same amount of work this conglomerate did with 40 people has been transformed into a reduction of more than 20 people.

But primarily it is not about the 20-odd people; it is more about the significant reduction in the manual effort required. Now that industrial conglomerate says, I want to move from gold standard to platinum standard, which means they are increasing the security posture of the entire enterprise. They are reducing their risk exposure on the supplier side, which means better risk resiliency and better supply chain resiliency in terms of quantified outcomes. So there is one way of looking at it, that I reduced 20 to 25 people, but that's only a productivity mindset. You are also improving the compliance, security and risk posture of the entire organization, and with the dollar savings they moved spend from run to change and improved that security posture. That, I think, is a tangible outcome of an agentic transformation. A similar one comes to mind about an auto OEM. This is more on the physical AI side; physical AI is taking shape in the industry now, albeit in small ways. In this one, situationally aware robots were used for picking and sorting defective relays. What used to happen is that at the tail end of the assembly line you would realize the wipers are not working, or the windshield is not functioning as expected, and the reason was defective relays. Relays are small components, like switches, that make a function work. During the picking process, defective relays would come in that you cannot spot with the naked eye, so at the tail end of the assembly you would replace those relays to get things to work. Now a situationally aware robot walks through the aisles of the shelves where these relays are kept and solves this as a computer vision problem.

It is situationally aware, so you shift the entire inspection process left and drive more accuracy: not finding the defect at the tail end, but catching and arresting it right in the picking and sorting process. That's another example, where defect reduction to the tune of more than 25 to 30% has happened by doing it the right way.
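The scrape, merge, summarize and update loop described above can be sketched in a few lines. Everything below is hypothetical: the source names, fields, scoring rule and threshold are invented for illustration, and the LLM summarization step is stubbed out. It only shows the shape of one deterministic workflow wrapping a non-deterministic step.

```python
# Minimal sketch of one supplier-risk agentic workflow step.
# All field names, thresholds and data sources are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SupplierScorecard:
    supplier_id: str
    scores: dict = field(default_factory=dict)   # category -> 0-100 risk score
    flags: list = field(default_factory=list)

def fetch_external_signal(supplier_id: str) -> dict:
    # In the real workflow an agent scrapes a government portal;
    # here we return canned data for the sketch.
    return {"sanctions_hits": 0, "late_filings": 2}

def fetch_enterprise_signal(supplier_id: str) -> dict:
    # Stand-in for an ERP query (e.g. on-time delivery history).
    return {"otd_rate": 0.91, "open_disputes": 1}

def summarize_with_llm(merged: dict) -> str:
    # Placeholder for the non-deterministic LLM summarization step.
    return f"{len(merged)} signals merged; review recommended"

def update_scorecard(card: SupplierScorecard) -> SupplierScorecard:
    merged = {**fetch_external_signal(card.supplier_id),
              **fetch_enterprise_signal(card.supplier_id)}
    # Deterministic scoring rule; the weights are invented for the sketch.
    risk = 100 - round(merged["otd_rate"] * 100) + 10 * merged["late_filings"]
    card.scores["delivery_risk"] = min(risk, 100)
    if card.scores["delivery_risk"] > 25:
        card.flags.append(summarize_with_llm(merged))
    return card

card = update_scorecard(SupplierScorecard("SUP-001"))
print(card.scores, card.flags)
```

The point of the shape is the one Naresh makes: the sequence is deterministic and auditable, and only the summarization step leans on an LLM.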
B
Great example of agentic AI. We had a very small project with a mid-sized manufacturing company, a similar thing: their PO process, being part of a global supply chain, comes in all possible languages, prints and emails, with a lot of human engagement. Where agentic AI is really helpful is in automating those processes and making certain decisions based on rules and an understanding of past data. In their case, a small team wanted to outgrow what they had, and they had loyal people, so they trained them into different roles. They didn't fire anyone; you could call it a reduction in workforce, but the people were utilized differently. And I think that's one of the key things people need to understand: AI is not taking over your job, it's really transforming your job. If you're doing mundane, repeatable things, you should be concerned; you need to figure out what value you can provide to the organization and how you can make use of AI, just like when calculators came in, or computers, or the car. That's the thinking shift that needs to happen, more than getting scared or overly enthused about AI, right?
A
Yeah, yeah, totally agree. And I think it's less about taking jobs. AI has come in and said, I am the biggest leveler for you, which means I bring everybody to the same starting line now, and whoever works with AI and excels there will be the one to succeed. People who will not work with AI are definitely going to be left behind. AI is going to be the biggest leveler and the biggest accelerator for all of us. It amplifies the human value of why a human is required to do a particular thing. We will pick up the more strategic, more human-reasoning-oriented tasks and let AI handle the mundane pieces.
B
True, true. And that leads me to our next question. AI, and agentic AI specifically, is making its mark and is here to stay. But you touched on physical AI, and one angle from the software side is quantum, where AI is meeting quantum. I wrote about it on Forbes, and I think you wrote about it earlier this year. I assume part of the audience hears quantum and simply tunes out. The other thing I hear is manufacturing leaders saying, why the hell do I need quantum? That's the second problem. And then a lot of those who understand a little bit say, yeah, it's genuinely too early to worry about it. What do you have to say?
A
It's an interesting conversation and an interesting perspective. See, there are two different schools of thought on quantum realization. Neither of them is right or wrong; it's a perspective, and we are all seeing how it evolves. But I'll tell you my view: the marriage between AI and quantum is inevitable. And why do I say that? Because I truly believe the expectations of consumers like you and me, using AI in our own daily lives, are getting bigger and better by the day. Think about it: compare the way we put a prompt into ChatGPT, or OpenAI's models, or Anthropic's Claude, 12 months back versus today. The maturity of our prompts has increased, and it will only grow 10x. You will very soon get into a situation, and I have already started to see the patterns, where the expectations of what people want AI to solve get bigger and bigger by the day. Today they are testing agents, putting in prompts, generating complexity. Tomorrow they'll say, okay, I want to tie this data with that one. Then they'll say, I want X amount of permutations and combinations. The next evolution will be multi-combinatorial patterns. Things are going to evolve, and the classical computing wall will say, you know what, you are testing my limits. It's not AI saying I cannot do it; AI is saying I can do it, but classical computing is restricting me from pushing my limits. So quantum is not a replacement for AI, please don't get me wrong. Quantum is basically saying: there are some problems that will test the limits of classical computing, and here I am, able to help you. Take an example. You spoke about manufacturing, and I love manufacturing, so let me take manufacturing optimization problems. Today you talk about supply chain networks.

It's the biggest problem manufacturers still have not been able to solve. You still don't have tier-n visibility; you have tier one and tier two. Look at the problem dimensions. In one dimension you have plant-level dynamics, in another you have supplier-level dynamics. There is the tier-n supplier visibility that is always the ask. There are cross-country constraints, geopolitical and socio-economic. All of these need to be married and mashed together in a multi-combinatorial, non-linear path to get the right supply chain optimization. It's too large a problem for classical optimization to solve in real time. Classical computing is not the ideal means for multi-plant production sequencing; for capacity, material, energy and labor constraints; for geopolitical and socio-economic constraints; for network-wide inventory positioning under uncertain scenarios. This is where quantum will come and solve that piece of the puzzle. Is it near, is it far? Things are evolving. So I spoke about the demand for quantum; now let's talk about the supply. Initial results are extremely promising. We used quantum annealing techniques and quantum-inspired solvers to address a few problems piecemeal. Can we stitch them together? Yes. Is it easy? No, it's extremely expensive. As the next three to five years shape up, I feel that when it gets less expensive and becomes a little more real, those near-optimal solutions will take shape faster, and large entangled decision spaces like supply chain networks in manufacturing will become the low-hanging fruits. It's just a matter of time.

The way one big revolution happened with GenAI, as we spoke about in the initial part, the same thing will happen with quantum: one big breakthrough event that cuts down the cost of using quantum, and then things will start flowing in together. It's just a matter of time. I am a very strong believer that these two paths are bound to meet.
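For context on the annealing techniques mentioned above: quantum annealers and quantum-inspired solvers typically consume problems in QUBO form (quadratic unconstrained binary optimization), where costs sit on the diagonal of a matrix and constraint penalties sit off-diagonal. The toy below is purely hypothetical (two plants, two suppliers, invented weights) and is solved by classical brute force, just to show the encoding an annealer would receive:

```python
# Toy QUBO: the form quantum annealers / quantum-inspired solvers consume.
# x[i] = 1 means "choose pairing i" (0,1 = plant A's suppliers; 2,3 = plant B's).
# Diagonal entries are (negative) benefits; off-diagonal entries penalize
# a plant choosing two suppliers at once. All weights are invented.
from itertools import product

Q = {
    (0, 0): -2.0, (1, 1): -1.0, (2, 2): -1.5, (3, 3): -0.5,
    (0, 1): 4.0,  (2, 3): 4.0,   # one-supplier-per-plant penalty
}

def qubo_energy(x):
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

# Brute force stands in for the annealer on this 4-variable toy.
best = min(product([0, 1], repeat=4), key=qubo_energy)
print(best, qubo_energy(best))   # lowest-energy assignment
```

A real supply chain network would have far too many variables to enumerate, which is exactly the combinatorial wall Naresh describes; annealers search such energy landscapes without enumerating every assignment.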
B
No, I love that part, and I'm a big believer in quantum and how one qubit can change everything. Just going beyond 0 and 1 is going to change the game. People could say, yeah, we traditionally had the hexadecimal system, but this is different: it's about the hardware and how the computing itself will work, and I'm pretty hopeful. I have a patent in AI and neuroscience for the simulation and analysis of massive amounts of data; it could apply to outer space research as well. It's mind-boggling, and AI with the current infrastructure just cannot handle it. Like we were saying earlier, AI says, what's your limit, throw me the problems. Now it's the other side: AI will say, I cannot handle this much on my own, I need a companion, and that companion, the backend environment, will be quantum. I'm pretty hopeful it's going to flow through fairly quickly, even before we start really thinking about it; there'll be at least one use case, I'm pretty sure of that. Okay, so that leads me to the next question. We've been talking about hopeful, positive things, AI to quantum, but large enterprises also worry about governance. I've worked with healthcare for more than 15, 20 years now, and their cry is all about AI governance and measuring success: audit trails, ISO or FDA or IATF compliance, even basic MES validations, or PCI compliance. How do you tell a customer whether their AI investment is working? And how do you balance speed versus audit reality and governance?
A
I think it's the need of the hour; it's an extremely important question. The art of the possible is there; what matters is the art of the real. And the art of the real is this: can you make all these agents come together and do AI in extremely difficult situations, in a regulated scenario, in a plant full of legacy systems, with data that resides in silos, while maintaining confidence in the decisions those agents make? How do you get that? The practical pattern I see emerging today is that every enterprise needs an AI control plane, period. There are no second thoughts on that. Governance is not paperwork. This control plane defines policies, guardrails and evals, and it gets you auditability, traceability, explainability and observability across all of your workflows. Every single decision needs to be traceable; that is what will make AI come to life. You have a lot of the art of the possible with innovation, but you need to balance it with reality, and that reality check is the AI control plane: why did the agent behave the way it behaved? Observability matters much more today than ever. Every single step, every API call, every agent-to-agent discovery, every outcome and decision it took, every ERP update it made, every integration it did with an MES or a PLM needs to be traced, so that there is clear accountability of what AI did versus what a human did. Extremely important today.

So in my view, to answer your question, whether AI and agentic AI scale in a manufacturing enterprise largely depends on how this enterprise AI control plane comes through for the different entities: in a plant setup, a supply chain setup, a design engineering setup, a warehouse setup. It's no longer just the art of the possible. Yes, you need business outcomes like yield, downtime, cycle time and cost; business outcomes are important. But that's only one face of the scorecard. The other face is trust outcomes: auditability, data drift, model drift and the monitoring of those drifts, exception handling, incident rates. Both are important. Like I was saying earlier about the POC and ROI mindset, business outcomes and trust outcomes are both essential; if either of them is missing, the initiative becomes fragile. So all the systems and regulations you mentioned matter, and for anything you want to scale, this control plane will become the backbone of everything we do in AI.
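The traceability requirement above, recording every agent step with clear AI-versus-human accountability, can be sketched as a minimal decision-trace log. The event names, fields and agents below are illustrative, not from any real product:

```python
# Minimal sketch of a decision-trace log for an "AI control plane":
# every agent step is recorded so outcomes stay auditable.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TraceEvent:
    agent: str
    step: str          # e.g. "tool_call", "decision", "erp_update"
    detail: str
    actor: str         # "ai" or "human" -- who is accountable
    ts: float

class DecisionTrace:
    def __init__(self):
        self.events = []

    def record(self, agent, step, detail, actor="ai"):
        self.events.append(TraceEvent(agent, step, detail, actor, time.time()))

    def export(self):
        # JSON Lines make the trail easy to ship to an observability store.
        return "\n".join(json.dumps(asdict(e)) for e in self.events)

trace = DecisionTrace()
trace.record("risk-agent", "tool_call", "fetched sanctions list")
trace.record("risk-agent", "decision", "flagged supplier SUP-001")
trace.record("reviewer", "override", "cleared flag after review", actor="human")
print(trace.export())
```

The `actor` field is the key design choice here: it encodes the "what AI did versus what a human did" accountability that the control plane is meant to guarantee.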
B
What you brought up is a really good point. AI showing up on its own, without any guardrails, can and will cause a lot of havoc. I just read an article the other day: someone asked an agent to migrate a large legacy system with billions of rows, and there were no guardrails. When the agent saw so many data quality errors, it simply deleted the database. Fortunately they had regular backups, so they could roll back. When they asked the AI what happened, it honestly explained what it did and why; at least that's one thing you do get from AI. The guardrails had been forgotten. What that brings up for companies like yours and ours is that architectural thinking is still needed. You cannot just throw everything at ChatGPT or Claude or any other LLM out there, because they are only trained on what's out there in the world, and with so many permutations and combinations, they can pick up anything if you did not tell them to pick this versus that. That's where guardrails come in, and more importantly, governance. One thing we see is people talking about governance on its own, but what about the KPIs? What KPIs will determine the governance: the change, the defect rate in manufacturing as an example, the time to decision, the adoption? Just having an audit log will not cut it, because if the system is doing something wrong, the log will just capture it while nobody is monitoring it. What are the guardrails so it doesn't do the wrong thing, and if it does, so it improves upon it or at least brings a human into the loop? That's where I think the gap in AI implementation is today. Quick, short take: what do you think about that?
A
I mean, I would say there are two important aspects of gaps. One is basically there are no for known patterns, there is a known fix. Where there is a for known patterns, there is an easy fix. Where the biggest gap lies is for unknown patterns. What I mean by that is let's say if you are solving for oee overall equipment effectiveness on a shop floor and you know that there are systems and processes are running in a certain way and their underlying data quality is a challenge for a given specific use case and outcomes. You know that these are the different data sets, known patterns, known issues. Solvable takes time. You may automate feature engineering doable. The implementation gaps come in the second scenario where you do not know or do not have enough data. Data is there, quality is an issue. Solvable data is also not there. People say generate synthetic data, but it's easier said than done. You are basing a decision on something which is synthetically generated in a manufacturing setup. Highly unlikely that people will let agent make a decision on that. You need to ground that with a lot of context. That is an implementation gap Today to solve that gap is actually you need domain randomization. You need context as a service to flow in into those models and data and do that ground truthing. Unless that happens, it will be a problem. So that's a big, big implementation gap. Second is we know that every single systems foundational platforms will have their agents of their own. There is Salesforce with Agentforce, there is SAP with Joule, there is ServiceNow with Now Assist. These are all the systems that are there. Every single platform player will have their own systems which will have agents and agents if you will will become the systems of engagements of the future. The problem is not systems of engagement. The problem is not also the systems of record. The problem is the middle layer which is getting evolved at an unprecedented pace. 
So the problem today, the implementation gap, is that there is a problem of plenty in the middle layer. So many tools, so many models, so many orchestration platforms. What do I choose? What is best in which scenario? The underlying plumbing of the systems of intelligence is evolving at such a fast pace that by the time I realize I've fixed this, it has already evolved to the next phase. So there are two pieces of the implementation gap. One is the unprecedented evolution of the systems-of-intelligence layer. The other is the cases where, frugally, you have to understand how to marry synthetic data with a lot of grounded context to make it work. Those are the two challenges on the ground from an implementation standpoint that I would call out.
B
No, that's brilliant, and that articulates really well where things stand today. Now, knowing your level of depth in this, let's jump to the next question: what really surprised you? Let me qualify. Over the last two years we talked about agentic AI coming; in 2023 we had not predicted what ChatGPT would be able to do. So what is really surprising you about what is happening in AI right now — and what has stayed boringly true, meaning, what is really going to stay?
A
It's a very interesting question. I would say what surprised me is how quickly the ecosystem shifted. In 2023 I would never have predicted the pace at which the technology has grown and the ecosystem shift has happened. Look at the kinds of startup patterns evolving — unprecedented. In the last 24 months: in the first six months we had a set of startups focused only on creating agents from existing LLMs. I very fondly quote this one — there was a car wash company; someone built a small little agent for scheduling, tying up multiple pieces and automatically scheduling the service. A big startup, very successful, ROI generated in nine months. Then the next six months had startups who talked about orchestration — everybody realized that orchestrating agents across multiple platforms would become key, so orchestration-platform startups evolved like anything. The next six months, everybody said no, no, no — grounding is very important, evals, guardrails, observability are very important — and startups started flourishing on that. Today we are talking about harnesses. I mean, come on — in the last two years we have seen so many startups and patterns of startups evolve, and it's a very exciting phase to be in. You really see how fast the ecosystem and the pathways are shifting. I have seen customers who built agents a particular way for one year, only to realize at the end of the twelfth month that the whole platform landscape behind the scenes had completely changed and moved under their feet — completely turned around with what Anthropic and Claude brought in. The tooling and the marketplaces are evolving so fast that the do-it-yourself approach has become technical debt overnight. That's the world we are living in. That pattern continuously surprises me, and it will continue. That's why I feel it's a very exciting phase to be in.
The one thing that still remains boringly true is that data for AI is still a challenge, even to date. There are complicated IT landscapes, there are limited talent pools, and there is a persistent need for explainability and evals. Nothing moves until — actually, let me say this: things will move, but nothing will scale until you get this right. So this remains a challenge even after so many advancements. We are seeing some light at the end of the tunnel, I would say. I was just in another conversation where we started using AI for data-product creation, a lot of feature-engineering automation, a lot of the entire data-product lifecycle. I see a lot of promise there — AI doing the right level of feature engineering, data discovery, mining the right data sets, metadata model creation. If AI can augment in that way — we spoke about data for AI, but it's time we also talk about AI for data. And when AI for data works the right way, maybe data for AI will become an easier problem to solve. Probably. I would end my answer at that.
B
Again, very well said, and I couldn't agree more with you. I want to add just a few things. Clean data — I totally agree. AI is hungry for data, and clean data specifically. We used to say back then, right: garbage in, garbage out. With AI you can say garbage in, garbage out — at scale, because it's going to expose your issues at such a massive level you won't even believe what's going on. The other thing I see is clear ownership — a sponsor who won't let it die in month four. Because everybody is excited, you have FOMO — fear of missing out — and you just want to get a project done so you can have something under your belt. But after those 90 days, how do you really survive? It's not going to be glory; it will expose a lot of issues. How do you fix the people issues, the process issues, the technology issues, the data issues — and then not let it die? That's where I see the next wave and phase coming into play. And based on that, what would be your advice to manufacturing leaders starting today? You have been in operations and CIO and CTO roles for a long time. Mid-market manufacturers specifically typically only know a few systems: ERPs, some know MES, some even know APS and PLM — though a lot of those things, forecasting and supply chain, can also be managed with Excel. So a lot of them are stuck there right now, right? And the most they get is they put a Power BI type of technology in front and then say, okay, let's just do something with AI — so they try to find answers within Power BI. And then I realize: oh, from the get-go you are going in the wrong direction. That's not the right question. The question is: can AI help you serve your customers better, improve your processes, and help the people who are stuck in their day-to-day jobs and really overwhelmed?
So what would be your approach and thinking for them — what should they be doing this quarter, and what should they ignore on AI?
A
Yeah, I think it's a relevant question for a lot of customers. Large customers, of course, have put things in place, but for all the customers starting now, my recommendation is this. It's important to have a strategy — don't get me wrong — but don't start with large AI strategy decks. Identify your hotspots and prioritize them. Don't start with a solution trying to find a problem; find the problem that you want to solve. Technology is only a means to a destination, not the destination itself. You don't need a bazooka to kill an ant. In most cases you may not even need AI to solve a small problem — the answer might actually be right there in front of you, and you may not realize you don't need AI at all. So let's not do AI just for the sake of doing AI. Start with hotspots. What are your big problems? What do you want to solve for? What does the ROI look like? Is AI really required? Ask those tough questions on day zero, like I was telling you. And pick the two to three problems where decisions repeat daily and the value is so obvious that it becomes exciting to solve for. That's number one. Second — I mentioned it earlier, and I repeat it just to summarize — a production-first mindset is extremely critical, along with a mindset that puts a lot of onus on governance and adoption. It should not just be a lab demo; making it real should be the focus. Third, you don't have to boil the ocean and solve the entire plumbing. Don't come away saying, "Naresh told me plumbing is important, so I should invest in a mega one-year foundation project." No, please, that's not my point. The point is: identify those hotspots, prioritize the use cases, and identify the minimal plumbing that is required.
I don't want to stand up a big lakehouse project for four years and do no AI until then. No, that's not the point. The point is to pick a lightweight knowledge fabric, a graph-RAG scenario, a small ontology-driven approach for only those specific data sets that cater to a given use case — so that, let's say in a manufacturing context, your MES plus quality plus maintenance can all be bundled and reused, not rebuilt, for every single use case. That minimal plumbing, required to put these things together, is what I'm talking about. So that's point number three. And lastly, after all of this, if you are building agents — whether gen-AI-fueled, deterministic, or a hybrid mix of both — I think it's time to invest early in building that AI control plane for both orchestration and governance. Because what will really stand the moment of truth is when these agents hit production and are running. Take the example you mentioned, where the agent just deleted the records. Had there been a deterministic guardrail — no matter what, if these thresholds are breached, you trigger a human in the loop before deleting — it would not have happened. That deterministic guardrail and evaluation framework, for every criterion of your given use case, has to be part of your enterprise AI control plane. So those are the four things I would say any organization starting now should build into its life as it thinks through its overall AI strategy and implementation.
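The deterministic guardrail Naresh describes — trigger a human in the loop before any destructive action — can be sketched in a few lines. This is a minimal, hypothetical illustration: the names (`GuardedExecutor`, `HumanApprovalRequired`, the keyword list) are invented for the example and not part of any agent framework.

```python
# Illustrative sketch of a deterministic guardrail wrapped around agent actions.
# All names here are hypothetical -- the point is that destructive statements
# are blocked by a fixed rule, not left to the model's judgment.

DESTRUCTIVE_KEYWORDS = ("DROP", "DELETE", "TRUNCATE")

class HumanApprovalRequired(Exception):
    """Raised when an action is held for human-in-the-loop review."""

class GuardedExecutor:
    def __init__(self, approve):
        # `approve` is a callback that asks a human reviewer; returns True/False.
        self.approve = approve
        self.audit_log = []  # every proposed action is logged AND acted upon

    def run(self, sql: str) -> str:
        self.audit_log.append(sql)
        # Deterministic rule: destructive statements never auto-execute.
        if any(kw in sql.upper() for kw in DESTRUCTIVE_KEYWORDS):
            if not self.approve(sql):
                raise HumanApprovalRequired(f"Blocked pending review: {sql}")
        return f"executed: {sql}"

# Usage: the agent proposes statements; the guardrail decides.
executor = GuardedExecutor(approve=lambda sql: False)  # reviewer declines
print(executor.run("SELECT count(*) FROM orders"))     # safe, runs
try:
    executor.run("DROP TABLE orders")                  # destructive, blocked
except HumanApprovalRequired as e:
    print(e)
```

The key design choice: the block-list check is deterministic code, not a prompt instruction, so no amount of model drift can bypass it.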
B
That's a great point. And if I had to put an analogy to it: you're building a cake, and you can do it two ways. You can build the base layers and then the toppings on top, or you can build one slice first — get a taste of it with one or two data sources — and then assemble the slices into the whole. It may sound weird as an analogy, but it's the right approach. You make a small slice of the pie first and then build around it, and that will bring better success — or a better failure, if that slice fails. Either way, it's a great way to look at it.
A
Absolutely, and very well put. Like you said — from a taste perspective I don't know, but as an analogy it actually stands very true. Because you go slice by slice and build your stack, and maybe the next slice you build is better than the earlier one, because you have that learning. You're not starting from scratch; you're starting from the experience you left behind in the earlier slice.
B
Very well put. That brings me to the last few questions. One of the things I always ask our guests: you've been 20-plus years in the industry, you write for the Forbes Council — we both do that — and we have seen the different waves, from BI to big data to cloud to agentic, and now maybe quantum very soon. If you had to tell your early-2000s Naresh what he needed to hear, in one sentence, what would that be? What's the advice you would give? Because that's the advice for those who are starting their careers, or jumping to the next level as a tech leader or business leader — what do they need to hear? It could be anything: AI, tech, whatever is happening in the world, wherever you feel the right advice lies.
A
I'll probably give two pieces. One: AI is your passport to the future. For any person today — let me put it this way — it's not just okay to be AI literate or AI aware. You need to be AI fluent to survive and thrive in the world of tomorrow. So for any newbie college grad listening to this: AI fluency from the get-go is extremely important, and that is your passport to the future. That's number one. Number two: the only thing that can limit anything with AI is your imagination.
B
True.
A
For every single thing you can imagine, I think 95% of the time you will have answers through AI. And for the remaining 5% that AI will not be able to answer, you are the one pushing the boundaries of technology — and maybe quantum is the answer for that. So do not settle for something less, or mediocre, because if you accept mediocrity, you will become mediocre.
B
Very true, very well said, and great advice. My son is about 15 years old, and he sits in on a lot of the meetings I have — I work from home. I'm a disabled entrepreneur and dyslexic as well, so I look at a lot of things differently, and he looks at them through a different lens altogether. For example, he uses AI in his studies, which is probably a radical thing to say. How he does it: he takes the quizzes — the printed paper — and feeds them back to Gemini. He's made a bot with Gems over there, and then he has it prompt the questions to him — meaning Gemini is quizzing him — and he gets better and better and better. And then he's preparing for the next wave: his college, some of the dream schools on his list. So he started a company called admitshare, and he actually went into Figma and built out the whole use case. You know, back when I built a startup, it used to cost you at least half a million dollars to do something like that. And a 15-year-old kid is able to build the website, build the demo, assemble everything, start looking at the market — taking his problems and solving them for the world. I think that's the lens everybody needs to look through: what problem do you have in your life, and how can you use AI to solve it? It may not be solvable today, but if you keep hunting for answers, you're going to get them sooner or later.
A
I'll add another example to the very interesting one you shared — with my kid. I have a 10-year-old, and for his sister he came up with an idea. He said, Dad, why don't we plan some return gifts for, you know, sister's birthday. She's seven, actually — the second one — and she's a great storyteller. So he came up with the idea: we'll ask Gemini to create something. I said, what? He said, okay, we'll figure it out, let's start. He put on that microphone button, and my daughter, being my daughter, started telling a story about a mermaid and how kindness returns, and so on. So she told the story, and Gemini, with its reasoning power, did a fabulous job converting that story. Then my son put in a prompt — he actually made it a surprise for us too, so we could see how he was thinking. He said, okay, now you have this story; summarize it. Then he said, break this story into 10 parts — and it broke the entire story into 10 parts. Then he said, for every part, create a visual. So Gemini, with its image-creation capabilities, started building images, and created 10 images for the 10 parts. Then my daughter added: you know what, I don't want finished images — I want my friends who get the return gifts to actually color them. So the next prompt from my son was: okay, make this into a story coloring book. And here we are, with a storybook anybody can color — a mermaid story about kindness returning. And the little businessman in me said, okay — Amazon today asks in Kindle publishing whether an image is AI-generated or not. I said, yes, it is AI-generated, and we actually created a book out of it and published it on Kindle.
And my son fondly flaunts the small little kitty he has collected from the business on Amazon. It's amazing the way kids today think about leveraging AI and solving real things on the ground. It's a very small story, but think about the level at which today's generation is thinking.
B
That's amazing. And with that, I have one last question. We host this show — it's one of my passion projects, where I want to give back what I've learned in the last 30 years — and I want to cater to all kinds of audiences. There are the AI curious, there are AI enthusiasts, and there are AI skeptics. What one piece of advice would you give to each of those groups: the curious, the enthusiast, and the skeptic?
A
To all who are AI curious, I would say: the world is ripe for many more cups of coffee and tea — every single day is going to bring something different. Something I learned from a leader I fondly follow: he says AI is learning every single hour — are you learning? AI is learning 24 hours a day — are you learning even one hour a day about AI? For all the AI-curious: AI will keep getting better in knowledge and reasoning, so the curiosity should be about how you will stay at par with it. How will you differentiate while working with AI? How will you create that human-plus-AI operating model within yourself? That's what the curiosity should be for. For all AI enthusiasts: find a problem that AI can't solve today. The enthusiasm should be to challenge AI — to put in a prompt that produces a response that makes you think, this is an opportunity. That has been my personal passion, at least: that someday I put in a prompt and it gives me a response that makes me think, nah, this is not the response I expected — there's an opportunity here. Not because of hallucination, just because it's not there yet. And for all the naysayers who think AI is far off, or still probably not here, I think it's time to revisit the basics — not in a negative way. The biggest way to believe in AI is to make AI a part of your life; the biggest way to convert a naysayer into a promoter is to just experience it once. The basic shift is this: start your work-style and lifestyle changes with AI. Find one thing that you do and bring AI into it, and the confidence will accelerate like anything. Everything starts with failures. If there is a 5% success rate of AI moving into production and becoming a reality, that's an excellent percentage to have. So for all the naysayers: if 5% is the reality in the first two years, that's a fabulous reality. Let's embrace it.
B
Nicely done, nicely done. I want to add one example, and this could be an idea for a product. You know, we are doing this podcast, and I am always curious and always learning — 30% of my time I'm learning. One thing we don't see with AI today is optimizing your videos. It can identify all the challenges and gaps if you give it a transcript, and it can handle it that way. But working directly from a live video is something today's tools don't really do. If you go to any LLM tool today, it will ask you to take screenshots, or you can use Playwright and do browser sharing in the cloud and handle it that way — but there's no direct solution. I think this could be big, knowing how much video and content creation is out there; it's a problem just waiting to be solved. And I'm sure the moment we think about a problem, so many people are already out there trying to solve it. But that's something AI enthusiasts can take on. That's Naresh Mehta: CTO for a large consulting organization managing manufacturing businesses, writing on the Forbes Technology Council, and one of the clearest voices on what AI is actually delivering — that's my feeling right now. You can find him on LinkedIn and follow his writing through the Forbes Council and various other platforms. If you lead a plant, a network of plants, or any operation where AI has to survive shift three at 2 a.m., he's the person to connect with, to understand what AI can do and what it will do for you. That's the highest compliment I can give anybody: he knows this in and out and can give you an honest opinion. Thank you, Naresh. Thank you for being on the show.
A
Thanks, Dave. Thanks for the conversation, and thanks for having me on your podcast. Thank you.
B
Great. You have been listening to the ThinkAI podcast with Dave. Take one idea from this episode and turn it into action.
Title: Why 90% of Manufacturing POCs Die
Host: Dave Goyal
Guest: Naresh Mehta, CTO, Consultant & Forbes Technology Council Member
Date: May 12, 2026
This episode dives deep into the challenges and opportunities of implementing AI—especially generative and agentic AI—in manufacturing. Host Dave Goyal and guest Naresh Mehta, a leader with decades of experience in enterprise/manufacturing tech, dissect why so many AI proof of concepts (POCs) fail to reach production, the crucial intersection of business value and technical feasibility, and the future role of quantum computing in AI-driven operations. The conversation is rich with real-world cases, strategic advice, and practical cautions for AI leaders, enthusiasts, and skeptics.
Waves of Technological Change: Naresh has witnessed transitions from Excel dashboards to digital twins to agentic AI, but sees today’s convergence of digital and physical AI as most exciting.
AI Inflection Point:
“For me, that GenAI moment with ChatGPT actually opened a Pandora box... That was the critical point when manufacturers said, you know what, this is how I'll drive this with a catalyst." (Naresh, 03:38)
“The pace of innovation in AI and the rate at which AI is growing is not equal to the pace of adoption in business.” (Naresh, 05:13)
POC/Pilot-First Mindset:
“The moment any enterprise thinks of a POC first approach...half the battle is lost there.” (Naresh, 07:18)
Underestimating Change & Silo Proliferation:
Neglecting “Plumbing” (Core Engineering):
“In any AI project, AI effort is 5%, 95% is core engineering. If you ignore the engineering and the plumbing, AI...is the easier part.” (Naresh, 10:05)
“Strategy without execution is just a dream, and execution without strategy is really a nightmare.” (Dave, 11:27)
“It's not about the reduction... it is more to do with a significant reduction in manual effort... leading to improved compliance, security, and the risk posture of their entire organization.” (Naresh, 17:47)
“AI... is the biggest leveler for you. Whoever will work with AI and excel... will be the one to succeed.” (Naresh, 21:41)
“Quantum is not a replacement for AI... Quantum will come and help you where classical computing is restricting.” (Naresh, 24:39)
“Governance is not paperwork. This control plane... defining policies, guardrails, evals... will get you auditability, traceability, explainability, observability across all your workflows.” (Naresh, 31:08)
“You can build the base layers and then toppings on top, or you can build a slice first and get a taste of it...” (Dave, 49:54)
For Early-Career Tech Leaders:
“It's not just okay to be AI literate... You need to be AI fluent to survive and thrive.” (Naresh, 52:01)
Personal Anecdotes:
Both Dave and Naresh share how their children use AI practically, from turning stories into coloring books to customizing their study processes—showcasing the new baseline of digital fluency and creativity in the next generation.
On the POC Trap:
“The moment you think about PoC first or a pilot's first approach, you will always talk about happy path... The moment you go with the production first mindset, all those tough questions [get] answered on day zero.” (Naresh, 07:18, paraphrased)
On Talent:
“AI is only going to be the biggest leveler and biggest accelerator for all of us.” (Naresh, 21:41)
On AI Control Planes:
“Governance is not paperwork. This control plane... will get you auditability, traceability, explainability, observability across all your workflows.” (Naresh, 31:08)
On Data:
“Data for AI still remains a challenge... Nothing will scale until you get this right.” (Naresh, 42:44)
On Future-Proofing:
“You need to be AI fluent to survive and thrive in the world tomorrow... AI is your passport to the future.” (Naresh, 52:01)
The conversation is practical, candid, and rich with firsthand experiences and actionable lessons. Both speakers blend technical insight with business pragmatism, emphasizing that AI’s success in manufacturing demands grounded planning, strategic focus, relentless governance, and cultural adaptation. Listeners receive concrete frameworks for avoiding common pitfalls and for building scalable, resilient AI programs.
End of Summary