
A
Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com. Welcome to a special edition of Project Synapse, our AI show that we run every weekend covering trending topics in AI. This week we're sharing the show with our Cybersecurity Today listeners. And we have a special guest, Krish Banerjee. He's the managing partner for AI in Canada at Accenture. Welcome, Krish.
B
Good morning and thank you, Jim.
A
And of course, John Pinard, whose day job is a VP and CISO for a financial institution here in Canada. But every weekend he's a co host of our AI show, Project Synapse. Welcome, John.
C
Yep, morning.
A
Marcel can't be with us today, but the three of us are going to carry on. Just to start out with some of the news stories from this week. And Krish, you've been on the show before, but we just bounce these ideas around and add what we know about them or what we think about them. One of them, and this is a familiar one for us, John, is Google. Every time we do a show and we diss Google for not doing something, they come out and do it. So I think we're going to consider ourselves the Google whisperers, because two weeks ago we were wondering why Google hadn't integrated Gemini, which is a fantastic AI, with their tools, why you had to do stupid things like, oh, I want to fix this email, so I'm going to cut and paste it into another window. And then all of a sudden Google makes a big announcement and they've integrated it with everything: Workspace, Gmail. I don't know if either of you guys use Gmail or Workspace. I do, and it's fantastic so far. I'm happy to rave about them again because of the email things that they do and the intelligence of Gemini. I don't know if you've seen it before, but when Google first started out, and with many of these AI products, you get this clunky window that tries to take over your email and then gets in your way, and all you want to do is send a quick email and you can't do anything about it. But Google now elegantly lets you write the email. It underlines the mistakes. By the way, Grammarly is out of business, at least in terms of Workspace. But it underlines everything, and of course you can pop the window up and get advice on the email. It's just elegant, and it does the things that I want it to do: give me quick grammar and spell checks before you press Enter. If you're like me when you do email, you press Enter and then you see something is spelled wrong and go, oh no. So anyway, I was just happy as heck about that. Have you guys tried this at all?
C
I'm not a Gemini user or. Sorry, I am a Gemini user. I'm not a Google Workspace user. So yeah, I do a little bit with Google Docs, but that's about it. And so I haven't done a lot with it.
B
I have used Gemini, both for work as well as for more of the research type of stuff. I agree, it's pretty good. At work we have Copilot and Microsoft; that's the integrated platform that we have, and it's pretty good as well. The differences between these platforms are disappearing quite fast. One announcement, like Microsoft Copilot's coworker model, leads to another announcement, and then all of that kind of cascades.
A
Yeah.
C
And that's one of the things that I've said in previous shows too, Jim, and Krish: I keep saying that it's a bit of a leapfrog effect, where one company will come out with something, and then the next week another company will come out with that plus something else. I do use Gemini. I have two paid subscriptions at home, a Gemini paid subscription and a ChatGPT paid subscription. I still use ChatGPT, but I find I'm using Gemini more and more. And by the way, Krish, it's really nice to hear somebody else that uses Copilot at work. These guys keep giving me a hard time because that's what we use as well; we have Copilot integrated into our Microsoft suite.
B
Yeah, that's what we use at Accenture too, at least from a Microsoft 365 perspective.
C
That's yes.
B
It's integrated because we are on Outlook, so the integration is easy. But we do have Gemini, we have Claude, you name it. We have those as well.
A
Yeah, you need that in your line of work. I'm a big fan of the tools that I use, but I've been in both the position of John and you as well, Krish, in terms of consulting, and the tool that works the best is the one that you use. I've had people complain to me, this tool doesn't do everything. How much do you use it? Not a lot. Well, use it more and you'll get more from it. A couple of other things happened this week, and two of them are related. We had the big agent explosion over the past couple of weeks with OpenClaw and Moltbook, and then OpenClaw went to OpenAI. I don't know how you buy open source software, but you take the founder, I guess. Then Meta came in and bought Moltbook. And I'm sure you guys remember Moltbook, but for our audience, Moltbook was supposed to be a social media network for agents. I think that was nonsense, but I think there were a lot more people there than there were agents.
B
Reddit for agents.
A
Yeah, yeah. Reddit for agents. That's a better way of putting it. And I looked at it; it didn't look anywhere near as nasty as some of the Reddit forums I've been in.
B
It was getting there. I've been following it, and if you have followed it from when it started till the time it got taken over, the discussions have changed. And that's the whole point here: there is a self-learning aspect to it. There were comments around how they would develop a language that they don't want humans to understand, which is surreal, just mind-blowing to think about. That is agent-to-agent communication, an agent saying we need to build some kind of communication protocol that humans don't understand.
C
Yeah, isn't that comforting?
A
But my feeling is that some of those discussions were manipulated. Not that they couldn't occur, because that's happened before; somebody has put two AIs talking to each other and they did evolve their own language. But I felt it was more manipulated and sensational. That's just a feeling, you can't prove it. It's the usual AI-is-going-to-take-over-the-world-and-destroy-all-humans thing. And I don't blame them, I feel that way some days, but that's being a grouchy AI sort of thing. Meta taking that over is a really interesting mix, and it'll be worth seeing how they move into this, because everybody's struggling with the business model of AI. The obvious ones we look at are OpenAI. Are they going to be able to make money? Or how does Claude make its money, and all that. Meta has a whole AI that nobody thinks about, and they have a perfect way to make money from it. They're great at making money from social networks and from the use of AI, and I think it's going to be interesting to see the way they take this. It's a new way of looking at how people are going to evolve. And Nvidia joined the group looking at the next generation of how they're going to make money, or how they're going to continue themselves. For listeners who don't know, they pulled back. Nvidia was supposed to invest $100 billion in OpenAI. They got to about 30 billion and they stopped, and Jensen Huang came out and said, basically, I don't know if we're going to do any more of this. But Nvidia has built a network now, and I think Meta will do this Reddit-type network. That's a great thing for Meta; that's a consumer type of social network.
But Nvidia, and maybe you've thought about this, Krish, has an enterprise software agent network for corporations. I've been thinking about this a lot, because the next level of ERP, of enterprise software, is going to be agentic. If they pull this off and manage to build the coalition that develops those real enterprise agents and runs them, they're going to be in a great place for the multiple hundreds of billions in spending every year. Have you been watching that, Krish?
B
If you think of Nvidia, they're evolving from being a chip maker to really an infrastructure backbone. That's the strategy you've seen from them over the last year or so, and now they're getting into more of the agent-based enterprise systems. Compare it with the coworker model that we've seen coming out over the last few weeks to months; they are capturing that space. And if you think about compute demand, it's nearly doubling every three to four months in terms of global AI compute. That's the opportunity for them. The chip opportunity will become commoditized at some point. But how do you capture the AI infrastructure, the AI compute opportunity?
A
People used to think Moore's Law was fast, and that was a doubling every 18 months. You're saying every three to four months we're doubling our demand. That's spectacular.
B
The last report I saw, it's every three to four months. The global AI compute demand is almost doubling. Wow.
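To put that reaction in numbers, here is a rough back-of-the-envelope sketch of how the two doubling rates compare. The three-to-four-month figure is the guest's cited estimate, not an independently verified number; the 3.5-month midpoint is our assumption.

```python
# Annualized growth implied by a given doubling period (in months).
def annual_growth_factor(doubling_months: float) -> float:
    """How many times a quantity multiplies over 12 months."""
    return 2 ** (12 / doubling_months)

moore = annual_growth_factor(18)   # classic ~18-month Moore's Law doubling
ai = annual_growth_factor(3.5)     # cited 3-4 month doubling (midpoint)

print(f"Moore's Law pace: ~{moore:.1f}x per year")       # ~1.6x
print(f"Cited AI compute pace: ~{ai:.1f}x per year")     # ~10.8x
```

Doubling every 3.5 months works out to roughly an order of magnitude of growth per year, versus about 1.6x per year at the classic 18-month pace, which is why the comparison sounds so startling.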
A
And then Nvidia is getting into autonomous vehicles. The same thing.
C
Yeah.
A
I think they're so smart as a company. They've let Tesla and Waymo do the hard work and develop all these things. Now they come in and say, okay, we'll develop a facility that can be used for autonomous vehicles and everybody can use it. And once again, that everybody is going to be using Nvidia chips. It's a brilliant strategy. It gets back to that idea we used to teach in business school: the people who made money during the Gold Rush were not the people who looked for gold. A few of them got rich, but most of them didn't make any money at all. Who made money? The people who sold them the pickaxes, the wagons, and, for Levi Strauss, the jeans. Great stuff. Last thing, and since we knew you were coming onto the show, I wanted to talk to you about this, and then I want to get into the whole question of AI anxiety. My favorite YouTuber, Nate Jones, came out with a great model, because this has been the great discussion: why? I think MIT came up with a report that said we can plot how much of an occupation could be done with AI, and then plot how much is actually being done with AI, and it's a big difference. We have the promise and we have the ability, but we're not seeing the uptake. And that constantly comes back. You see it in everybody's reports: only 5% of executives say they're getting benefits. And yet we know, especially if you know business, how much of the job could be replaced. We play with it every day, and we're not seeing that degree of uptake. And Nate Jones came up with this idea that maybe it's just the way we're doing it: we're trying to automate jobs end to end, like we used to do with chatbots, where we come in, type something in, and expect the whole report to come out, instead of asking, how can I break down the job? How can I supervise the AI properly?
And have somebody say, oh, that's right, that's not right. And you start to think about it. We need to think about AI not, as somebody used to say, as junior employees. I think we have to think about these AI capabilities as a new way of architecting our business, or we're never going to get anywhere, because they'll never be perfect. We accept imperfection from humans, but we can't manage it with AI. Krish, this is what you do all the time. Are you seeing the same reluctance, and are you finding that people are starting to think about ways to overcome it?
B
I think it's getting better. If I were to go back to when we started talking, maybe a year or six or nine months back, there was a lot more skepticism and a lot more fear about getting hands-on with AI. But I see more and more people, not just people who are in consulting or who are dealing with AI every day, but people in general who have no clue what AI is, for whom it is becoming more of an ambient layer, in my way of explaining it. Everything you do, AI is part of it. You wear the Meta sunglasses from Ray-Ban; AI is part of them. You don't have to figure it out, it becomes part of the jewelry, the thing that you wear. And then you ask a question, pointing and looking at something, say, tell me what this building is, and it tells you about it. What's happening behind the scenes is Meta AI; it's the Llama models that are probably working back there. So that ambient layer means we're not making conscious decisions every day that I'm going to use AI. No one thinks about using digital applications anymore. When it was a thing, the first time you could order pizza on a digital app instead of having to call, it was a big deal. I think AI will just become like that at some point.
C
But, Krish, I agree that there are more and more people using AI for a whole variety of different things, like you said, even down to the Meta glasses and so on. But how do you find the difference in uptake between, I'll call it traditional AI, so the chatbot, what I call the personal assistant, versus agentic AI?
B
I think agentic AI is still in the labs in many ways. A lot of organizations, at least the tier one Canadian organizations, are all using some form or shape of agentic AI. Where the opportunity lies is: have we actually used agentic AI in our business processes? It's a great thing to have a chatbot. It's a great thing to have one process in your overall ecosystem automated. But have you actually looked at how to automate your procurement business processes? Have you looked at reinventing your HR and marketing? I think that's the opportunity, and I'm seeing people starting to think about it, but they haven't really thought about the full reinvention of all of those business processes.
C
I was going to say, it almost sounds like years ago, when people used to talk about process redesign and re-engineering of the business. It really sounds like what's required now is process redesign to incorporate agentic AI into your processes, and not just trying to slap it in various places.
B
Because if you put AI into something that is broken, you're basically scaling the...
C
Making it break faster.
B
Yeah, exactly. You're promoting digital bureaucracy.
C
Yes. The other part of what we're talking about with agentic is, how do you find, in your experience, that there's a real need to go back and make sure that your data is secure, I'll call it pre-agentic, before you start utilizing any kind of agents? Making sure that your data is safe and secure and properly classified. Because one of my concerns from a corporate standpoint is: it's one thing if Sally accidentally has access to some data right now that she shouldn't. That's wrong, but it's probably not going to kill us, because Sally's one person and can only look at things so quickly. But now, if Sally's running an agent that can run 24/7, 365 and do things dramatically faster than Sally did, you run into the problem of an agent having access to data that it shouldn't have.
B
Yeah, I completely agree that the guardrails need to be there. It depends on the industry, depends on the type of data. At the same time, if you look at the sniff test we were talking about, it's about reasonableness verification. You need to verify reasonableness, and sometimes you could be 80% right when it comes to AI, and the 20% could be manual, human in the loop, human in the lead. And that's good enough. I think sometimes we are running for perfection, and that is causing some of the concerns. And to your point, if there's one person getting access, sure, can we put the right guardrails in place to make sure that doesn't happen, and if it happens, what are the mitigations? But you may end up not doing anything with AI for the next nine months just trying to solve that problem. Is it worth it?
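The guardrail John is worried about, an agent inheriting access its human owner shouldn't have, can be sketched as a deny-by-default access check in an agent runtime. Everything here (the agent IDs, dataset names, and the `can_access` helper) is hypothetical, just to illustrate the pattern:

```python
# Hypothetical sketch of a deny-by-default guardrail: an agent may touch
# only the datasets it has been explicitly granted, mirroring (and never
# widening) what its human owner is allowed to see.
ALLOWED_DATASETS: dict[str, set[str]] = {
    "sally-agent": {"invoices", "vendor-contracts"},  # explicit grants
}

def can_access(agent_id: str, dataset: str) -> bool:
    """Return True only for explicitly granted (agent, dataset) pairs."""
    return dataset in ALLOWED_DATASETS.get(agent_id, set())

print(can_access("sally-agent", "invoices"))  # prints: True (granted)
print(can_access("sally-agent", "payroll"))   # prints: False (never granted)
```

The point of the sketch is the default: an unknown agent, or an ungranted dataset, is denied without any special case, which matters far more at agent speed (24/7) than it did at human speed.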
A
And I've seen this before in IT. I wrote a paper on it for the Cutter Journal. At one point I was doing a big telco system, and we were going to build this massively great system to avoid a couple of errors. And those errors were that the way Bell expressed where they put things was not by postal code, so you couldn't do an edit. So we were going to spend all of this time and money trying to figure out how to solve this problem. I went and asked one question: who do you have that's really successful at dealing with this? They had this one lady, and she was really up on this. I went to her, and you know what she had on her terminal? A whole pile of yellow sticky notes for the 15 items that were there. Literally, we were going to spend a million and a half dollars building a system to fix several error areas like that, which training would have gotten rid of, or just making sure people had experience. And that opened my eyes. I did almost 10 years of lean work, just pulling bad things out of processes and understanding that many times we were trying to automate things we should have just abolished, or should have taken a new look at and asked, why automate this? We can fix this with training; we can fix this with experience. And I think that would be the maturity of AI: to stop thinking of it as end to end, that we'll install it and it'll fix this problem, and instead ask, what components could I do well with AI? What components can I do well with people? Where do I need the exactness of an algorithm versus the experience that says that number is just wrong? I've had a lot of bosses who didn't redo my work, especially when I was a young employee working in finance. Somebody would look at it and go, Jim, there's something wrong here. And basically that's what they got paid for.
B
You're spot on, Jim. When we go and look at business processes, or existing processes at a number of our clients, I often find that there's probably a good 20 to 30% that's pure RPA type of work, and that's good enough; it does not require AI. And I think that's a realization we need to absorb: you are looking for an outcome that is going to be 30 or 40% more efficient, more productive, that improves your NPS score, helps you with customer acquisition, or saves you money from a bottom-line perspective. How you get there, and I'm saying this being the AI lead for Accenture, doesn't always need to be AI. I think that's an important acceptance, because sometimes we run after a bit of an AI wash, let me wash everything with AI, and we don't have to.
A
Yeah. Interesting. And by the way, Krish, about John's questions: he's also in charge of security.
B
I saw the CISO mug.
C
Yes. I was just going to say that I'm a big fan. Part of the reason I'm on this show with Jim is I'm a big fan of AI. But one of the things, and it's more so with agentic than traditional AI, is that it scares the hell out of me, because it feels a little bit like a black box: you put the data in and it spits answers out, and you go, okay, how did it get that? It's not like traditional programming, where you can follow the code through and verify it. So I don't want to say I'm skeptical, but I'm a little bit leery, and again, because of the cybersecurity hat, it makes me a little bit nervous.
B
You're in a regulated industry as well.
C
Yes.
B
So that's the other aspect, because the industry, the type of data, and the type of obligations you have to your customers and shareholders are also going to decide the kind of protection you need to consider.
A
Yeah. And one of the things I love about having John on the show is the way he's thinking about these things and trying to wrap his mind around this. It's different from me. I don't work for anybody anymore; I can put in as much AI as I can. I created a little publishing company that is 80% AI. We automate everything, including my expenses, which I toss in and say, send those to my accountant. What used to take me 45 minutes of going, what is this one, what is this one, is done. And that's how I attacked it: when I was stuck, wanting to have a publishing company but unable to afford a massive number of employees, I said, anything that's not the value of publishing, anything where a human shouldn't have to do the work, we're going to get rid of. And I think we've done that in a small business. I know you can do that in a big regulated business too, but the constraints are bigger and the challenges are bigger. It's great to be a cheerleader, but then you have to do it in real time. What's that old ad? Was it an Accenture ad about changing the engines on a plane in mid-flight? I think it was. They were trying to explain consulting: here's what we're doing for you.
B
We do those things for our clients as well. So I'm sure it could be ours, no doubt about it.
A
But it was the idea that a big corporation making a change was like flying an airplane while trying to change the engines, a massive...
B
We had to change ourselves as well. If you look at the growth of AI in the last few years, AI has always been there, as we all know. But since it has become more of a commoditized opportunity for all of us, think about the change we have had to go through to move Accenture people to use AI. We are like a small country, 800,000 people globally in Accenture. And that's what we have done. Our leadership and our CEO, Julie Sweet, have been very vocal that we need to be our own best credential if we're going to go and talk about AI to anyone. So we have completely changed our marketing function. We have changed the way we look at HR and legal and all of our corporate functions. We went through a massive transformation, and the same for our people: every single person within Accenture, irrespective of level, has to go through a certain amount of agentic and AI training, and that includes certification from Stanford.
C
That's a phenomenal way to do it, because the only way to do it properly is to train your people ahead of time on how to use the tools, before you just throw them out and say, here, go use it. Sorry, when you said 800,000 employees, I'm sitting there going, God, I can only imagine what their Copilot licensing costs are.
A
I'm sure they've got somebody driving a deal. Don't worry.
C
Yes, no doubt.
A
But this is the thing I loved about working in a consulting company, and something that I think people could adopt. I ran a worldwide practice with the former DMR Group, and we did a lot of training. We invested heavily in training, both for knowledge and for culture, and those are two important things. I think Accenture does that exceptionally well: investing the money and training your people so that they're not only knowledgeable, but they have an understanding of the culture of the company, how you move forward, and how you work. And it's why I think you can have 800,000 employees. People might argue with this, but in my experience, if you talk to somebody in Accenture who works in another country, they're going to have a lot in common with somebody who works in Canada, because there's a thread of understanding of the purpose of the company and how it works. These are big companies and all that, but there's that thread. I think companies could learn from that.
B
No, I feel proud of what we have done. And I think it's a good segue, if you want to jump into the topic of AI anxiety, because when this does not happen...
C
Yes.
B
It leads to confusion, it leads to distrust, it leads to all the conditions that cause anxiety. That's one of the things we were hearing: people were anxious, people were nervous about what's going to happen. In consulting, people don't always jump to talk about job loss, but they think about relevancy. They think, am I going to be relevant in conversations, in discussions, in my peer group, in front of my clients? That is an anxiety. And I think we addressed that with the training, with the knowledge, giving people the tools and letting them use them. That's a good way of addressing it, and of acknowledging that it's okay to have that conversation.
C
And I think some of the anxiety anyways comes from the fear of the unknown.
A
Right.
C
If, as you said, Accenture has done a really good job of educating people and sharing information with them, that helps to tame that anxiety, versus a company that says, oh yeah, we're rolling out AI across the board, and everybody goes, oh my God, I keep hearing about people losing jobs over AI. Does that mean I'm going to lose my job? How are things going to change? The communication part, I think, is huge. And the education.
A
Yeah. And I think this is now catching up with us. From first looking at ChatGPT in, what was it, November 2022? Everybody knows where they were when John Lennon died, and everybody knows where they were when ChatGPT came in. We were so amazed by this thing that could write a poem, that could talk to us, that behaved in a way that no software had, except maybe in science fiction. So we were enthralled with that, and that has exploded. I think there is a huge anxiety in companies. I think there's a huge anxiety in society, and I'm starting to see it. And I don't think that the people who are rolling this out, and even the big AI companies, which I've been quite critical of, really understand the feelings that people are going through with this. And it's relevant when you're talking about a company. It's one thing to talk about the job loss that's going to be out there in the future, and it will come; I'm absolutely convinced of that. Geoffrey Hinton was in front of a parliamentary committee, and you could see the impatience in the man, saying, you're not preparing for this. You're not preparing for what's going to be a massive transformation of the workforce and our taxation and all of that. But that's in the future, and we're not good at the future. When we see ourselves being affected, it's that old saying that when the guy next door loses his job, it's a recession; when you lose your job, it's a depression. And I think that's what's happening. People are seeing it affect them in their own lives and starting to worry about it, no matter where they are.
B
And I find most organizations are trying to tackle the job, which is okay, but I would rather start with the task, because there's a difference between the job and the task. The fact that the work is going to change doesn't mean that the human being is not required anymore.
A
It's...
B
The work is going to change. I may not be doing the exact same thing I'm doing today. I may be doing more audit, more decision making, more validation, versus actually doing the stuff and the steps that I was doing. So it changes the work, which in turn changes the workforce. Then you think about what kind of a workforce you need for the future. How do I prepare for that? What training, what tools, what kind of empowerment do I need to give them? The pyramid is going to shift upwards. How do I give the junior entry levels the tools and capability to operate a couple of levels higher right from the get-go? That changes the workforce. Then we need to think about what we need from the workers. I think we are jumping to the workers first in many cases and saying, oh, I have a contact center, I need to displace X amount of people because I need to save cost. Instead, if I think about what processes I'm running in my contact center, what work needs to change, what tasks need to change, and then decide and design the workforce of the future, then you can have a better worker. And I think that builds trust.
A
Yeah, I think one of the things we also don't do is think of work in terms of outcomes. And this is one of the things that I think freezes organizations. When you think about who you are in the organization, your processes become important. You've got to defend your data, your processes, and what you do in the organization, instead of asking yourself, what's our outcome, and what's the best way to utilize both people and technology to get that outcome? And that gets broken very quickly. We're good at thinking about inputs to a process, and we're good at protecting and managing those, but we forget the outcome. I think that's part of what you're saying, Krish: if you're looking at the outcomes people can have, you can start to ask questions like, wait a minute, a junior employee, why are they junior? Because they don't have enough knowledge to make these decisions. But how else could we employ them? How else could we use them? If we give them the tools...
B
they can make different kinds of decisions that they were not making today.
A
Yeah. And we talk about the errors that AI makes, but I have taken on a lot of new employees, and I wish I could give them a tool that says, look, just work your stuff out, then check it using this and see if you get a different answer. I think we can find different ways to use the different tools we have. But the anxiety piece goes deeper as well, and I don't want to lose the point I was making about consulting companies: investing in the training and getting people to understand what this can do before we start asking them how we're going to employ it. Because I think if people have an understanding of AI, they will do things that make their jobs easier. I believe that people will act in the interest of their own personal efficiency, if nothing else, if it's offered.
B
And you can use AI to your advantage as well. I'll give you my personal example. Early on, I was, I'd say, borderline anxious about what I needed to learn, because, just because of my title, I have a responsibility to be ahead of others.
C
You need to know a little bit about AI is what you're saying.
A
Yeah, yeah. My mentor John Thorpe once said to me, you don't have to be the smartest person in the room. I said, I'm the head of the practice worldwide. I love what you just said, but I have to try to be the smartest person in the room.
B
Yeah. So there was a phase when it was, okay, I need to read MIT, I need to read this, I need to read that article. And it was overwhelming. And it's overwhelming for a lot of people. If it's overwhelming for people who have access to a lot of tools, I can only imagine what it's like for people who don't have access to tools but are hearing about all of these things. So I used AI to my advantage. I wrote a couple of agents that summarize information and send me an email every Friday: here's what happened in the world of AI. I take a quick glance, and at least I know enough to be relevant, or can have a few sound bites for conversations like this. So use AI to your advantage as well. Even in the field of mental health and neurodiversity, there's a lot of AI-based diagnosis happening. It's a bit of a circle: AI is causing some of that anxiety, and you can also use AI to preempt or remediate some of it from a mental health perspective.
A
That's interesting, because we do focus on the reverse. I was thinking about this the other day: some of the smartest people I know in business are dyslexic, or neurodiverse in one way or another, and you wouldn't know it unless you actually talked to them. They've used what some people might think of as a limitation as a way to drive themselves forward. My friend Mark, I won't give his last name, but I didn't know he was dyslexic; I'd known him for years before he finally told me. He was one of the most well-read people in the world, and I asked, how do you keep that up? How do you do that? He said, hard work. And now I'm thinking, in some cases you can have books read to you; you can attack information in different ways. It's sad to say, my eyesight isn't what it used to be, and after three hours of staring at a screen, I don't know if I can read that stuff anymore. I'm much more likely to say, look, I'm just going to have something read this to me. And I think executives must suffer from this a lot: you get a stack of things you're supposed to read. Could I summarize some of them? Is that cheating? Maybe, but it helps you cope with the volume of information you've got. It's an interesting thing. Krish started me thinking about that.
B
It's not difficult to write a couple of agents. I have seen people who have never done coding, who have never been close to calling themselves technical, actually start doing that. This is something that, again, at Accenture we have started doing in a democratized way: how you can actually use agentic AI in your own work. We have in-office workshops where people come in and sit down with their laptops, and that includes managing directors, partners at all levels and analysts, and build agents. The agents could be about helping me plan my next trip to Italy, summarizing AI knowledge for me, giving me a summary of my emails. The use cases are different. You find your own bottlenecks in your day, do your own kind of day-in-the-life journey map, and ask: where can I be more efficient? I think there are many opportunities.
A
Yeah. By the way, when we talk about anxiety, I think we ignore senior management and executives at our peril, because the myth is, oh, they don't understand this technology, they're not interested. Many of them are interested, but we forget that they have jobs too, and some of them are incredibly busy. And taking a risk in front of a large group may not be everybody's style. When I was at Ernst & Young, one of the reasons I think I got ahead there was that I would help partners with their computers, and I didn't look like a jerk while I was doing it. I respected that they knew a lot more about a lot of things than I did, and if I could share my knowledge of how to use a PC and get some of their experience and understanding in return, wow, I was getting a big benefit out of that, instead of just treating them like they didn't understand how to do this. I've seen tech people do this: they walk into an executive's office with that attitude from the start, and they don't realize these aren't people who are stupid; they're just smart at a lot of other things. But if we want executives to be able to run companies understanding AI, they have to somehow get that experience.
B
And I think the good thing about AI is that it is becoming a great leveler, to a large extent, from a capability perspective. You can build an agent with just English-language prompts or execution-level commands. You don't need to know Python or Java or any programming language to do that, and that's a great way of leveling the playing field. I'm already seeing it happen: people who would otherwise have depended on a group of engineers to do something for them can build reports, can build tools. On our Canadian leadership team, our Canadian CEO asked everyone reporting directly to him to have a session on building agents: how do we build agents, how do we learn to do that? And now there's a whole flurry of things happening. Everyone's trying to be more efficient, trying to come up with new ideas, and it's spreading. When you do that at the top, and you start sending that information to a broader group of people, it spreads. It is very infectious.
C
Yeah, yeah. We've talked about anxiety and fear, and I think part of it is what I'd call fear of the unknown. And Jim, you talked about the executives as well: they don't want to look stupid in front of others in the company, because they're an executive, they should know all of this stuff. But I think the big thing is, how do you get started learning it? How do I know what to do with agentic AI when I don't even know what it is? So I think, Krish, to your point, the idea of providing training often and making it available to everybody in the company is a huge stepping stone to getting you where you need to be.
B
Yeah, 100%. We have absolutely seen that benefit.
A
Yeah, yeah. And for the overachievers out there: I've talked openly about my career, what I did wrong and what I did right over the years. One of the things that drove me was that if somebody working for me, a consultant or contractor or anybody, came up and said, you don't understand this, I'd take the books home, and the next time they saw me I'd know a lot about the topic. But it's exhausting. Operating at that level is exhausting and overwhelming at times, and as I got older, it got harder. John and I have talked about this: when Gemini came out, I learned a programming language in an afternoon. Am I going to be perfect at it? No. But I got past all the syntax problems, all the things you had to learn. I came up and wrote a program, and the AI walked me through it, so I had a really good understanding of how to program in Rust in an afternoon. And everyone will tell you there's a reason they put me in management, and it wasn't because of my programming skills.
C
It was to get you out of programming.
A
Yeah, I think they wanted me in project management to keep me out of coding. But at the same time, we have this ability to learn new skills and experiment with them, and I think that's being missed. Of all the things you can do, everybody thinks about AI like a chatbot. I was listening to the Senate committee, and everybody's talking about chatbots. That's fine, but what about the other things you can do to expand your knowledge, to learn, to try things? Maybe that's one of the cures for anxiety. Krish, I want to ask you about another thing, and John, I'm going to ask you the same question. We're all human, and we hear about the anxiety-generating things about AI: what we hear about children, what we hear about people becoming dependent on it. How do you cope with that level of anxiety, which I think hits us all if we're parents and members of society? How are you dealing with that?
B
I have two daughters, one just finishing university and the other in grade nine, and I see the real field-level experience of working with them. The one in high school is natively in AI. My older daughter is in university, in her final year, and she is adapting to AI; she has obviously seen the journey, and she's learning AI as we all are. But the younger one, going back three or four years to when she was in middle school, by the time they started using computers they were all in AI. It's the native way of using it, so I call them the AI natives in some way. And honestly, they're not even using search anymore, sometimes to our peril, because if you use LLMs for every single question, you are probably not using the resources effectively; there's a cost to using LLMs, in tokens and all of that. So to me that's a positive thing, but I obviously have the same concerns about what it means for their future: what kinds of work and tasks will be relevant for them? I don't have an answer, because today one area of work might feel secure, whether you have a certain skill set, whether it's in medicine or law. But even law is disrupted. The question would be: do we need as many lawyers as we have today to make the types of decisions and do the synthesis we'll be doing tomorrow?
A
Everybody's trying not to smile and make a lawyer joke. I'm proud of your professionalism, gentlemen.
B
So honestly, Jim, I don't know the answer. Sometimes people ask me, what is a future-proof profession? At this point, I don't know. I can name the skills, I know the traits: it's probably more decision-making, more analytics, more knowing how to ask the right questions, because the answers are all out there. Asking the right questions is where the skill lies now. But what types of jobs will evolve in the future? The work is changing, the workforce is changing, and that means the worker will change. I just don't know at this point. It's hard to predict.
C
Yeah. And tied in with what Krish was saying, it's the skills, not the knowledge. I go back to when computers first came out: people went from handwriting or typing everything to doing all of their note-taking on a computer, and this is another transition. One of the fears I have, and quite frankly, Jim, this ties to the discussion you and I were having earlier, is that as you get older, you want to do things to exercise your brain. To me it's the same with the younger generation going through school: if they can just ask an AI for an answer instead of having to do some research, that makes me a little nervous. That's the negative side. The positive side, though, is that the more detailed you can be in the questions you ask your AI, the better the answer is going to be, so it will train people to ask better questions. And going back to what Krish was saying, I think it's going to become more about knowing how to use things and being able to grasp concepts; this is really just a new tool we have to learn how to use. I have 23-year-old twins. One is in the trades; one graduated with her BCom in marketing. My son in the trades had no problem finding a job; he got one right away and he's busy 95% of the time. My daughter in marketing had trouble finding a marketing job, because now everybody's using AI for marketing, so she's gone into a different line of business. It's not that she can't find a job; it's that the job she found is in a different line of business. People are going to have to learn to adapt. And where is this going to end up? Who knows?
I'm a firm believer that as humans we always go overboard and then tone it back a bit, and I think we're going to be in the same boat with AI. Everybody's trying to do everything with AI, and, as Krish said earlier, some things just make more sense for a human to do. So I think we'll figure out what that balance is, and as a human race we're going to have to figure out how to adapt to that new balance.
A
Yeah. One thing that jumped out from what you both said, and the thing that troubles me, goes back to this idea of the sniff test: being able to assess, and not openly trust, everything an AI tells you, just as you wouldn't trust any other piece of information. Because in the society of the future, AI is the easiest thing to manipulate. If we think controlling the media is something that's happening, and it is, that's a natural thing; media has its own way of thinking about and presenting information, whether it's social media making us angry or whatever else we think about media. But with AI it's really easy to put information together, and I've tested it: what does this AI say about one politician or another? You'll find actual differences. So who's trying to please whom? The fact that the potential is there means AI doesn't just have the capacity to make mistakes or misinterpret, and it's always going to have that as a feature, not a bug. People have to have that critical thinking, and I don't see it in our educational system. That troubles me. But more than that, there's an idea our kids may have, and maybe a lot of us have, that I heard Geoffrey Hinton express beautifully. It's a paraphrase of that old line: if you think you understand quantum physics, you don't understand quantum physics. He said, if you think you understand the future of AI ten years out, you're wrong. That's all I can tell you. It's like looking through gauze: you can see some stuff clearly; the rest of it's pretty messy. When one of the godfathers of AI tells you he can't think ten years ahead and be right, we shouldn't feel bad. I think that's something we all have to grapple with.
B
No, I think you're right, Jim. But I would say you can also use AI to your advantage there. Yes, it's easy to commit fraud with AI, but I expect that in the future we will use AI to detect it as well. There are already capabilities like that: scanners that can detect whether something is a deepfake, or whether the text you just got probably came from an AI bot. I expect that will become more and more part of our lives. It's the usual thing: any new technology that has come into the world has been used for both good and bad, and the balance of that is where the progress of society lies.
A
Yeah.
C
Yep.
A
My kids are grown, they're in their 30s now, but I was actually asked the same thing about what I hoped for them. I came up with a piece of advice that I think actually works, and because they're my kids, they won't listen to me, so it's for other people who've got kids out there. I get asked the same question about what to do about work, and I say there are two things to concentrate on: what are you passionate about, and where can you add value? Value doesn't have to be money. It can be something you trade for money, but it can also be making a difference in your society, in your world. As I look back, I'm crappy at career planning; I've just stumbled into everything I've done. But I tried to follow the things I was passionate about and where I thought I could make a difference. And if we could get behind that attitude, we'd ask a different question about AI: how do I use it to help me get there?
B
Yeah, well said. Well said.
A
Krish, you advise a lot of businesses, and you have some foresight into this. What is your biggest piece of advice for companies right now who are looking at this? They may be partway through the journey, or they might just be starting. What would you say to them?
B
Yeah, this is what I'm saying in pretty much every conversation with CEOs, chief AI officers and chief data officers: we need to move beyond experimentation. I already see that happening, but let's jump into proving something, and use AI where it matters most and delivers the most value. Focusing on value is important. We have all proven in the last two years or so that AI works; there's no need to keep proving that point. Now it's about how it works in your environment, how it unlocks value, how you get value, how you can do that ethically, and how you can do it keeping in mind talent, people and change. That's how you can be successful. That's usually where I see most organizations stuck: everyone has done some experimentation, and now, how do we scale from there and realize value? That's the ask from the board and from the CEO: show me the value now. So let's focus on the value and do it in a way that's future-proof, ethical, and good for the people, the community and everything the organization lives under. That's my advice, Jim.
A
Fabulous. Thank you, sir. I'm going to wrap there; I don't think we can do any better than that. Our guest today has been Krish Banerjee, managing partner for AI in Canada at Accenture. Thank you so much for joining us, Krish. This has been really great.
B
Thank you, Jim. I hope you enjoyed it. John, great to meet you, and great conversation again. I'll put a small challenge to you: I would like to see you hosting a podcast with agents talking to you.
C
Oh.
A
Oh, come on, let's do that. We'll do that.
B
Okay.
A
Yeah, we'll talk. Yes, yes is the answer. Well, that's cool.
B
The guest.
A
Yep. We're going to do that. John, get on us.
C
Yes. I was thinking maybe Marcel.
A
Yeah, yeah. Thanks again, Krish, and thanks to you out there listening to this program. It's been great to have Krish with us. Hopefully we've given you something to think about, some insights. Don't hesitate to contact us and send us your ideas. Even if you're an AI, you can reach me at technewsday.com or technewsday.ca, take your pick; just go to the contact-us form and drop a note. And like I said, even if you're an AI agent, tell us what you think. Have a great weekend, and we'll be back on Monday with the tech news. We'd like to thank Meter for their support in bringing you the podcast. Meter delivers full-stack networking infrastructure, wired, wireless and cellular, to leading enterprises. Working with their partners, Meter designs, deploys and manages everything required to get performant, reliable and secure connectivity in a space. They design the hardware and firmware, build the software, manage deployments and even run support. It's a single integrated solution that scales from branch offices, warehouses and large campuses all the way to data centers. Book a demo at meter.com/CST. That's M-E-T-E-R dot com slash CST. I'm your host, Jim Love. Thanks for listening.
This special episode, a crossover with Project Synapse, explores the current state and implications of artificial intelligence (AI) and cybersecurity in business, focusing on the rise of "AI anxiety." Host Jim Love, guest Krish Banerjee, and co-host John Pinard discuss the challenges and opportunities of integrating AI—especially agentic AI—into business processes, the explosion of AI agents and platforms, organizational anxieties around adoption, practical advice for leadership and workforce development, and the future of work as AI evolves.
Google, Microsoft, and Meta are in an intense cycle of innovation, with their AI tools (Gemini, Copilot, etc.) rapidly improving and becoming more seamlessly integrated into productivity suites.
Agent-based systems are becoming increasingly important, signaling a shift from classic chatbots/personal assistants toward more autonomous, agentic AI for business processes.
Recent acquisitions like OpenAI’s purchase of OpenClaw or Meta’s acquisition of MoltBook highlight the move toward social networks and enterprise agent networks.
Nvidia’s pivot from chipmaker to AI infrastructure powerhouse is positioning them for enterprise agent-based software dominance.
Organizations should recognize that not everything needs to be automated with AI; some tasks are better suited to RPA (Robotic Process Automation) or simple process redesign.
Human oversight ("human in the loop") remains critical for verification and to manage risks.
Agentic AI changes security risk: If an agent with broad access is compromised or misconfigured, the scope for damage is far greater than with a single human user.
AI systems require robust guardrails and continuous reasonableness verification, especially in regulated sectors. Sometimes perfection isn’t possible or necessary if proper mitigations are in place.
“AI anxiety” stems from fear of job loss, irrelevance, or lack of understanding.
The antidote: communication, education, skill-building, and a culture of open experimentation.
True digital literacy now means not just using tools, but developing critical thinking to verify information and knowing when not to use AI.
The generational divide is evident: Younger users are becoming “AI natives,” but this comes with new risks, such as over-dependence on generative tools.
The true impact of AI on the workforce is unpredictable—even to experts.
For individuals: Align work with personal passion and seek to add value, whether monetary or societal.
On AI Uptake in Business:
“We have the promise, we have the ability, but we're not having the uptake.”
— Jim Love [10:22]
On Security Risks of Agentic AI:
“If Sally's running an agent that can run 24/7/365 and do things dramatically faster ... now you run into the problem of an agent having access to data they shouldn't.”
— John Pinard [17:16]
On Using AI Where Appropriate:
“I think that's a realization we need to absorb...it doesn't always need to be AI. ...Sometimes we run after a bit of an AI wash. Let me wash everything with AI and we don't have to.”
— Krish Banerjee [20:28]
On AI Anxiety & Training:
“When [training] does not happen, it leads to confusion, it leads to distrust...In consulting, people don't always jump and talk about job loss, but they think about relevancy … and I think we address that with the training, with the knowledge, giving people the tools, letting them use it.”
— Krish Banerjee [27:01]
On the Next Generation:
“For them, they don't know and honestly they're not even using search anymore and sometimes to our peril... because if you use LLMs for every single question, you are probably not using the resources effectively.”
— Krish Banerjee [44:32]
On the Uncertainty of AI's Future:
“If you think you understand the future of AI 10 years out, you're wrong. That's all I can tell you... you can see a little... the rest of it's pretty messy.”
— Jim Love, paraphrasing Geoffrey Hinton [51:30]
Closing Advice for Business Leaders:
“Move beyond the experimentation...use AI where it matters most and gets the most value. ...How do you get value; how can you do that ethically; how can you do that keeping in mind talent, people, and change?...Let’s focus on the value and do it in a way that’s future proof ethically and for the people and for the community.”
— Krish Banerjee [54:34]
For further questions or topic suggestions (even from your AI agent), contact the show at technewsday.com or technewsday.ca.