
John Hines is the Senior Director of Enterprise Business for the Asia-Pacific and Japan region at Lumen Technologies. With more than 25 years of leadership experience in IT solutions and cybersecurity, John has led and grown businesses across diverse global markets. His expertise spans cybersecurity, risk management, network and cloud solutions, consulting services, and new market acquisitions, serving industries such as manufacturing, government, healthcare, transportation, financial services, energy, and retail. At Lumen, John is responsible for driving enterprise growth by delivering secure, agile, and innovative technology solutions that help customers connect people, data, and applications seamlessly. He is a proven leader in building high-performance teams, modernising operating models, and executing strategic programs that deliver measurable business outcomes. His work has included partnering with global law enforcement on cyber threat takedowns through Lumen’s Black Lotus Labs.
A
It'll right itself over time as organizations understand that perhaps AI is not the answer to everything. Perhaps you do need some humans in place to do certain things. Perhaps it wasn't going to be the silver bullet and enable it to operate with two people. So I think it will sort of level out at some point, but I think that will be different across different industry verticals and different use cases and different organizations.
B
This is KBKast,
A
primary target for ransomware campaigns, security and testing and performance risk and compliance. We can actually automate those, take that data and use it.
C
Joining me back on the show is John Hines, Senior Director for Enterprise Business at Lumen Technologies. And today we're discussing why organizations need a secure AI-ready network to power AI workloads. So John, welcome back.
A
Thank you very much. Thank you for having me.
C
And I know it's been a little bit of time since you've been on the show, you've changed roles, so it's good to have you back. But I really want to start with: why do organizations today need a secure AI-ready network to power AI workloads? What are your thoughts when I ask you that question?
A
Yeah, certainly. So we're seeing the adoption of AI across Australian enterprises rising pretty rapidly, but it's outpacing technical preparedness in many cases. And what we commonly see is that companies aren't struggling with AI itself, they're actually struggling with infrastructure that wasn't designed for the scale of AI workloads. And in fact, a study from the CSIRO found that half of Australian organizations are already using AI, but a separate McKinsey report shows that much of the remainder is stuck in the pilot phase. So the gap between production and pilots is often caused because traditional networks just can't support real-time movement of massive data sets across hybrid and multi-cloud environments. AI workloads, as you know, require low latency, high throughput and consistent reliability, and without that, even the best models are going to underperform. And at the same time, AI is reshaping the cyber risk landscape. As AI agents become more embedded in enterprise workflows, the attack surface expands. AI-powered threats are becoming more and more sophisticated, and any data integrity issues or latency issues can quickly undermine the trust in AI outcomes. So that's why having an AI-ready network is absolutely essential. It creates an environment of performance, trust and resilience, and it all needs to be sort of baked in from the beginning. That's what helps organizations move beyond that pilot phase into a more sustained, scalable business outcome. And also, globally, we're seeing more organizations align with frameworks like ISO 42001. Here at Lumen, we have dozens and dozens, if not hundreds, of AI projects and programs happening within the company. Lumen's a Fortune 500 global company. As you can imagine, for a telecommunications and technology company,
you know, there's a significant amount of internal process governance, those sorts of things, as we provide these services to customers, and through a number of acquisitions over the years, integration of systems and those sorts of things as well. So we've got a number of programs internally within the company to streamline a lot of those systems. But also from an external perspective, how do we service our customers better? How do we use data to provide a better service to those customers, a more secure service from that perspective? So we've spent a huge amount of time from an AI governance perspective, and we ourselves are ISO 42001 certified. I think we're one of the first organizations in the world to have gone through that certification.
C
Okay, so I want to go back to the report for a moment. So from memory, you said 50% in pilot, 50% in production. One was a CSIRO report, and another one was a McKinsey. So when we say pilot, do you mean that they're buying these tools from vendors like Lumen Technologies and friends, or are they still dipping their toe in the water, still looking into it? People just have different versions of pilot, though, that I'm seeing.
A
Yes. Yeah. So I think it's identifying an area within the organization that AI can potentially assist with and then, I guess, piloting it. What type of data are they putting into it? What expected results are they looking to see out of it? And testing the data as well, I think that's important. I mean, AI is only as good as the data that you put in there, and if you're drawing from systems with inconsistent or incorrect data, then your outputs are not necessarily going to give you the outcome that you're looking for.
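That point about AI being only as good as its input data can be sketched as a simple pre-pilot quality gate. This is a hypothetical illustration in Python, not anything Lumen ships; the field names and the 5% threshold are invented:

```python
# Hypothetical pre-pilot data-quality gate: reject a data set if too many
# records have missing or empty required fields, before anything reaches a model.

def quality_gate(records, required_fields, max_bad_ratio=0.05):
    """Return (clean_records, report); the gate fails if the bad ratio is too high."""
    clean, bad = [], 0
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            bad += 1          # record is unusable as-is
        else:
            clean.append(rec)
    ratio = bad / len(records) if records else 0.0
    report = {"total": len(records), "bad": bad, "bad_ratio": ratio,
              "passed": ratio <= max_bad_ratio}
    return clean, report

records = [
    {"customer_id": 1, "region": "APAC", "spend": 120.0},
    {"customer_id": 2, "region": "", "spend": 80.0},   # inconsistent: empty region
    {"customer_id": 3, "region": "APAC", "spend": 95.0},
]
clean, report = quality_gate(records, ["customer_id", "region", "spend"])
print(report["bad"], report["passed"])  # 1 False
```

One bad record out of three fails a 5% threshold, which is the point: the gate surfaces the data problem before the pilot reports misleading results.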
C
Okay, so just to touch on that a little bit more. I was interviewing the president of NetApp last week, and he was saying that he was sitting at this roundtable, and some CEO turned to him and said, you know, we've got to do all this, but where do I start around the data? So you can bring AI to your data, right? So do you think that people feel overwhelmed about how to modernize that data, get it into a proper data lake, to be able to leverage it with AI? Because at the moment some of it's all scattered, it's structured, it's unstructured. What are your thoughts then on that? What are you hearing from your customers?
A
Yeah, so I think there are probably a couple of approaches. At the board level, you know, AI is a big thing. Everybody's talking about it. Boards are looking at it to become more efficient, to save cost, those sorts of things, and therefore are sort of pushing that pressure down onto the business. And then the business is having a look at where AI can be most effective within that organization, either driving efficiency savings or providing better customer service or whatever outcome they're looking for from that perspective. But we've seen a couple of different sort of approaches. There's the kind of all-you-can-eat one, where people just go crazy and put everything into the AI model. They're running 27 different use cases at the same time. We've even seen organizations where six different parts of the organization are running six separate pilots in isolation of each other, and are sort of missing out, I guess, from an efficiency perspective, from a learning perspective, and those sorts of things as well. And then we've seen other organizations who take the time, maybe take a smaller bite, pick something less critical, and run that through the pilot process to fruition. But what we are certainly seeing is that the underlying infrastructure, so your networks and where your data resides, whether it's in multi cloud or whether it's hosted in your own environments, those sorts of things, adds a level of complexity. And because of AI's thirst for speed and data and those sorts of things, if you're not planning that underlying infrastructure properly, it'll bite you down the track, because you're constantly redesigning after the fact, which we see a little bit with some pilots.
C
So going back to your comment around all you can eat, do you think people out there, so customers, companies, are scrambling to understand what AI works? And I caveat that by saying, because now there's this innate pressure to be like, well, we've got to be better than our competitors, we've got to get stuff out fast. I'm hearing that a lot, I'm seeing it a lot. Is that why it's a little bit of a scattergun approach at the moment, seeing what sticks? I don't mean it in that way, but what I do mean is that it's still new territory for all of these companies.
A
Absolutely. And I think depending on the sort of size and maturity of the organization and depending on the pressures that are coming down from above, definitely sort of dictates how organizations are sort of moving forward. And as I said, there are those that kind of want to be the best and want everything to be AI. They're the ones, I think, that are running into a lot of challenges because they're not sort of updating the infrastructure that underlies AI. They're not planning out the processes properly. They don't necessarily have the right governance frameworks in place. From a security perspective, there's a whole range of challenges for organizations there.
C
Okay, I want to talk about this a bit more. So what do you think people don't get when it comes to the infrastructure that underpins AI? What I mean by that is, if I'm like, okay John, I'm going to go build a house, I want it to look like this, but then it's like, I can't think about the roof until I have the structure ready. So do you think people are very focused on the roof when it's like, well, you don't have a house up yet? So are they jumping the gun, would you say?
A
I think, you know, they're extremely excited by the opportunity. They're going out there deploying large language models, not necessarily thinking about the broader infrastructure sort of implications. AI requires high speed, low latency connectivity, needs to be designed for continuous data movement. You've got end to end security, protecting, you know, both the data and the AI agents that are accessing it. You have, you know, resilient and intelligent monitoring. You need to be able to detect anomalies faster. You know, there's the multi cloud edge integration piece as well. You know, AI doesn't generally live in one place. It may be scattered across multiple sort of instances from that perspective as well.
C
So yeah, and so what would you say, and I know you've been traveling a lot for work, so you're obviously in front of customers, what do you think people sort of don't get around the infrastructure that supports AI?
A
I think definitely from a network perspective. If you have a look at traditional networks, they're designed for uses and data that are fairly predictable, human-interacted data. So you can design and sort of shape a network from that perspective, and it's fairly predictable. I think with AI, as AI spins up, and particularly with generative AI, that adds a huge amount in terms of bandwidth requirements, in terms of latency requirements, and those sorts of things as well. So without sitting down and mapping out and understanding what that looks like, organizations can hit the wall pretty quickly from a network perspective because they haven't forward planned.
C
Okay, I really want to talk about costs now. We want more with less, and people are looking at that. There's been a lot of layoffs in this space because they're redirecting and redeploying investment into AI and other areas, et cetera. And there's been a lot of development in certain areas and money pulled from certain streams. What do you think people, at the end of the day, really care about when it comes to cost? Right. I've got a few more specific questions after this, but just generally speaking, what are your sentiments here?
A
You know, there's a huge opportunity with AI, and there's a huge opportunity for organizations to save on cost, whether that's people cost, efficiency cost, a whole range of different things from that perspective. But then how do you measure the return on investment for that cost? And there are some interesting statistics out there. I think I read something recently that over two thirds of companies report that they haven't been able to achieve a measurable return on investment for their AI investment. So how do you measure that? There's a huge amount of hype around AI, everyone's kind of jumping in on it, but realistically, what is the cost of it, and what is the cost benefit from that perspective as well? And I think that's what organizations struggle with and sort of fail to measure properly upfront. And I think part of that is determining what the outcome is that you're looking for. Are you looking to drive efficiency and save cost, or are you looking to provide a better service, which doesn't necessarily hit your bottom line in financial terms, but helps drive more business out the other end because you're providing a much better product or offering to your customers in that particular market? And then from a hidden cost perspective, we see things like inefficient infrastructure use: there are heavier demands on networks, bandwidth, latency, the movement of data, resiliency, all of those sorts of things. If they aren't planned out properly, the costs will stack up very quickly after that as well. And then from a security perspective, without baking that security in at the beginning, then, as we all know, baking security in further down the track is quite significant from a cost perspective as well.
And then the other interesting point I touched on briefly: we see in a lot of organizations different groups just working in isolation, in silos, deploying different AI models for different things, be it marketing or the logistics folks or other parts of the organization. And I think you need to bring everyone together under the same framework and have a consistent approach across your enterprise, as opposed to working in isolation. Because there are efficiencies from a cost perspective when utilizing AI, and you just get a better outcome if the entire organization is working together, not separately, in isolation.
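The ROI-measurement point above can be made concrete with a toy calculation. All figures below are invented for illustration; the takeaway is that a return that looks positive on the direct spend alone can flip negative once hidden infrastructure and security costs are counted:

```python
# Toy AI ROI calculation: (benefit - total cost) / total cost,
# where total cost includes any "hidden" costs alongside the direct spend.

def ai_roi(benefit, direct_cost, hidden_costs):
    total_cost = direct_cost + sum(hidden_costs.values())
    return (benefit - total_cost) / total_cost

# Headline case: ROI looks healthy if hidden costs are ignored...
print(round(ai_roi(500_000, 300_000, {}), 2))            # 0.67
# ...but turns negative once network upgrades and a security retrofit are counted.
print(round(ai_roi(500_000, 300_000,
                   {"network_upgrade": 150_000,
                    "security_retrofit": 120_000}), 2))  # -0.12
```

The same arithmetic is why defining the target outcome up front matters: you cannot compute a return without agreeing on what counts as the benefit and what counts as the cost.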
C
So going back to your comment around the achievable measurable return you were reading about. Do you think people have just, well, I mean, if you're a company and you've laid off like 20% of your staff, and I know that's not attributed directly to AI, but at least you've gained some ROI in terms of investment. Because if you're not leveraging those people and you've invested in AI, isn't there somewhat of a return there, because you're not overextending then? It's not like you've kept the 20% and then invested a bunch of money into the AI as well.
A
Yeah, I think for a lot of organizations it's still kind of early days from that point of view. So in terms of removing some of that human cost out of their organization, they aren't necessarily there yet. I think they're still testing the AI side of things and the ramifications, potentially. And I think there have been some organizations who, I won't name names, have removed people from their organizations only to figure out that they actually need those humans, and have kind of rehired them. So again, it all comes down to planning and measuring and making sure you're in the right place before you take any drastic measures around the human element.
C
Selling SaaS to enterprise, you'll be asked about your security credentials sooner or later, likely before the contract's even signed. Vanta helps SaaS teams prove their compliance with frameworks like ISO 27001 and SOC 2 without losing months to manual prep. Visit vanta.com/kbcast, that's V-A-N-T-A dot com forward slash kbcast, to learn more. And so, talking about the AI bandwagon, and I've heard that multiple times as well, what do you think is going to unfold, like, now-ish, given what you're saying and the conversations you're having? As we've sort of discussed here already, some people are on board with it a bit earlier than others, some people are still toying around with it, some are still trying to find their feet, dipping a toe in the water, assessing it. And this is a bit of a big question, but you're talking to people across the space: would you say, across the world, this will become just an everyday thing that people are going to start talking about? Because then it was virtualization, it was cloud, then it was cyber, then it's this, then it's quantum. And I think I read something today, John, about investors starting to pull back on the whole AI bubble. What are your thoughts?
A
It's kind of an interesting dilemma, and I think you're right. We've gone through many, many hypes. I've been in this industry for 30 years and there have been dozens of hype cycles that we've gone through. I think it will peak at some point and taper off a little bit. There's a huge amount of money from an investment perspective that's getting pumped into AI, and any company with AI in their name is kind of getting funding, lots of funding, and those sorts of things. And then I think it'll right itself over time as organizations understand that perhaps AI is not the answer to everything. Perhaps you do need some humans in place to do certain things. Perhaps it wasn't going to be the silver bullet that's going to save an organization a fortune and enable it to operate with two people. So I think it will level out at some point, but I think that will be different across different industry verticals and different use cases and different organizations. And I'll give you an example. We have an insurance customer in Japan, and they are running a trial at the moment. They have mapped all of Tokyo, which I think has a population of about 25 million people during the daytime, and some of the surrounding areas of Tokyo. And what they've done is taken all the historical car crash data over the past, I think it's 60 or 70 years, and fed that into their model. They've taken all the live traffic data on a regular basis and mapped all of that out. And now what they're thinking about is, depending on where you live, what route you take, how many accidents happen at those sort of intersections within a certain radius around your house.
They're talking about whether they can potentially do almost real-time insurance, so, you know, almost weekly insurance, based on the data that they have and the risks that they see, depending on the area you live in, those sorts of things. So I think there are definitely some interesting use cases there. Is that good for the general population, or is that good for the insurance company, or is it good for both? I guess time will tell. My 18-year-old daughter, I pay her car insurance, so I'd hate to move to a model where it's based on her driving patterns and so forth at this point in time. But yeah, there are certainly some interesting use cases out there. And I think as time goes on, whether that experiment works for them or it doesn't, I guess it comes back to the financials around it. Would they make more money with real-time insurance, or would they make less money, depending on the data that they gather?
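As a rough sketch of how such a route-based premium might be computed, here is a toy Python model. The intersections, accident counts, and pricing constants are all invented for illustration; a real insurer's model would be far richer:

```python
# Toy route-risk pricing: a weekly premium loaded by the number of
# historical accidents recorded at the intersections on a driver's route.

BASE_WEEKLY_PREMIUM = 20.0    # assumed base rate, dollars per week
PER_ACCIDENT_LOADING = 0.002  # assumed loading per historical accident on the route

# Historical accidents per intersection (entirely hypothetical data)
accident_history = {"shibuya_x": 480, "ginza_4": 120, "suburb_a": 8}

def weekly_premium(route):
    """Premium grows linearly with accident exposure along the route."""
    exposure = sum(accident_history.get(ix, 0) for ix in route)
    return round(BASE_WEEKLY_PREMIUM * (1 + PER_ACCIDENT_LOADING * exposure), 2)

commuter = weekly_premium(["shibuya_x", "ginza_4"])  # busy daily route
hybrid   = weekly_premium(["suburb_a"])              # quiet two-day route
print(commuter, hybrid)  # 44.0 20.32
```

Fed with live traffic and decades of crash history, the same shape of calculation is what would let premiums be repriced weekly per driver, which is exactly the trade-off discussed above.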
C
Yeah. Okay, so that's interesting. So you're saying, hypothetically, if you drove to work and you lived in an area of Tokyo, maybe 40 minutes away, you've got to drive on a freeway, you go a certain route every day, you do it a minimum of 10 times a week to and from work, therefore your insurance will be mapped against that. And so maybe you might pay a bit more versus someone who's working hybrid, maybe only going into the office two days a week on the same route.
A
Yes, yeah, but also, you know, you travel through this intersection and in the last 12 months there's been 48 sort of head on collisions there because of the way the road's configured or whatever, then because you're traveling along there, that's a higher risk and therefore you should pay a higher premium because the likelihood of you crashing is higher. So it's a little bit sort of deeper than that. And then, you know, you sort of think about what other data they can then feed in there. I think it'll be quite interesting. And then I think if you think from an insurance, health insurance perspective as well, some of the data that can start to be gathered about your health, that's potentially a challenge down the track as well, I think.
C
So what are your thoughts then on this? I'm just curious to hear where your mind goes. Because I'm interested in this, because again, that's going to potentially give money back to the consumer, right? Like, at the moment, with the cost of living, everyone's focused on how they can save money. So I don't think that's a bad thing. But are you sort of saying it might get to the point where it's sort of overdoing it?
A
Perhaps, potentially, yeah, there could be that sort of overstepping. I mean, again, it comes down to the use case, I think. If you look at the sort of hospital system, I think there's some great opportunity there to help identify and diagnose illnesses, challenges, those sorts of things at scale, which is great, it gives back to the community. And then the insurance example that I just gave, that's potentially good. If you live in a good area and there aren't a lot of accidents around you, fantastic, you're paying cheaper insurance. But for those who aren't, they're potentially penalized for car insurance based on where they live and how likely they are to have an accident. So I think there's definitely a spectrum there, and I think as time goes on, we'll start to see what that looks like.
C
Yeah, that's interesting. I mean, you could always potentially go a different route and not go through the intersection, but I mean, that's getting really prescriptive now.
A
Yeah. Does someone else then spin up a business that counters that and uses AI to find the cheapest path from an insurance perspective? So, yeah, it'll be fascinating to see where things head.
C
So then going back to the verticals, would you say there are any verticals or areas that seem more advanced than other sort of arenas? What are your thoughts there?
A
I think the FSI vertical, the banking vertical. I know a lot of the banks, and particularly in Asia as well, are adopting AI. From a customer interaction perspective, from a customer service perspective, I think there are some great opportunities there, particularly to, not target you as a customer, but provide you a better service, a more personalized service, based on some of the things you've done. But again, you know, I use Gemini, and my friends call me Hinesy. My name's John Hines, and I use Gemini for a number of different things, anything from helping the kids out with some of their schoolwork and uni work to just researching other things. I have never put Hinesy into Gemini at all. And now if I make a query, Gemini is calling me Hinesy. So, you know, that's the other part of it as well. How much data is being gathered at scale across multiple sources about you? And then where's that data going to reside? What's it going to be used for? And is it going to bite you at some stage down the track?
C
But then can I also ask, and I know I'm hearing this argument online and I get that, but do you think that people are then going, well, what about Google over the last 20, 30 years, with people plugging things into Google?
A
Yeah, no, absolutely. To be honest with you, I think that data's already out there. But then how is that data used? And could you use AI to quickly build some maps about individuals and those sorts of things as well? And then where does that go from there? Who has access to it down the track?
C
And would you say that is a concern for some of your customers? Like who has access to that down the track or what happens to that down the track? Are those questions being asked?
A
One piece of advice that we give is that you really need to have a governance model. You need to set up an appropriate framework. People talk about AI in the SOC as an example, threat actors using AI to attack and SOCs using AI to defend, and those sorts of things as well. They're relevant issues. But I think the bigger issue is how do you protect your large language model from being poisoned, from somebody manipulating the data in there, accessing the data when they shouldn't be able to. Where is that data going? I think that's a big concern. And I think down the track we'll probably see some leakage from that perspective, where perhaps organizations haven't put those guardrails in place, and all of a sudden we're interacting with those companies and providing those companies with data. Potentially there's a plethora of data there, as I mentioned before, that's been collected on you, and that's then breached your privacy.
C
Yeah, this is interesting, and it's something that I have spoken a little bit about. So hypothetically, if someone, I don't know, poisoned data and it had false information. So people are maybe asking questions around some CRM thing, but it had fabricated, I'm using a bad example, but fabricated data. Then it came back with, you know, Carissa lives in Sydney, Australia, and she does these things, which is not right. Do you think that we're going to start to see more of this? Because the part that probably concerns me is the discernment then of how factual or not factual it is. Now, admittedly, this is more across, like, OpenAI and platforms like that, but if you're looking at a closed, small language model inside a company, you sort of have a little bit more trust in it, thinking, well, there are a bit more guardrails, it's a bit more secure, it's not, you know, OpenAI. Do you think people are starting to put too much trust in it, in case this does happen? And how would you kind of ever really know? Right? Because if it's got thousands of records and data and all this sort of stuff over there, and you said 60, 70 years, like, I don't know, in 1979 so-and-so went through an accident in a blue Mitsubishi car. I couldn't remember that.
A
Yeah, yeah, no, definitely.
C
So would you. Do you agree that that's like, what are your thoughts here?
A
Yes, I think it's definitely potentially an issue, both externally in terms of your data going out, and, the other interesting point, organizations deploy models internally within the organization as well. Internally, say the payroll folks are running all the payroll of all the employees through their AI model and doing some modeling and trying to make things more efficient. From that perspective, that model now has all of that payroll data from the CEO all the way down and across the organization. If you don't have the right guardrails in place and other employees start to query that model, I don't think it's too difficult to figure out very quickly what everyone's getting paid, who's getting paid what, and then move across to other parts of the organization, people's HR files and those sorts of things. Potentially that's a huge challenge for organizations as well. And that's why it's critical for you to have that framework in place. Are you using it ethically? What's being queried in it? What's being put into it? I mean, there are many, many examples of models being poisoned just by consistently asking the model different questions and almost convincing the model to get to the right answer and get you that bit of data. And there's an example in the US. There was a researcher, and I think it was one of the US car companies that had a chatbot, and this guy was able to convince the chatbot to give him a $50,000 car for $1 and to generate a contract for him to purchase that car, with language in there that stated they couldn't rescind that offer to him. Now, obviously he didn't go ahead and get that car, but that's somebody external getting into a company. So unless you've got those guardrails in place, it's quite easy to manipulate a chatbot to give you what you're asking for.
And then you can equate that internally within an organization as well with other models.
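One common pattern for the guardrails described here is to keep a deterministic policy check outside the model, so that whatever the chatbot is talked into saying, the offer is vetoed before any contract is generated. This is a minimal, hypothetical sketch; the product name and price floor are invented:

```python
# Deterministic guardrail outside the model: no chatbot-generated offer
# below a configured price floor ever reaches contract generation.

PRICE_FLOOR = {"sedan_x": 45_000}  # minimum sale price per product (assumed)

class GuardrailViolation(Exception):
    """Raised when a model-proposed action breaks a hard business rule."""

def approve_offer(product, offered_price):
    floor = PRICE_FLOOR.get(product)
    if floor is None:
        raise GuardrailViolation(f"unknown product: {product}")
    if offered_price < floor:
        raise GuardrailViolation(
            f"offer {offered_price} below floor {floor} for {product}")
    return {"product": product, "price": offered_price, "approved": True}

# The model may have been coaxed into "selling" the car for $1,
# but the offer is blocked before a contract exists:
try:
    approve_offer("sedan_x", 1)
except GuardrailViolation as e:
    print("blocked:", e)
```

The same idea applies to the internal payroll example: the access rule lives in code outside the model, so no amount of clever querying can talk the model past it.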
C
So this dude bought a whole car for one buck.
A
He didn't actually buy the car, because they did rescind the offer and it wasn't a binding contract, but he did it as an example to show that he could externally manipulate that particular model. And he did it over a period of time, by convincing the model that eventually it should be giving out cars for a dollar. So that was just a simple manipulation. So they pulled that chatbot down and threw it in the rubbish bin, basically. So it's a kind of fine line between providing that great service and opening yourself up as well, which is why it's so critical to have those security guardrails and have that governance model in place.
C
I think I was in Canada and I was listening to the company Genetech speak. Someone was mentioning that someone did this on a chatbot, but they won, because it had already done all this stuff and there was just no way out of it. It had already generated the contract, et cetera, and they just had to own it. I can't remember the example, but I was like, wow, that's going to another level. They just said, well, technically, on paper you've issued it because of this. Whether it's right or not, it's what's happened. And yeah, they had to absorb that. So, I mean, look, we could see cars going for $1, who knows? But I want to talk a little bit more about the fine line. So what I'm hearing is the conundrum really is the risk around potentially selling cars for a dollar, but then the competition, we need to get ahead of the other folks, because, like you said, you're going to be left behind. But then there's also the risk of the data poisoning and all these sorts of things as well. Plus, do we need a human in the loop? Do we not need a human in the loop? We want to be efficient. So do you think it's sort of like people are trying to work out, and I don't want to use the word cut corners, but it's like, well, we have to take something from somewhere. Are people trying to find what the equilibrium looks like right now? It does seem very unbalanced.
A
I think they are. I think for those that are kind of at the beginning of the journey, it really depends on what you're using it for. Is it externally facing, an interface like a chatbot or something like that? Or are you using it internally to drive efficiencies and potentially downsize your organization and save money that way? I think we're still in our infancy from that perspective as well, and I think it's going to take another 12 to 24 months for the kind of real story to come out: who's having success and who's not, who's failed. I think it's important for organizations to share these stories, to partner. We do a lot of roundtables at Lumen, we do a lot of cyber roundtables. We did one recently around supply chain risk management, but we'll certainly start to do them in the AI space, and I think they're a great opportunity. It's Chatham House rules, a great opportunity for like-minded CISOs from organizations in lots of different industry verticals to share some of their experiences on that particular topic, and where they've been successful and where they haven't. I do think cyber's a team sport. We need a lot more of that sharing and lessons and those sorts of things.
C
Okay, so John, I want to ask you: what do you mean when you say traditional systems cannot meet the demands of AI? Can you define a traditional system in your eyes?
A
Yeah, so if you think about traditional systems, they're designed for relatively predictable workloads and human-initiated traffic. So there's that predictability. You kind of have the bandwidth, and it doesn't necessarily need to be low latency. These traditional networks aren't built to support continuous, high-volume data movement. AI is hugely thirsty from a data perspective. And again, where does that data reside? In your own data center? Is it across multiple clouds? Those sorts of things as well. So what we're seeing is organizations with those traditional systems and traditional networks are struggling to move into the AI era without upgrading them.
C
What are people hesitant about? Because these things aren't easy. You know, you and I talking here today, it sounds simple, but it's a lot more complex in reality. So what is people's mindset towards this new territory? There's risk associated with it, and downtime, I hear, is another massive thing people are of course worried about as well.
A
Yes, I think it's the, you know, potential cost increase of improving your networks, improving where your data sits, all of those sorts of things. And I think those who are unsuccessful are those who haven't taken a step back and properly planned things out. Certainly with generative AI it's hard to predict what that data size looks like, what bandwidth is required. We've got a number of tools that assist from that perspective and help organizations map that out. But we've seen organizations who have failed on their pilots because they haven't made those investments in the infrastructure that underpins the AI project. So we spend a lot of time helping organizations plan that out, particularly from a bandwidth and a latency perspective. Where are their large language models? Are they in a public cloud, spread across multiple clouds, or in the organization's own data center? Those are the sorts of things you really need to sit down and map out. Because as you spin up those models, as data comes into them and all the analysis starts to happen, you can quickly run out of bandwidth, or you can have challenges from a latency perspective as well. And then trying to increase things down the track is costly. As you said, you might need some downtime for it as well, and that has a big impact on the business.
C
So what do you think moving forward, John? I know we discussed a lot of different things today: examples, use cases, maybe hesitation towards certain things, risks. What do you think is going to unfold now for the rest of 2026?
A
Yes, I think we'll see some great successes for a number of organizations. And they're the organizations that have done it properly. They've gone through that planning process. They've defined what outcome they want from AI. AI is not just a magical thing you throw out there and it'll solve everything. You really need to sit down and spend a lot of time mapping out and determining where AI may benefit your organization. But more importantly, what is the benefit that comes out the end of using AI? As I said, there are organizations that are just trying it on everything, and, you know, what's the return on investment from that point of view? So I think those who will be successful are those who have properly planned it out and are looking at it through a business lens, whether that's efficiency, cost saving, better customer service, a better outcome for the business. It'll be interesting to see, certainly through the end of this year. And then I think we'll see some of that benefit rolling out to consumers, whether that's easier interactions. You're not waiting on the phone for hours with your bank or with the government or with an airline trying to get something resolved. It'll know about you, it'll know what the challenges are, and hopefully it can resolve them pretty quickly as well. I think those are the things that'll be more publicized, because consumers will be out there talking about how great that is, how their interaction with XYZ Corporation was a lot better than it was 12 months ago.
C
And lastly, John, any sort of closing comments, final thoughts for our audience today?
A
Yeah, I think, you know, AI is an interesting thing. Everybody's talking about it. I was at the Gartner security conference a couple of weeks back and, interestingly, every single organization there had the word AI in there. I think we need to look through some of the hype and think about some of those real, live use cases. I think there's a huge benefit to be had. But I also think we need to be, you know, a little bit careful in terms of pushing forward too quickly and then not having that underlying infrastructure in place, not having those guardrails in place, not having an appropriate framework in place. And I think at some stage, from a cyber perspective, there will be some form of major breach, and it'll be interesting to see how that plays out.
B
This is KBKast, the voice of cyber.
C
Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI.Media to get access today.
B
This episode is brought to you by MercSec, your smarter route to security talent. MercSec Executive Search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and midsized businesses scale faster and more efficiently. Find out more at mercsec.com today.
Published: May 13, 2026
Host: KBI.Media
Guest: John Hines, Senior Director for Enterprise Business, Lumen Technologies
This episode tackles the “AI readiness gap”—the increasingly common scenario where enterprise enthusiasm for AI outpaces the technical and governance infrastructure required to make AI both useful and secure. John Hines returns to KBKast to share insights from his work at Lumen Technologies, discussing why having a secure, scalable, and well-governed network infrastructure is essential for successful enterprise AI deployments. The dialogue covers current trends, common pitfalls, cultural and business realities, emerging risks, and strategic recommendations for moving from AI pilots to production with intention and resilience.
John Hines underscores that AI alone won’t drive transformation—real results hinge on robust, secure, and scalable infrastructure, paired with deliberate governance and cross-functional collaboration. Despite the hype, AI is not a “silver bullet.” Organizations must resist the urge to rush into AI for its own sake and instead prioritize planning, risk management, and clarity about expected outcomes. As the hype levels out, the winners will be those who view AI through a strategic, business-focused lens—and who are candid about lessons learned along the way.
Final Quote:
“There’s a huge benefit to be had. But I also think we need to be, you know, a little bit careful in terms of pushing forward too quickly and then not having that underlying infrastructure in place, not having those guardrails in place, not having an appropriate framework in place.” – John Hines ([34:23])
For more resources and analysis, visit KBI.Media.