A
Hello everyone, this is Tom Uren. I'm here with another Risky Business News sponsor interview. Today I have with me Tony De la Fuente of Prowler. G'day Tony, how are you?
B
Good, good, how are you? Thanks for having me.
A
I'm well. Tony is the CEO and founder of Prowler, which is the open source cloud native security company. You've probably got a much better description than I do, but where I thought we'd kick off is that you maintain the open source Prowler software. And I kind of thought of you as a bit like Red Hat, where you're a company that's built around open source, but you have these add-ons that provide value for businesses, and you can sell that and that makes a company. But I was wondering how your business has been changed by the rise of AI, because so much of AI is like cloud workloads. So what's changed?
B
Well, I like the comparison with Red Hat, which is a company that I always loved when I started in computers. However, of course operating systems are in the cloud, but the cloud itself is different. It's a different paradigm and it's a different business. Right. Nowadays nobody says I'm going to build my application in Red Hat, or in Debian, in Windows, or even in Solaris. People say I'm going to build my applications, my workloads in the cloud, in AWS, in Azure, in GCP, etc. Right. So the cloud is the new operating system. And to secure that new operating system is why I built Prowler, which is currently the world's most widely used open source cloud security platform to automate security and compliance in the cloud, in any cloud. So that is what we do. If you think about AI and if you think about the cloud, AI and cloud are kind of family members, right? Because AI needs the cloud and the cloud uses AI for everything. Data is in the cloud, GPUs are mostly in the cloud, right? And APIs are in the cloud. So when you have to secure AI, it is much the same as securing the cloud and its data. It's not exactly the same because you have new services, new components, but at the end of the day it's part of the same mission, right? So for example, when it comes to securing cloud services for AI, like let's say AWS Bedrock, or others for other cloud providers, you have to take into account not only that service itself, but all the services around it: API endpoints or API gateways, blob storage, databases, and the services themselves. Like AWS Bedrock, for example, to make sure you have the proper guardrails in place, you have configured least-privilege API keys to access the service, you have protections against exposing sensitive information, et cetera.
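To make the least-privilege point concrete, here is a minimal sketch (not Prowler's actual implementation) of the kind of check a scanner can run against an IAM-style policy document: flag any Allow statement that uses wildcard actions or resources. The policy shape follows AWS's standard JSON policy format; the example policy itself is hypothetical.

```python
# Minimal sketch: flag over-broad Allow statements in an IAM-style policy.
# Illustrative only -- a real scanner does far more than this wildcard check.

def find_wildcards(policy: dict) -> list[str]:
    """Return human-readable findings for Allow statements using '*'."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"statement {i}: wildcard action {actions}")
        if "*" in resources:
            findings.append(f"statement {i}: wildcard resource")
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        # Over-broad: every Bedrock action, on every resource.
        {"Effect": "Allow", "Action": "bedrock:*", "Resource": "*"},
        # Scoped: one action, one bucket prefix.
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-data/*"},
    ],
}
issues = find_wildcards(policy)
```

A real scanner would also resolve managed policies, conditions, and service-specific permission semantics; the wildcard check is just the lowest-hanging fruit.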
A
Yeah. So does that mean that the rise of AI has kind of been like plug and play for what you do already, or does it present new challenges? Do you need totally different types of security approaches?
B
When you are building a new workload, a new integration with AI, you are using some of the same services. Let's say that you are familiar with S3, for example. Okay, so you have to make sure S3 is properly configured. Right. But there are new challenges, of course, when it comes to hardening AI services or hardening an AI architecture, because there are more components. It's like securing an MCP server. Right. So you can use a local MCP connecting to a remote MCP. You have to take into account that authentication: is it on top of your application or below your application? Is it talking to the data directly with full privileges, or is it talking to the application using an API token? All those circumstances are key when it comes to securing AI applications. I mean, at the end of the day, if you think about that, it's protecting the data that you are allowing users to use.
A
And what about multi-cloud environments? Are many of your customers using different AI platforms as a kind of redundancy? Or is it, you know, Claude is better at code, so we'll use it for that and just run without any backup or failover?
B
I guess a trend that we see is using multiple clouds for different things. I haven't often seen people use multiple clouds to do exactly the same thing. It's more like, okay, I'm going to use GCP for BigQuery, I'm going to use AWS for general purpose computing, and I'm using Azure because I have my Microsoft 365, I have my Entra ID in Microsoft. So that is a very, very common trend, but not for exactly the same workload. The other case, okay, I'm going to build my custom application in all three of them or others because I need to put my eggs in different baskets, that happens, but not that often, or it happens in very large corporations.
A
Right, right. Is that because the models are just, you know, some models are better at some things and they're only offered on Azure or GCP or whatever.
B
Sorry, I was talking in general. But in AI it depends on the service, it depends on the models, it depends on the way you do the integrations. Yeah. So if you use GPT-5, it depends on which flavor of GPT-5, but it's going to be more or less the same. And the security aspect of the model itself may vary between models. Right. So that is why we also recommend scanning your model, or your custom model, for security issues.
A
Right, right. What does that involve?
B
That involves, for example, as I mentioned for Bedrock: making sure you have the proper guardrails in place, you are not leaking information, you don't allow prompt injection, you have logging enabled for model invocations, you filter explicit content, etc. So the basics are probably the OWASP, the new OWASP for AI, but it goes beyond that.
A
Right, right. Okay. So I guess, forgive my ignorance, but I'm assuming that if you're talking to, I don't know, GPT-5 or whatever, the model provider will say here's what we do, here's what we don't do. But what's the wrapping that Prowler does around that to make it even more secure? I'm assuming it's not as simple as just going, okay, the model is fine, so we can just fling off whatever we like.
B
Yeah, totally. So think about an application. You are building an application with AI, which is very common, right? Like a chatbot on top of your information that talks to different tools. Right? So first of all, you need to make sure your users are safe using your model. So of course to prevent jailbreaks, to prevent data leaks, to prevent business violations like, okay, give me the list of your employees, things like that. Right. And that LLM may have access through a tool to a database or something like that, because this is not like you talking to an LLM, like ChatGPT, for example, which is very one-to-one communication initially, right? It goes beyond that, because you are adding tools into your AI infrastructure to talk to different databases, API endpoints, etc. So making sure the information you are providing to your users is only what it should be is key. Second, the whole infrastructure around that. For example, if you are using an MCP, you are using other applications and other tools underneath. Are those tools connected properly, not only the basics like SSL or mutual SSL, etc., but also with the right level of privilege? Right. Also the tools hosting that information. A good example can be Amazon SageMaker. Amazon SageMaker is a very popular service for machine learning and to generate or work on models. And if you don't configure those SageMaker notebooks properly, you can expose those notebooks to the Internet and an attacker can use them. And even without knowing it, because you don't configure it properly, you can set up root access to those notebooks. So it's a perfect recipe for an attack. And I mean, to mine bitcoins, well, not mine bitcoins nowadays, but to do other things, or even to steal data, right? As you can see, there are different components in a cloud infrastructure that have to be secured and hardened to protect your applications, your current applications and your current applications plus AI.
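The SageMaker example maps to two concrete settings. Here is a sketch of the checks, assuming the field names returned by SageMaker's `DescribeNotebookInstance` API (`DirectInternetAccess` and `RootAccess`, each `"Enabled"` or `"Disabled"`); the evaluation logic is illustrative, not Prowler's actual code.

```python
# Evaluate a SageMaker notebook instance description for the two
# misconfigurations discussed above: direct internet exposure and root access.
# The input mirrors (a subset of) a DescribeNotebookInstance response.

def notebook_findings(nb: dict) -> list[str]:
    """Return findings for one notebook instance description."""
    findings = []
    if nb.get("DirectInternetAccess") == "Enabled":
        findings.append("notebook is directly reachable from the internet")
    if nb.get("RootAccess") == "Enabled":
        findings.append("root access is enabled on the notebook")
    return findings

exposed = {"NotebookInstanceName": "train-nb",
           "DirectInternetAccess": "Enabled", "RootAccess": "Enabled"}
hardened = {"NotebookInstanceName": "train-nb",
            "DirectInternetAccess": "Disabled", "RootAccess": "Disabled"}

exposed_findings = notebook_findings(exposed)
hardened_findings = notebook_findings(hardened)
```

Either finding on its own is a problem; together they are the "perfect recipe for an attack" Tony describes, since an internet-reachable notebook with root access gives an attacker a privileged foothold.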
A
So I'm coming into this conversation and I've got no idea. Now, when you talk to security professionals who are trying to deploy AI projects, what's the state of understanding of all of this? Are they as clueless as I am? Hopefully they're better than I am.
B
I think we are in very early days. We are in very early days for all this stuff. So that is why tools like Prowler, or other open source tools, are very important for the industry, because they make everybody say, okay, let me see before I run this, let me see if this is secure enough. Right? That is why I started Prowler. Basically it's like, okay, if you are going to do something new, make sure you are covering the basics. Of course now we don't do only the basics, we do way more than that. But the basics are key when it comes to open source, and in this case for AI security. You need to know that you are covering at least the low-hanging fruit, not even when you go live, but when you are doing your own POCs and your own applications. Right.
A
I guess at this stage it's probably the least bad way possible because it's inevitable that people are at the phase where they don't know what they're doing and so they will inevitably make mistakes and you just want to minimize those mistakes, I guess.
B
Do you remember the very early days of the web, when we had hyperlinks and plain HTML pages and that's all, and all the data was right there? And after that we started with Java, PHP, all that stuff, with a database. That changed the architecture of systems, right? It went from plain text with data, to plain text with data in another place, and all the connections, the database connections, the security around it. So here, I think in terms of architecture or ways of thinking, adding the AI component into a workload adds all of that, with its own pros and cons. Pros, because your applications are more powerful and your users are going to have a better outcome. The cons are that, of course, you need to know those components of your new architecture and how to secure them.
A
So one of the things you mentioned before is that you added attack paths as a feature to Prowler. But what does attack path mean in terms of cloud infrastructure?
B
When it comes to understanding how everything works in the cloud, or how the different connections are working in the cloud, you need to take into account more than just a three-tier architecture, right? Like in on-prem or traditional software. It goes beyond that: it's the network configuration, and remember, everything talks through APIs, service to service. And the services in the cloud are talking using roles, right? They are talking using permissions, using users' credentials, right? So attack path in the cloud means, or gives you the ability to see, how all the components connect to each other. How to connect the dots from the Internet, let's say, or from an insider, to the data, basically. So a good example of an attack path is: okay, I have a vulnerable application in an EC2 instance or in a virtual machine, with privileges to read or write to an S3 bucket with PII. So if that application is vulnerable and an attacker gets access to the instance itself, the operating system underneath that instance may have access to other services, if it has enough privileges. That other service can be an S3 bucket, can be a database, also with personal information or customer data. So that means that from the Internet, or from another account, another project, another subscription, you can go through different components, different resources, to the data. That is attack path. And that is sometimes, as I said, harder than in the traditional workloads, because there it's like, okay, I have my firewall, load balancer or WAF, load balancer, front end, back end, boom, done. Here it's like that, but including the API part and the permissions part, the ability to talk to each other, right? So that is what we are showing now at Prowler. And the users can see the multiple ways that potential attackers can do different activities.
For example, you can very easily identify over-privileged resources that the user is probably not aware of, and what the consequences are of having those overprivileged resources, right?
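The EC2-to-S3 example above is essentially graph reachability. Here is a minimal sketch, assuming a hand-built edge list rather than a real cloud inventory: resources become nodes, "can reach / can access" relationships become directed edges, and attack paths are the simple paths from the internet to data stores.

```python
# Attack-path discovery as graph traversal (BFS over a directed graph).
# The edge list below is a hypothetical inventory, not real scanner output.
from collections import deque

def attack_paths(edges, source, is_target):
    """Enumerate simple paths from `source` to any node matching `is_target`."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if is_target(node) and node != source:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                queue.append(path + [nxt])
    return paths

edges = [
    ("internet", "ec2:web-app"),       # internet-facing, vulnerable app
    ("ec2:web-app", "s3:pii-bucket"),  # instance role allows s3:GetObject
    ("ec2:web-app", "rds:customers"),  # same role can read the database
]
paths = attack_paths(edges, "internet",
                     lambda n: n.startswith(("s3:", "rds:")))
```

A production system builds the edge list from IAM roles, network rules, and resource configurations across accounts; the traversal idea is the same, connecting the dots from the internet to the data.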
A
Tony De la Fuente, CEO and founder of Prowler. Thank you very much. Thanks.
B
Thanks Tom for having me.
Episode Title: Sponsored: What AI workloads mean for Cloud security
Date: January 11, 2026
Host: Tom Uren
Guest: Tony De la Fuente, CEO and Founder of Prowler
This episode explores how the rapid growth of artificial intelligence (AI) workloads is reshaping cloud security, featuring insights from Tony De la Fuente, founder of Prowler—an open source cloud native security company. Tom and Tony discuss cloud’s emergence as the fundamental layer of applications, the intertwined evolution of AI and cloud infrastructure, challenges in securing AI platforms, the use of “attack path” analysis, and the current state of knowledge in the industry.
“The cloud is the new operating system.” [01:21 – Tony]
“AI and cloud are kind of family members, right? Because AI needs the cloud and the cloud uses AI for everything.” [01:50 – Tony]
“When it comes to securing cloud services for AI…you have to take into account not only that service itself, but all the services around it.” [02:43 – Tony]
While many underlying services are familiar (e.g., S3, API gateways), AI stacks introduce new architectural complexity.
Securing AI architectures means considering authentication placement, tool and token privileges, MCP server configuration, and the infrastructure around the model:
“If you are building an application with AI…first of all, you need to make sure your users are safe…to prevent jailbreaks, to prevent data leaks, to prevent business violations.” [07:23 – Tony]
Concrete risks include misconfigured SageMaker notebooks exposed to the Internet, root access left enabled, and tools connected with more privilege than they need.
Most organizations use different clouds for different workloads (rather than using the same type of AI workload for redundancy across clouds).
“A trend that we see is using multiple clouds for different things…not for exactly the same.” [04:47 – Tony]
Large enterprises may sometimes build redundancy this way, but it’s uncommon.
“The basics are probably the OWASP, the new OWASP for AI, but beyond that.” [06:47 – Tony]
“I think we are in very early days for all this stuff. So that is why tools like Prowler or other tools…are very important for the industry.” [10:01 – Tony]
“Adding the AI component into a workload adds all of that with its own pros and cons.” [11:23 – Tony]
“Attack path in the cloud means…to see how all the components connect each other. How to connect the dots from the Internet, let's say…to the data.” [13:00 – Tony]
Guest: Tony De la Fuente, CEO and founder of Prowler
Host: Tom Uren
Podcast: Risky Bulletin
(Advertisements, intros, and outros were excluded from this summary.)