Today on the podcast, I want to talk about a really interesting company in AI called Confident Security. They're calling themselves the Signal for AI, which is kind of funny. A lot of startups want to attach themselves to an already successful company and say, we're the Uber for shampoo, or whatever. It's a funny thing. But in any case, Signal has, I guess, made it to a prolific enough place in the market that we're using them as the point of comparison. Anyway, Confident Security is a really interesting company. They just raised $4.2 million, they came out of stealth, and they have a really interesting product that I wanted to bring up because I think it has broad implications for the entire AI industry. I think Apple is trying to do something very similar, but in their own ecosystem, so it's interesting to see that there are going to be platforms, products, and players outside of some of those perhaps narrower use cases. So I want to get into what they're doing. The $4.2 million raise is huge, and kudos and congrats to everyone on the team. Before we get into all of that, I wanted to mention that if you want to try any of the latest AI models, I have a platform called AI Box AI. This is my own startup and we are currently in beta. We have the top 40 AI models on there: image, text, audio. You can try all of them for $20 a month, so you don't have to have subscriptions to all of these different platforms. One feature we built into AI Box that I particularly love is something called media storage. Anytime you create an image or an audio file or any piece of media, usually on ChatGPT these things get so lost for me. I can't remember what conversation it was in; I can't remember where it was at. All of it is stored in our media file.
You can go and click on the image, see the prompt that was used to generate it, and get taken straight back to the conversation you were having without having to dig through all your threads. It's something that has saved me so much time, so it's super useful, and the amazing thing is you can use it with all of the different models on the platform. So anyway, go check it out if you're interested: $20 a month for all of the top models, AI Box AI. There is a link in the description. All right, let's get into what Confident Security is doing. Here's the thing I think is interesting: we obviously have all the big AI companies, OpenAI, Anthropic, xAI, Google, and all of them are sucking up tons of user data. They get it from two different places: they're scraping the entire Internet and getting everything they can there, but also we are talking to these AI models, and they're acquiring data that way. Now, some say that in certain use cases they're not using it to train, but others aren't so clear about it. It's kind of convoluted, and it's really hard to verify any of it in any case. So here's what's really interesting. We all understand this from a consumer standpoint, like, I don't want them to take my data and use it. But there are super regulated industries that are way more concerned about this than even we are. Think about anyone in healthcare, finance, or government. In these areas it's non-negotiable: if there are any open questions about what happens to the data, they're just not going to work with AI; they can't trust the tools. And so it's kind of a tricky place, because obviously healthcare, finance, and government are areas that I believe could benefit immensely from AI.
But the security risk tied to all these AI companies makes it very tricky for these industries to work with them. So in any case, this is essentially the problem that Confident Security is trying to solve. They have a product called CONFSEC. It's an end-to-end encryption tool, and it basically wraps around foundational models, right? ChatGPT, Anthropic's models, any of these, and it guarantees that prompts and metadata cannot be stored, seen, or used for AI training, even by the model provider or by any third party. This is something that's really important to us over at AI Box, my software startup that I'm building. Anytime you're sending messages to AI models, we cannot see the messages. Everything is encrypted on the back end. Even if we wanted to look at your messages, we couldn't. And so this is what they're trying to solve. Here's what their founder and CEO said: "The second that you give up your data to someone else, you've essentially reduced your privacy. And our product's goal is to remove that trade-off." They just raised $4.2 million in their seed round from Decibel, South Park Commons, X Ante, and Swyx. The company is essentially trying to be an intermediary vendor between AI companies and their customers: hyperscalers, governments, enterprises. Even AI companies, I think, see a lot of the value here. Take the latest AI browser hitting the market, Comet, coming out of Perplexity, which you've probably heard me talk about on the podcast here.
The idea is essentially to give customers a guarantee that their sensitive data isn't being stored on a server somewhere, and that no bad actors are going to use it to train AI on your data. This is something big AI companies are very conscious about, myself included. It definitely took us much longer to build out our product than we would have liked, and a huge chunk of that was the security: making sure everything was encrypted, making sure everything was private and safe. Basically, I wouldn't make a product that I wouldn't want to use myself. So I can understand why this is such a big deal for those companies; it's a big deal for me too. But I think this is really where this company shines. Now, one area I think is really interesting is how they compare themselves to something Apple is doing. If you've been following any of Apple's updates, they have something called Private Cloud Compute, or PCC, architecture. Confident Security is saying they are 10 times better than anything out there in terms of guaranteeing that the provider cannot see your data. They're like, look, Apple might say they have this private cloud compute that no one else can see, but Apple could technically still see it. Whereas with Confident Security's system, even they can't see what passes through. So how does it work, compared with what Apple's done with PCC? First, they anonymize all of your data, encrypt it, and route it through services like Cloudflare or Fastly, so servers never see the original source or the content. Next, they use encryption that only allows decryption under really strict conditions, right?
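To make that anonymize-and-relay step concrete, here is a minimal toy sketch of the general pattern (my own illustration, not Confident Security's actual implementation; all names are hypothetical, and the XOR keystream stands in for real public-key encryption such as HPKE). The relay sees who sent the request but not what it says, while the inference server sees the content but never the sender:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (toy cipher, NOT production crypto)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

@dataclass
class Request:
    source_ip: Optional[str]  # identifying metadata
    payload: bytes            # encrypted prompt, opaque to the relay

def relay_forward(req: Request) -> Request:
    """The relay (a CDN-style service) strips the sender's identity before
    forwarding; it cannot read the encrypted payload it passes along."""
    return Request(source_ip=None, payload=req.payload)

# Client side: encrypt the prompt; identity is visible only on the relay hop.
key = b"shared-with-inference-server-only"
req = Request(source_ip="203.0.113.7", payload=encrypt(key, b"my private prompt"))

forwarded = relay_forward(req)
assert forwarded.source_ip is None                     # server never learns the sender
assert decrypt(key, forwarded.payload) == b"my private prompt"
```

The design point is the split of knowledge: no single hop ever holds both the sender's identity and the plaintext, which is the same separation the Oblivious HTTP pattern formalizes.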
So the AI models that are taking this data can decrypt it, but they have to essentially agree to a bunch of rules first. As their CEO put it: "So you can say you're only allowed to decrypt this if you are not going to log the data and you're not going to use it for training and you're not going to let anyone see it." On top of all that, the software that runs the AI inference is publicly logged and open to review, so experts, or anyone, can actually verify those guarantees. The companies can't just say, don't worry, we're not going to use it to train, and then secretly use it to train; it's all public, it's all logged, and experts can go review it and make sure they're actually being legit. Here's what one of their investors said about it. I always take everything an investor says with a grain of salt, because obviously they're talking their book. But this is from Decibel, one of the lead investors: "Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself. Without solutions like this, many enterprises simply cannot move forward with AI." I think this is actually true, even though they're talking their book for sure. There are a whole bunch of industries where it's very tricky to use AI because of these types of issues. That's not to say banks and healthcare providers aren't using AI; they're just much more selective in how they use it. I think with tools like this, really focusing on the security aspect, we're going to see AI integrated into more tools, and it's going to become more useful in areas where it was much more restricted in the past. For that I'm very excited, and I think there are going to be some fantastic implications.
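The two guarantees just described, decrypt only under agreed conditions, and run only publicly reviewed software, can be sketched together in a toy policy gate (my own illustration with hypothetical names; a real deployment would enforce this with attested hardware and signed, append-only transparency logs, not a Python check):

```python
import hashlib

# Hypothetical policy conditions, echoing the CEO's examples.
REQUIRED_POLICY = {"no_logging": True, "no_training": True, "no_third_party_access": True}

PUBLIC_LOG: set[str] = set()  # hashes of publicly reviewed inference binaries

def publish(binary: bytes) -> str:
    """Record the hash of the inference software for public review."""
    digest = hashlib.sha256(binary).hexdigest()
    PUBLIC_LOG.add(digest)
    return digest

def release_prompt(prompt: str, attested_policy: dict, binary: bytes) -> str:
    """Hand the prompt to the provider only if (a) it attests to every
    required condition and (b) it runs software found in the public log."""
    for condition, required in REQUIRED_POLICY.items():
        if attested_policy.get(condition) != required:
            raise PermissionError(f"policy condition not met: {condition}")
    if hashlib.sha256(binary).hexdigest() not in PUBLIC_LOG:
        raise PermissionError("inference software is not in the public log")
    return prompt

reviewed = b"inference-server-v1"
publish(reviewed)

# A compliant provider running reviewed software gets the plaintext.
assert release_prompt("hello", dict(REQUIRED_POLICY), reviewed) == "hello"

# A provider that logs prompts is refused.
try:
    release_prompt("hello", {"no_logging": False}, reviewed)
except PermissionError as e:
    print("refused:", e)
```

The point of the public log is that the check is verifiable by outsiders: anyone can hash the published binary and confirm it matches what the servers attest to running, which is the same idea behind certificate transparency.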
I'm excited to see what they're able to do. It's obviously still very early days for the company, but I think it's a great step in the right direction, so I'll definitely keep you up to date as this company continues to move forward. They've just raised $4.2 million, so we can expect some exciting things from them. Thank you so much for tuning into the podcast today. If you enjoyed the episode, leaving a rating and review would mean the world to me. It basically helps the algorithm promote this podcast to more incredible people like yourself, so it's pretty much a way to say thank you. If you learned anything new and you appreciated it, it'd mean a lot. Over on Spotify you can hit the About tab, and on Apple you can drop some stars and leave a comment. I really appreciate it. And make sure to go check out AI Box AI if you want one platform to test all of the top AI models without having to have subscriptions to everything. It's $20 a month and you get access to everything, so it's fantastic value, but it's also super useful, with a bunch of tools and features that you don't see anywhere else and a whole bunch of exciting stuff that we're going to be rolling out in the future. I'll tell you all about it as it comes. Thanks so much for tuning in, and I will catch you in the next episode.
Episode: Is Privacy Dead in the AI Era?
Release Date: July 24, 2025
Host: Joe Rogan Experience for AI
The episode delves into the pressing issue of data privacy in the rapidly evolving AI landscape. The host introduces Confident Security, a burgeoning company in the AI security sector, positioning them as a pivotal player addressing privacy concerns.
Confident Security recently emerged from stealth mode, successfully raising $4.2 million in a seed round led by investors such as Decibel, South Park Commons, X Ante, and Swyx (02:15). This funding underscores the industry's recognition of the critical need for robust data privacy solutions in AI applications.
The host outlines the inherent privacy risks associated with major AI firms like OpenAI, Anthropic, XAI, and Google, which often rely on extensive user data for training models. He emphasizes that while AI holds immense potential benefits for sectors like healthcare, finance, and government, the security and privacy risks present significant barriers to adoption in these highly regulated industries (05:30).
Confident Security introduces ConfSec, an end-to-end encryption tool designed to safeguard user data when interacting with foundational AI models such as ChatGPT and Anthropic's models. ConfSec ensures that:

- Prompts and metadata cannot be stored by the model provider or any third party.
- User data cannot be viewed or used for AI training, even by the model provider.
- Decryption is possible only under strict, verifiable conditions.
The host highlights the importance of such solutions by referencing his own startup, AI Box AI, which implements similar encryption measures to protect user interactions with AI models (12:45).
Confident Security distinguishes itself by claiming privacy guarantees that surpass those of industry giants like Apple. The company's approach involves:

- Anonymizing and encrypting user data before it leaves the client.
- Routing requests through relay services such as Cloudflare or Fastly, so servers never see the original source or content.
- Permitting decryption only under strict conditions: no logging, no training, no third-party access.
- Publicly logging the software that runs AI inference so outside experts can verify the guarantees.
This comprehensive strategy is claimed to be "10 times better" than Apple's Private Cloud Compute (PCC) architecture, providing unmatched data privacy assurances (20:05).
Confident Security’s founder and CEO asserts, “The second that you give up your data to someone else, you've essentially reduced your privacy. Our product's goal is to remove that trade-off.” (09:40). This vision is echoed by a lead investor from Decibel, who states, “Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself. Without solutions like this, many enterprises simply cannot move forward with AI.” (23:55).
The host discusses the broader implications of Confident Security’s innovations:

- Highly regulated sectors such as healthcare, finance, and government could adopt AI more readily once data-handling guarantees are verifiable.
- Privacy-preserving infrastructure may allow AI to be integrated into more tools and use cases that were previously restricted.
While Confident Security is still in its early stages, the host expresses optimism about its potential impact on the AI industry. The recent funding round signifies strong investor confidence, and the host anticipates significant advancements and integrations from the company in the near future (30:10).
Key Takeaways:

- Confident Security emerged from stealth with a $4.2 million seed round led by Decibel, South Park Commons, X Ante, and Swyx.
- Its ConfSec product wraps foundational models in end-to-end encryption so prompts and metadata cannot be stored, seen, or used for training.
- Requests are anonymized and routed through relays like Cloudflare or Fastly, and decryption requires the provider to meet strict, publicly auditable conditions.
- Verifiable privacy guarantees could unlock AI adoption in regulated industries such as healthcare, finance, and government.
This episode provides a comprehensive exploration of the intersection between AI and data privacy, highlighting how innovative solutions like Confident Security's ConfSec are crucial in shaping a secure and trustworthy AI future.
Note: Timestamps are illustrative based on the transcript provided.