Reshaping Workflows with Dell Pro Precision and NVIDIA RTX PRO GPUs
Episode: GTC Bonus: Fully Encrypting AI Workloads with Pankaj Thapa of Mirror Security
Host: Logan Lawler
Guest: Pankaj Thapa, Co-founder & CEO, Mirror Security
Date: March 20, 2026
Episode Overview
This GTC bonus episode features Logan Lawler interviewing Pankaj Thapa, Co-founder and CEO of Mirror Security. The conversation centers on how Mirror Security enables fully encrypted AI workloads, protecting confidential data not only in storage and in transit but throughout active AI model inference and memory. The discussion demystifies state-of-the-art encryption techniques, real-world workflows, and how these capabilities integrate into both on-premises and cloud-based AI solutions, particularly in regulated industries that are highly sensitive to data exposure.
Key Discussion Points & Insights
1. The Challenge: AI Data Exposure and Security
- The Core Problem:
- AI systems require data sharing, which creates significant exposure risks, especially for organizations in regulated industries.
- “We are basically solving one of the biggest challenges with AI which is the data exposure risk. […] If you have to derive intelligence from these AI models, you will have to share the data. So regulatory industry, I mean, so the data is very close to their heart.” (Pankaj Thapa, 00:43)
- Encryption Scope:
- Mirror Security brings end-to-end encryption that covers prompts, context, and documents—not just in storage, but during AI inference as well.
2. Innovations: Encrypted AI Inference and Memory
- Breakthrough Capabilities:
- Encrypted AI Inference: AI models operate on encrypted data—never seeing data in plain text at any stage.
- Encrypted AI Memory: The context that makes AI models smarter is also fully encrypted, protecting sensitive histories and memories from leaks or breaches.
- “Your prompts, your context, your documents are all encrypted and these models are able to do the operations and inferencing on the encrypted data itself. We call this encrypted AI inference.” (Pankaj Thapa, 00:56)
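To build intuition for "operations on encrypted data," the toy below uses simple additive masking: a server sums values it cannot read, and only the client, who holds the masks, can unmask the result. This is an illustrative stand-in, not Mirror Security's scheme; production encrypted inference relies on far stronger constructions (e.g., fully homomorphic encryption), and `enc`, `dec_sum`, and `MOD` here are invented for the sketch.

```python
import random

# Toy additive masking over a modulus: the "server" can sum ciphertexts
# without learning the underlying values; only the client, holding the
# masks, can unmask the result. Illustrative only.
MOD = 2**32
masks = []  # held by the client, never sent to the server

def enc(x: int) -> int:
    """Client side: mask a value with a fresh random offset."""
    r = random.randrange(MOD)
    masks.append(r)
    return (x + r) % MOD

def dec_sum(ct_sum: int) -> int:
    """Client side: remove the accumulated masks from a summed ciphertext."""
    return (ct_sum - sum(masks)) % MOD

cts = [enc(v) for v in [10, 20, 12]]   # client encrypts its data
server_sum = sum(cts) % MOD            # server computes on ciphertext only
print(dec_sum(server_sum))             # → 42
```

The server never handles a plaintext value at any point, which is the essence of the "model never sees data in plain text" claim.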
3. Workflow Integration: SDKs and Real-World Use
- Workflow Fit (RAG Apps Example):
- Mirror Security provides an SDK that works seamlessly with all major vector databases and AI frameworks.
- The system ensures documents are always encrypted, even during advanced operations like semantic search or retrieval-augmented generation (RAG).
- “When you are ingesting any documents, so it will be encrypted and our technology enables encrypted semantic search. So that's the breakthrough. Which means so never in the pipeline. I mean these documents sit in the encrypted space.” (Pankaj Thapa, 02:14)
- Security Strengths:
- Even in air-gapped environments, threats such as ransomware and insider attacks are mitigated.
- AI-driven attacks that can reconstruct documents from embeddings are neutralized, as those embeddings are never accessible in unencrypted form.
- “Now people are using AI to basically attack your systems. So with 92% accuracy they are able to reconstruct your document. So big threat. So even if it is air gap environment...” (Pankaj Thapa, 02:39)
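Mirror's encrypted semantic search is proprietary, but the general idea of searching data the server cannot read can be illustrated with a much simpler cousin: searchable symmetric encryption over exact keywords, where the client blinds tokens with a keyed hash so the index and queries the server handles are opaque digests. Everything here (`blind`, `KEY`, the sample documents) is invented for the sketch and is not Mirror Security's SDK.

```python
import hmac
import hashlib

KEY = b"demo-key"  # in practice: a securely managed client-side key

def blind(token: str) -> str:
    """Deterministically blind a token with a keyed hash (HMAC-SHA256).
    The server can match blinded tokens without ever seeing plaintext."""
    return hmac.new(KEY, token.lower().encode(), hashlib.sha256).hexdigest()

# Client side: blind every token before it leaves the machine.
docs = {
    "doc1": "patient record shows elevated glucose",
    "doc2": "quarterly revenue grew steadily",
}
index = {doc_id: {blind(t) for t in text.split()} for doc_id, text in docs.items()}

# Server side: match a blinded query against the blinded index —
# it only ever handles opaque HMAC digests, never the documents.
def search(blinded_query: set[str]) -> list[str]:
    return [doc_id for doc_id, tokens in index.items() if blinded_query & tokens]

query = {blind(t) for t in "glucose record".split()}
print(search(query))  # → ['doc1']
```

Real encrypted semantic search must additionally protect embedding vectors (the attack surface the 92%-reconstruction figure refers to), which is a substantially harder problem than keyword matching.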
4. Cloud and Coding Assistant Use Cases
- End-to-End Encryption for AI Coding Assistants:
- Compatible with all major open-source models (e.g., Llama, Mistral, Qwen) used in coding assistants.
- Mirror Security can integrate as an extension in development environments (like Visual Studio), encrypting code before indexing and cloud transfer.
- “We sit as an extension in some of these Visual Studios. Which means, as soon as your code base start getting indexed, so these are indexed in plain text and sent to the cloud. But with Mirror, everything is encrypted before the indexing happens and when you are generating the code it is all into an encrypted.” (Pankaj Thapa, 03:25)
- Commercial Cloud Platforms (e.g., GPT, Claude):
- For proprietary or sensitive codebases on platforms like GPT or Claude, Mirror directs data through confidential computing routes and provides an encryption gateway to ensure nothing is exposed in the clear.
- “We take them through the confidential computing route. I mean, that's where our gateway comes into the picture. Right. So we ensure that your code, which is again some form of data, proprietary data and it has to be protected.” (Pankaj Thapa, 03:46)
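To make the "encryption gateway" idea concrete, here is a minimal client-side sketch: code is encrypted before it is forwarded, so the upstream cloud service only ever receives ciphertext. The toy stream cipher (iterated SHA-256) and the `gateway_forward` envelope are invented for illustration; a real gateway would use a vetted AEAD cipher such as AES-GCM and, per the episode, route traffic through confidential computing.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy stream cipher via iterated SHA-256 in counter mode.
    Illustrative only — use a vetted AEAD cipher in anything real."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> dict:
    """Encrypt with a fresh nonce; return a hex-encoded envelope."""
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return {"nonce": nonce.hex(), "ciphertext": ct.hex()}

def gateway_forward(key: bytes, source_file: str, code: str) -> dict:
    """What a client-side encryption gateway might do: encrypt the code
    payload so the upstream cloud service receives only ciphertext."""
    return {"file": source_file, "payload": encrypt(key, code.encode())}

key = os.urandom(32)
envelope = gateway_forward(key, "app/main.py", "def handler(): ...")
# The cloud side sees only hex ciphertext, never the source code.
```

The same pattern applies to the coding-assistant case: intercept the codebase before indexing, encrypt it client-side, and only then let anything leave the machine.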
Memorable Quotes & Moments
- “Honestly, I've never heard of this before. I think it's actually fantastic.”
  —Logan Lawler, reacting to the concept (04:03)
- “Now people are using AI to basically attack your systems. So with 92% accuracy they are able to reconstruct your document. So big threat.”
  —Pankaj Thapa, outlining the modern threat landscape (02:39)
- “We sit as an extension in some of these Visual Studios...everything is encrypted before the indexing happens and when you are generating the code it is all into an encrypted.”
  —Pankaj Thapa, practical workflow insight (03:25)
Connect and Learn More
- Mirror Security Website: mirrorsecurity.io (04:25)
- LinkedIn: Find Pankaj Thapa and Mirror Security for direct engagement (04:29)
Timestamps for Major Segments
- 00:43: The core challenge of AI data exposure
- 00:56: Explaining encrypted AI inference & memory
- 02:14: RAG workflow integration & SDK
- 02:39: Emerging threats and AI security gaps
- 03:25: Cloud & coding assistant use cases
- 04:03: Host’s reaction and practical appreciation
- 04:25: Where to learn more about Mirror Security
“That's why I love the NVIDIA Inception area at GTC because you get to hear problems, these are the companies that solve it. So it's fantastic.”
—Logan Lawler, closing thoughts (04:35)
For listeners interested in real-world AI security, practical encryption workflows, and the frontiers of innovation at the intersection of data protection and AI, this episode offers insight, examples, and actionable paths to engage with Mirror Security’s solutions.
