
In this eye-opening episode of AI Confidential, I had the privilege of hosting two pioneers in AI security and privacy: Daniel Rohrer, VP of Software Security at NVIDIA, and Raluca Ada Popa, Professor at UC Berkeley, Co-Director of UC Berkeley Skylab, and Co-Founder and President of Opaque Systems. Together, we explored the cutting edge of privacy-preserving AI technology and its implications for the future of innovation. Watch the full episode on YouTube →
The Hardware Revolution
One of the most exciting developments we discussed was NVIDIA’s introduction of GPU hardware enclaves with the H100. As Daniel explained, this capability, available through cloud providers such as Azure since September 2023, fundamentally changes what’s possible in secure AI computing: for the first time, organizations can achieve true end-to-end security for computationally intensive AI workloads at scale.
The Power of Attestation
Raluca brought a unique academic and entrepreneurial perspective to our discussion of how confidential computing transforms trust in AI systems. The key insight? It’s not just about encryption—it’s about proving exactly what happens to data throughout the AI pipeline. Through confidential computing, organizations can now:
- Cryptographically verify code execution
- Track model access to data
- Document complete data lineage
- Ensure compliance through technical guarantees
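The verification step behind these guarantees can be sketched in a few lines. This is a hypothetical toy, not NVIDIA’s or any vendor’s actual attestation API: in real confidential computing, the signature is produced by a hardware-rooted key chained to the vendor’s certificates, whereas here an HMAC with a shared key stands in for the hardware signature.

```python
import hashlib
import hmac

# Stand-in for a hardware-rooted attestation key (hypothetical).
ATTESTATION_KEY = b"stand-in-for-hardware-rooted-key"

def measure(code: bytes) -> str:
    """Measurement: a cryptographic hash of the code to be executed."""
    return hashlib.sha256(code).hexdigest()

def sign_report(measurement: str) -> str:
    """The enclave signs its measurement (HMAC stands in for the
    hardware attestation signature)."""
    return hmac.new(ATTESTATION_KEY, measurement.encode(),
                    hashlib.sha256).hexdigest()

def verify_report(measurement: str, signature: str, expected: str) -> bool:
    """A relying party checks the signature and that the measurement
    matches the exact code it expects to be running."""
    valid_sig = hmac.compare_digest(sign_report(measurement), signature)
    return valid_sig and measurement == expected

# A data owner only releases data if the attested code is unmodified.
code = b"def pipeline(data): ..."
report = sign_report(measure(code))
print(verify_report(measure(code), report, measure(code)))
```

The point of the sketch is the trust model: the data owner never inspects the running system directly; it trusts the signed measurement, so any change to the code changes the hash and verification fails.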
Beyond Traditional Security
Our conversation revealed how these capabilities enable entirely new forms of collaboration and innovation. Organizations can now:
- Process sensitive data while maintaining encryption
- Enable secure multi-party computation with verifiable guardrails
- Protect both data and model weights in AI workflows
- Maintain documented compliance while driving innovation
Real-World Impact
The applications we explored were compelling: from healthcare institutions collaborating on better treatment protocols to financial institutions jointly fighting fraud. What makes these use cases possible isn’t just the encryption; it’s the ability to prove exactly how data is being used.
The Path Forward
As both Daniel and Raluca emphasized, attestable AI pipelines aren’t just a security feature; they’re becoming a business necessity. In today’s AI-driven world, losing control of your data is no temporary setback: it can have irreversible consequences for competitiveness and security.
The future belongs to organizations that can not only protect their data but prove how it’s being used. Confidential computing makes this possible, turning data privacy from a constraint into a catalyst for innovation.
Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.
Join me for the next episode of AI Confidential where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.
As we move into this new era of secure AI, how is your organization balancing innovation with data privacy? Share your approach in the comments below.