Securing the AI Renaissance: Reflections from the Engine Room

There are moments in technology that stay with you. I remember sitting at my first computer, writing my first lines of code. The feeling wasn’t explosive excitement – it was deeper than that. It was the quiet realization that I was learning to speak a new language, one that could create something from nothing.

Later, when I first connected to the internet, that same feeling returned. The world suddenly felt both larger and more accessible. These weren’t just technological advances – they were transformative shifts in how we interact with information and each other.

Today, working on confidential computing for AI agents at Opaque, I recognize that same profound sense of possibility.

The Mathematics of Trust

The parallels to those early computing days keep surfacing in my mind. Just as the early internet needed protocols and security standards to become the foundation of modern business, AI systems need robust security guarantees to reach their potential. The math makes this necessity clear: with each additional AI agent in a system, the probability that at least one agent exposes data (or leaks a model) compounds. At just a 1% risk per agent, a network of 1,000 independent agents is all but certain to suffer a breach; the odds of avoiding one are 0.99^1000, about 0.004%.
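To make the compounding concrete, here is a quick back-of-the-envelope sketch. It assumes independent, identical per-agent risk, which real deployments won't satisfy exactly, but the trend is what matters:

```python
def breach_probability(per_agent_risk: float, num_agents: int) -> float:
    """Probability that at least one of n independent agents is breached."""
    return 1 - (1 - per_agent_risk) ** num_agents

for n in (1, 10, 100, 1000):
    print(f"{n:>4} agents: {breach_probability(0.01, n):.3%}")
# prints roughly 1.000%, 9.562%, 63.397%, 99.996%
```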

This isn’t abstract theory – it’s the reality our customers face as they scale their AI operations. It reminds me of the early days of networking, when each new connection both expanded possibilities and introduced new vulnerabilities.

Learning from Our Customers

Working with organizations like ServiceNow, Encore Capital, and the European Union has been particularly illuminating. The challenges echo those fundamental questions from the early days of computing: How do we maintain control as systems become more complex? How do we preserve privacy while enabling collaboration?

When our team demonstrates how confidential computing can solve these challenges, I see the same recognition I felt in those early coding days – that moment when complexity transforms into clarity. It’s not about the technology itself, but about what it enables.

Why This Matters Now

The emergence of AI agents reminds me of the early web. We’re at a similar inflection point, where the technology’s potential is clear but its governance structures are still emerging. At Opaque, we’re building something akin to the security protocols that made e-commerce possible – fundamental guarantees that allow organizations to trust and scale AI systems.

Consider how SSL certificates transformed online commerce. Our work with confidential AI is similar, creating trusted environments where AI agents can process sensitive data while maintaining verifiable security guarantees. It’s about building trust into the foundation of AI systems.

The Path Forward

The technical challenges we’re solving are complex, but the goal is simple: enable organizations to use AI with the same confidence they now have in web technologies. Through confidential computing, we create secure enclaves where AI agents can collaborate while maintaining strict data privacy – think of it as end-to-end encryption for AI operations.
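As a rough analogy in code, here is a minimal sketch (using the third-party Python `cryptography` package; the enclave, attestation, and key-release steps are simulated away) of data that stays encrypted everywhere except inside the protected environment that processes it:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The data owner encrypts under a key that a key-management service would
# release only to an enclave that passes attestation (not modeled here).
data_key = Fernet.generate_key()
record = Fernet(data_key).encrypt(b"customer_id=123; balance=9450.00")

# At rest and in transit, ciphertext is all any intermediary ever sees.
print(record[:24], b"...")

# Inside the attested enclave, the unsealed key decrypts the record; the
# plaintext exists only in hardware-protected memory while being processed.
plaintext = Fernet(data_key).decrypt(record)
assert plaintext.startswith(b"customer_id")
```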

Our work with ServiceNow (and other companies) demonstrates this potential. As their Chief Digital Information Officer Kellie Romack noted, this technology enables them to “put AI to work for people and deliver great experiences to both customers and employees.” That’s what drives me – seeing how our work translates into real-world impact.

Looking Ahead

Those early experiences with coding and the internet shaped my understanding of technology’s potential. Now, working on AI security, I feel that same sense of standing at the beginning of something transformative. We’re not just building security tools – we’re creating the foundation for trustworthy AI at scale.

The challenges ahead are significant, but they’re the kind that energize rather than discourage. They remind me of learning to code – each problem solved opens up new possibilities. If you’re working on scaling AI in your organization, I’d value hearing about your experiences and challenges. The best solutions often come from understanding the real problems people face.

This journey feels familiar yet new. Like those first lines of code or that first internet connection, we’re building something that will fundamentally change how we work with technology. And that’s worth getting excited about.

Further Reading

For those interested in diving deeper into the world of AI agents and confidential computing, here are some resources:

  • Constitutional AI: Building More Effective Agents
    Anthropic’s foundational research on developing reliable AI agents. Their work on making agents more controllable and aligned with human values directly influences how we think about secure AI deployment.
  • Microsoft AutoGen: Society of Mind
    A fascinating technical deep-dive into multi-agent systems. This practical implementation shows how multiple AI agents can collaborate to solve complex problems – exactly the kind of interactions we need to secure.
  • ServiceNow’s Journey with Confidential Computing
    See how one of tech’s largest companies is implementing these concepts in production. ServiceNow’s experience offers valuable insights into scaling AI while maintaining security and compliance.
  • Microsoft AutoGen Documentation
    The technical documentation that underpins practical multi-agent implementations. Essential reading for understanding how agent-to-agent communication works in practice.

Privacy Meets Innovation: A New Era of Secure AI

In this eye-opening episode of AI Confidential, I had the privilege of hosting two pioneers in AI security and privacy: Daniel Rohrer, VP of Software Security at NVIDIA, and Raluca Ada Popa, Professor at UC Berkeley, Co-Director of UC Berkeley Skylab, and Co-Founder and President of Opaque Systems. Together, we explored the cutting edge of privacy-preserving AI technology and its implications for the future of innovation. Watch the full episode on YouTube →

The Hardware Revolution

One of the most exciting developments we discussed was NVIDIA’s recent introduction of GPU Hardware Enclaves with the H100. As Daniel explained, this breakthrough, which became available through cloud providers like Azure in September 2023, fundamentally transforms what’s possible with secure AI computing. For the first time, organizations can achieve true end-to-end security for computationally intensive AI workloads at scale.

The Power of Attestation

Raluca brought a unique academic and entrepreneurial perspective to our discussion of how confidential computing transforms trust in AI systems. The key insight? It's not just about encryption; it's about proving exactly what happens to data throughout the AI pipeline. Through confidential computing, organizations can now do all of the following (a simplified verification sketch follows the list):

  • Cryptographically verify code execution
  • Track model access to data
  • Document complete data lineage
  • Ensure compliance through technical guarantees
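To make the verification step concrete, here is a deliberately simplified sketch. The report layout, field names, and the HMAC stand-in for a hardware-rooted signature are illustrative assumptions; production attestation (for example, Intel TDX or AMD SEV-SNP reports) uses vendor-signed binary structures validated against certificate chains:

```python
import hashlib
import hmac
import json

def verify_attestation(report: dict, expected_measurement: str,
                       trusted_key: bytes) -> bool:
    """Accept a workload only if the report is authentic and the enclave's
    code measurement matches what we expect to be touching our data."""
    body = json.dumps(report["body"], sort_keys=True).encode()
    # Authenticity: real systems use an asymmetric signature chained to the
    # CPU vendor's root of trust; an HMAC keeps this sketch dependency-free.
    expected_sig = hmac.new(trusted_key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False
    # Code identity: the measurement is a hash of the exact code loaded into
    # the enclave, so a match proves *what* will process the data.
    return report["body"]["measurement"] == expected_measurement
```

The same check, extended with signed logs of which models read which datasets, is what turns "trust us" into documented data lineage.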

Beyond Traditional Security

Our conversation revealed how these capabilities enable entirely new forms of collaboration and innovation. Organizations can now:

  • Process sensitive data while maintaining encryption
  • Enable secure multi-party computation with verifiable guardrails
  • Protect both data and model weights in AI workflows
  • Maintain documented compliance while driving innovation

Real-World Impact

The applications we explored were compelling: from healthcare institutions collaborating on better treatment protocols to financial institutions jointly fighting fraud. What makes these use cases possible isn’t just the encryption—it’s the ability to prove exactly how data is being used.

The Path Forward

As both Daniel and Raluca emphasized, attestable AI pipelines aren’t just a security feature—they’re becoming a business necessity. In today’s AI-driven world, losing control of your data isn’t just a temporary setback—it can have irreversible consequences for competitiveness and security.

The future belongs to organizations that can not only protect their data but prove how it’s being used. Confidential computing makes this possible, turning data privacy from a constraint into a catalyst for innovation.


Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.

Join me for the next episode of AI Confidential where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.

As we move into this new era of secure AI, how is your organization balancing innovation with data privacy? Share your approach in the comments below.

The Great AI Race: Security, Scale, and Why Data Control Matters

When I sat down with Teresa Tung from Accenture for Episode 2 of AI Confidential (available on YouTube as well as Spotify), I was struck by a stark reality many enterprise leaders are facing: while 75% of CXOs recognize the critical need for high-quality data to power their generative AI initiatives, nearly half lack the trusted data required for operational deployment.

Read the full conversation breakdown in our newsletter →

This gap isn’t just a statistic—it’s a story I’ve seen play out repeatedly across boardrooms and technical teams. As companies rush to embrace generative AI, they’re discovering that the real challenge isn’t implementing the technology—it’s protecting and leveraging their most valuable asset: data.

Teresa shared a fascinating perspective from her work at Accenture that resonated deeply with me. She pointed out that in the next five years, industry leadership will be determined not by who has the most advanced AI models, but by who can effectively control and utilize their data. It’s a shift that reminds me of the early days of digital transformation, where companies that failed to adapt quickly found themselves in a Kodak-like situation.

The Security Paradox

Here’s the challenge that keeps enterprise architects, CTOs, and CIOs up at night: the most valuable data for AI applications is often the most sensitive. Whether it’s financial records, customer interactions, or proprietary research, this “crown jewel” data holds transformative potential but comes with enormous risk.

During our conversation, Teresa shared an illuminating example from an automotive manufacturer grappling with this exact dilemma. The company saw tremendous potential in using AI to enhance customer interactions but faced the fundamental challenge of keeping sensitive data secure while making it actionable.

Beyond Pilot Purgatory

What’s become clear through my conversations with technology leaders is that many organizations are stuck in what I call “pilot purgatory”—they can experiment with AI on non-sensitive data, but can’t scale to production because they lack frameworks for securing sensitive data at scale.

This is where technologies like confidential computing enter the picture. As Teresa and I discussed, it's no longer just about encrypting data at rest or in transit; it's about maintaining security while data is being processed. This capability is transforming how companies approach AI implementation, enabling them to do the following (a toy multi-party computation sketch follows the list):

  • Process sensitive data while maintaining encryption
  • Share insights without exposing raw data
  • Create new business models through secure multi-party computation
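The last point is easiest to see in miniature. Below is a toy additive-secret-sharing sketch; the two-party setup and figures are made up for illustration, and production multi-party computation layers attestation, authenticated channels, and far richer computations on top of this idea:

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic happens modulo a large prime

def share(value: int) -> tuple[int, int]:
    """Split a private value into two shares, each individually meaningless."""
    r = secrets.randbelow(PRIME)
    return r, (value - r) % PRIME

# Two institutions each hold a private fraud-loss figure (illustrative numbers).
a0, a1 = share(1_250_000)
b0, b1 = share(980_000)

# Each compute party sums only the shares it holds; neither sees a raw input.
partial_0 = (a0 + b0) % PRIME
partial_1 = (a1 + b1) % PRIME

total = (partial_0 + partial_1) % PRIME
print(total)  # 2230000: the joint total, learned without exposing either input
```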

The Path Forward

For technology leaders navigating this landscape, the message is clear: the winners of the AI race may partly be decided by who moves fastest, but it is those who build the most trustworthy and secure foundations who will endure. As Teresa pointed out, successful AI implementation requires treating data as a product, with all the quality controls, supply chain considerations, and security measures that implies.

Looking ahead, I believe we’re entering a new era of AI adoption where security and scalability must be considered from day one. The companies that thrive will be those that can balance innovation with protection, speed with security, and ambition with responsibility.

Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.

Join me for the next episode of AI Confidential where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.

What challenges are you facing in scaling AI while maintaining data security? I’d love to hear your thoughts in the comments below.

AI Confidential Podcast: Building Trust in AI with Mark Papermaster (AMD) & Mark Russinovich (Azure)

Visit AIConfidential.com for the podcast and newsletter.

In a recent discussion between technology leaders Mark Papermaster (CTO of AMD) and Mark Russinovich (CTO and Deputy CISO of Microsoft Azure), the focus was on the transformative potential of confidential computing in reshaping data security practices across the technology industry. Against a backdrop of escalating concerns about data privacy and cybersecurity threats, the conversation covered security and trust, confidential computing, data control, and collaboration. These themes underscored the critical importance of safeguarding customer data in cloud environments through innovations such as secure enclaves and hardware root-of-trust mechanisms. Confidential computing, which keeps data protected from unauthorized parties even while it is being processed, emerged as a pivotal tool for strengthening data security amid rapid advances in AI.

The dialogue also highlighted recent developments, including the collaboration between AMD and Microsoft to streamline confidential computing adoption, Microsoft's ambitious goal of transitioning to a confidential cloud by 2025, and the introduction of Azure Confidential Ledger as an example of industry efforts to bolster supply chain security. Looking ahead, both leaders expect continued advances in confidential computing, with an emphasis on extending it to edge devices and establishing robust integrity measures across computing supply chains. As companies navigate ethical questions around data control and privacy in AI, along with the regulatory challenges of adopting secure computing at scale, one thing is clear: fostering trust through stronger security will be paramount in shaping the future of technology innovation.