
There are moments in technology that stay with you. I remember sitting at my first computer, writing my first lines of code. The feeling wasn’t explosive excitement – it was deeper than that. It was the quiet realization that I was learning to speak a new language, one that could create something from nothing.
Later, when I first connected to the internet, that same feeling returned. The world suddenly felt both larger and more accessible. These weren’t just technological advances – they were transformative shifts in how we interact with information and each other.
Today, working on confidential computing for AI agents at Opaque, I recognize that same profound sense of possibility.
The Mathematics of Trust
The parallels to those early computing days keep surfacing in my mind. Just as the early internet needed protocols and security standards to become the foundation of modern business, AI systems need robust security guarantees to reach their potential. The math makes this necessity clear: with each additional AI agent in a system, the probability of data exposure or model leakage compounds. At just a 1% risk per agent, a network of 1,000 agents makes at least one breach a near-certainty.
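To make the arithmetic concrete, here is a minimal sketch of the compounding effect, using the illustrative 1%-per-agent figure from above. If each agent independently carries risk p, the chance that at least one of n agents is exposed is 1 − (1 − p)^n:

```python
def breach_probability(p: float, n: int) -> float:
    """Probability that at least one of n independent agents,
    each with per-agent exposure risk p, leaks data."""
    return 1 - (1 - p) ** n

# Illustrative: risk compounds quickly as the agent network grows.
for n in (10, 100, 1000):
    print(f"{n:>5} agents at 1% each -> {breach_probability(0.01, n):.4%}")
```

At n = 1,000 the result exceeds 99.99% — the "near-certainty" the text describes. The independence assumption is a simplification, but correlated failures generally make the picture worse, not better.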
This isn’t abstract theory – it’s the reality our customers face as they scale their AI operations. It reminds me of the early days of networking, when each new connection both expanded possibilities and introduced new vulnerabilities.
Learning from Our Customers
Working with organizations like ServiceNow, Encore Capital, and the European Union has been particularly illuminating. The challenges echo those fundamental questions from the early days of computing: How do we maintain control as systems become more complex? How do we preserve privacy while enabling collaboration?
When our team demonstrates how confidential computing can solve these challenges, I see the same recognition I felt in those early coding days – that moment when complexity transforms into clarity. It’s not about the technology itself, but about what it enables.
Why This Matters Now
The emergence of AI agents reminds me of the early web. We’re at a similar inflection point, where the technology’s potential is clear but its governance structures are still emerging. At Opaque, we’re building something akin to the security protocols that made e-commerce possible – fundamental guarantees that allow organizations to trust and scale AI systems.
Consider how SSL certificates transformed online commerce. Our work with confidential AI is similar, creating trusted environments where AI agents can process sensitive data while maintaining verifiable security guarantees. It’s about building trust into the foundation of AI systems.
The Path Forward
The technical challenges we’re solving are complex, but the goal is simple: enable organizations to use AI with the same confidence they now have in web technologies. Through confidential computing, we create secure enclaves where AI agents can collaborate while maintaining strict data privacy – think of it as end-to-end encryption for AI operations.
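As a loose sketch of the "verify before trust" idea behind secure enclaves — not Opaque's actual implementation, and with every name below purely illustrative — a client releases sensitive data only after the enclave's reported code measurement matches a known-good value:

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical known-good measurement of an approved agent build.
# Real attestation (e.g. in hardware TEEs) uses hardware-signed quotes;
# this toy version captures only the verify-then-release step.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-agent-build-v1").hexdigest()

def attest(reported_measurement: str) -> bool:
    # Constant-time comparison, as for any security-sensitive check.
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

def release_data(measurement: str, payload: bytes) -> Optional[bytes]:
    """Hand data to the enclave only if attestation succeeds."""
    return payload if attest(measurement) else None

good = hashlib.sha256(b"approved-agent-build-v1").hexdigest()
bad = hashlib.sha256(b"tampered-build").hexdigest()
print(release_data(good, b"customer records"))  # data is released
print(release_data(bad, b"customer records"))   # None: refuse to send
```

The point of the sketch is the ordering: no sensitive data moves until the environment has proven what code it is running — the same guarantee, writ small, that makes enclave-based AI collaboration trustworthy.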
Our work with ServiceNow (and other companies) demonstrates this potential. As their Chief Digital Information Officer Kellie Romack noted, this technology enables them to “put AI to work for people and deliver great experiences to both customers and employees.” That’s what drives me – seeing how our work translates into real-world impact.
Looking Ahead
Those early experiences with coding and the internet shaped my understanding of technology’s potential. Now, working on AI security, I feel that same sense of standing at the beginning of something transformative. We’re not just building security tools – we’re creating the foundation for trustworthy AI at scale.
The challenges ahead are significant, but they’re the kind that energize rather than discourage. They remind me of learning to code – each problem solved opens up new possibilities. If you’re working on scaling AI in your organization, I’d value hearing about your experiences and challenges. The best solutions often come from understanding the real problems people face.
This journey feels familiar yet new. Like those first lines of code or that first internet connection, we’re building something that will fundamentally change how we work with technology. And that’s worth getting excited about.
Further Reading
For those interested in diving deeper into the world of AI agents and confidential computing, here are some resources:
- Constitutional AI: Building More Effective Agents – Anthropic's foundational research on developing reliable AI agents. Their work on making agents more controllable and aligned with human values directly influences how we think about secure AI deployment.
- Microsoft AutoGen: Society of Mind – A fascinating technical deep-dive into multi-agent systems. This practical implementation shows how multiple AI agents can collaborate to solve complex problems – exactly the kind of interactions we need to secure.
- ServiceNow's Journey with Confidential Computing – See how one of tech's largest companies is implementing these concepts in production. ServiceNow's experience offers valuable insights into scaling AI while maintaining security and compliance.
- Microsoft AutoGen Documentation – The technical documentation that underpins practical multi-agent implementations. Essential reading for understanding how agent-to-agent communication works in practice.

