Building the Internet of Agents: A Trust Layer for the Next Web

Insights from Vijoy Pandey, Cisco Outshift, and the Confidential Summit

“A human can’t do much damage in an hour.
An agent acting like a human—at machine speed—can do a lot.”
– Vijoy Pandey, SVP & GM, Cisco

We’re entering the era of agentic AI: networks of autonomous, collaborative agents that behave like humans but act at machine speed and scale. They build, decide, communicate, and self-replicate. But there’s one thing they can’t yet do—earn our trust.

At the Confidential Summit two weeks ago in San Francisco, that challenge took center stage. Executives and builders from NVIDIA, Microsoft Azure, Google Cloud, AWS, Intel, Arm, AMD, ServiceNow, LangChain, Anthropic, DeepMind, and more came together to ask a hard question:

Can we build an Internet of Agents that is open, interoperable—and trusted?

The answer is yes. And many attendees, including OPAQUE, came prepared with reference architectures.

In this episode of AI Confidential, we sat down with Vijoy Pandey, who leads Cisco’s internal incubator Outshift and the industry initiative AGNTCY. Along with co-host Mark Hinkle, we explored why this problem can’t be solved with policy patches or paper governance.

🧠 From Deterministic APIs to Probabilistic Agents

Today’s internet runs on deterministic computing—you know what API you’re hitting and what result to expect. Agents break that model.

Agentic systems introduce probabilistic logic, dynamic behavior, and autonomous decision-making. One input can lead to many outcomes. That’s powerful—but also dangerous.
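To make the contrast concrete, here’s a toy sketch in plain Python (no real LLM involved; the random choice merely stands in for sampling a model at temperature > 0): the deterministic API returns the same answer for the same input every time, while the agent can return a different action on every call.

```python
import random

# Deterministic API: the same request always yields the same response.
PRICES = {"SKU-123": 19.99, "SKU-456": 4.50}

def lookup_price(sku: str) -> float:
    return PRICES[sku]

# Probabilistic "agent": a toy stand-in for an LLM sampled at temperature > 0.
# The same input can yield a different decision on every call.
ACTIONS = ["issue a refund", "escalate to a human", "ask for more details"]

def agent_decide(ticket: str) -> str:
    return random.choice(ACTIONS)

assert lookup_price("SKU-123") == lookup_price("SKU-123")  # always holds
print({agent_decide("customer complaint") for _ in range(10)})  # usually several distinct outcomes
```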

🔐 Why We Need a Trust Layer

As Vijoy put it: “We’ve built access control lists, compliance programs, and identity providers—for humans. None of those scale to agentic systems.”

Agents can impersonate employees, leak IP, or introduce bias—without ever breaking a rule on paper. That’s why verifiable trust is the new foundation.

At the Confidential Summit, dozens of companies showcased confidential AI stacks that create cryptographic guarantees at runtime—across data, identity, code, and communication.

🌐 Introducing the Internet of Agents

The future isn’t a single AI. It’s collaborative networks of agents, working across clouds, enterprises, and toolchains. Vijoy’s team at AGNTCY (agntcy.org) is building the open-source fabric for this new internet: discoverable, composable, verifiable agents that speak a shared language.

OPAQUE has joined this effort to help embed verifiable, hardware-enforced trust into the open stack. And others—from LangChain to Galileo, Cisco to CrewAI—are building multi-agent systems for real enterprise workflows.

🚀 Use Cases Are Here

This isn’t science fiction. ServiceNow is already using OPAQUE-powered confidential agents to accelerate sales operations. Cisco’s SRE teams have offloaded 30% of their infrastructure workflows to Jarvis, a composite agent framework with 20+ agents and 50+ tools.

These are just the beginning.

🧱 A Call to Architects

The trust layer of the Internet of Agents is being designed right now—at the protocol layer, at the hardware layer, and in the open. It will require open standards, decentralized identity, hardware attestation, and zero-trust workflows by default.

The risks are massive. The opportunity is bigger. But trust can’t be retrofitted. It has to be built in.

Listen to the full conversation with Vijoy Pandey –> Spotify | Apple Podcasts | YouTube

Find all our podcast episodes –> https://podcast.aiconfidential.com, and subscribe to our newsletter –> https://aiconfidential.com

Confidential Summit Wrap

We just wrapped the Confidential Summit in SF—and it was electric.
From NVIDIA, Arm, AMD, and Intel to Microsoft, Google, and Anthropic, the world’s leading builders came together to answer one critical question:

**How do we build a verifiable trust layer for AI and the Internet?**

🔐 Ion Stoica (SkyLab/Databricks) reminded us: as agentic systems scale linearly, risk compounds exponentially.

🧠 Jason Clinton (Anthropic) stunned with stats:
→ 65% of Anthropic’s code is written by Claude. By year’s end? 90–95%.
→ AI compute needs are growing 4x every 12 months.
→ “This is the year of the agent,” he said—soon we’ll look back on it like we do Gopher.

🛠️ Across the board, Big Tech brought reference architectures for Confidential AI:

→ Microsoft shared real-world Confidential AI infrastructure running in Azure
→ Meta detailed how WhatsApp uses Private Processing to secure messages
→ Google, Apple, and TikTok revealed their confidential compute strategies
→ OPAQUE launched a Confidential Agent stack built on NVIDIA NeMo + LangGraph with verifiable guarantees before, during, and after agent execution
→ AMD also had exciting new confidential product announcements

🎯 But here’s the real takeaway:
– This wasn’t a vendor expo. It was a community and ecosystem summit, a collaboration that culminated in a shared commitment.
– Over the next 12 months, leaders from Google, Microsoft, Anthropic, Accenture, AMD, Intel, NVIDIA, and others will collaborate to release a reference architecture for an open, interoperable Confidential AI stack. Think Confidential MCP with verifiable guarantees.

We’re united in building a trust layer for the agentic web. And it’s going to take an ecosystem and community. What we build now—with this ecosystem, this community—will shape how the world relates to technology for the next century. And more importantly, how we relate to each other, human to human.

Subscribe to AIConfidential.com to get the sessions, PPTs, videos, and podcast drops.

Thank you to everyone who joined us—on site, remote, or behind the scenes. Let’s keep building to ensure AI can be harnessed to advance human progress.

AI at the Edge: Governance, Trust, and the Data Exhaust Problem

What enterprises must learn—from history and from hackers—to survive the AI wave

“The first thing I tell my clients is: Are you accepting that you’re getting probabilistic answers? If the answer is no, then you cannot use AI for this.”
— John Willis, enterprise AI strategist

AI isn’t just code anymore. It’s decision-making infrastructure. And in a world where agents can operate at machine speed, acting autonomously across systems and clouds, we’re encountering new risks—and repeating old mistakes.

In this episode of AI Confidential, we’re joined by industry legend John Willis, who brings four decades of experience in operations, DevOps, and AI strategy. He’s the author of The Rebels of Reason, a historical journey through the untold stories of AI’s pioneers—and a stark warning to today’s enterprise leaders.

Here are the key takeaways from our conversation:

🔄 History Repeats Itself—Unless You Design for It

John’s central insight? Enterprise IT keeps making the same mistakes. Shadow IT, ungoverned infrastructure, and tool sprawl defined the early cloud era—and they’re back again in the age of GenAI. “We wake up from hibernation, look at what’s happening, and say: what did y’all do now?”

🤖 AI is Probabilistic—Do You Accept That?

Too many leaders expect deterministic behavior from fundamentally probabilistic systems. “If you’re building a high-consequence application, and you’re not accepting that LLMs give probabilistic answers, you’re setting yourself up to fail,” John warns.

This demands new tooling, new culture, and new operational rigor—including AI evaluation pipelines, attestation mechanisms, and AI-specific gateways.

📉 The Data Exhaust is Dangerous

Data isn’t just an input—it’s an output. And that data exhaust can now be weaponized. Whether it’s customer interactions, supply chain patterns, or software development workflows, LLMs are remarkably good at inferring proprietary IP from metadata alone.

“Your cloud provider—or their contractor—could rebuild your product from the data exhaust you’re streaming through their APIs,” John notes. If you’re not using attested, verifiable systems to constrain where and how your data flows, you’re building your own future competitor.

🛡️ Governance, Attestation, and Confidential AI

Confidential computing may sound like hardware tech, but its real value lies in guarantees: provable, cryptographic enforcement of data privacy and policy at runtime.

OPAQUE’s confidential AI fabric is one example—enabling encrypted data pipelines, agentic policy enforcement, and hardware-attested audit trails that align with enterprise governance requirements. “I didn’t care about the hardware,” John admits. “But once I saw the guarantees you get, I was all in.”

📚 Why the History of AI Still Matters

John’s latest book, The Rebels of Reason, brings to life the hidden history of AI—spotlighting unsung pioneers like Fei-Fei Li and Grace Hopper. “Without ImageNet, we don’t get AlexNet. Without Hopper’s compiler, we don’t get natural language programming,” he explains.

Understanding AI’s history isn’t nostalgia—it’s necessary context for navigating where we’re going next. Especially as we transition into agentic systems with layered, distributed, and dynamic behavior.


If you’re an enterprise CIO, CISO, or builder, this episode is your field guide to what’s coming—and how to avoid becoming the next cautionary tale.

Listen to the full episode here: Spotify | Apple Podcasts | YouTube

Find all our podcast episodes –> https://podcast.aiconfidential.com, and subscribe to our newsletter –> https://aiconfidential.com

Securing the AI Renaissance: Reflections from the Engine Room

There are moments in technology that stay with you. I remember sitting at my first computer, writing my first lines of code. The feeling wasn’t explosive excitement – it was deeper than that. It was the quiet realization that I was learning to speak a new language, one that could create something from nothing.

Later, when I first connected to the internet, that same feeling returned. The world suddenly felt both larger and more accessible. These weren’t just technological advances – they were transformative shifts in how we interact with information and each other.

Today, working on confidential computing for AI agents at OPAQUE, I recognize that same profound sense of possibility.

The Mathematics of Trust

The parallels to those early computing days keep surfacing in my mind. Just as the early internet needed protocols and security standards to become the foundation of modern business, AI systems need robust security guarantees to reach their potential. The math makes this necessity clear: with each additional AI agent in a system, the probability of data exposure (or a model leaking) compounds. At just 1% risk per agent, a network of 1,000 agents approaches certainty of breach.
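(Under the simplifying assumption that each agent carries an independent risk p of exposure, the probability that at least one of n agents leaks is 1 − (1 − p)^n. With p = 1% and n = 1,000, that’s 1 − 0.99^1000 ≈ 99.996%.)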

This isn’t abstract theory – it’s the reality our customers face as they scale their AI operations. It reminds me of the early days of networking, when each new connection both expanded possibilities and introduced new vulnerabilities.

Learning from Our Customers

Working with organizations like ServiceNow, Encore Capital, the European Union, and others has been particularly illuminating. The challenges echo those fundamental questions from the early days of computing: How do we maintain control as systems become more complex? How do we preserve privacy while enabling collaboration?

When our team demonstrates how confidential computing can solve these challenges, I see the same recognition I felt in those early coding days – that moment when complexity transforms into clarity. It’s not about the technology itself, but about what it enables.

Why This Matters Now

The emergence of AI agents reminds me of the early web. We’re at a similar inflection point, where the technology’s potential is clear but its governance structures are still emerging. At OPAQUE, we’re building something akin to the security protocols that made e-commerce possible – fundamental guarantees that allow organizations to trust and scale AI systems.

Consider how SSL certificates transformed online commerce. Our work with confidential AI is similar, creating trusted environments where AI agents can process sensitive data while maintaining verifiable security guarantees. It’s about building trust into the foundation of AI systems.

The Path Forward

The technical challenges we’re solving are complex, but the goal is simple: enable organizations to use AI with the same confidence they now have in web technologies. Through confidential computing, we create secure enclaves where AI agents can collaborate while maintaining strict data privacy – think of it as end-to-end encryption for AI operations.
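For readers who want a feel for the mechanics, here is a minimal sketch of the attest-before-trust pattern at the heart of confidential computing. Every name in it (AttestationReport, get_attestation_report, and so on) is illustrative rather than OPAQUE’s actual API; real stacks ride on vendor-specific attestation flows from Intel, AMD, or NVIDIA.

```python
from dataclasses import dataclass

# Illustrative sketch only: real enclaves expose vendor-specific attestation APIs.

@dataclass
class AttestationReport:
    measurement: str  # cryptographic hash of the code loaded in the enclave
    signature: bytes  # signed by a key rooted in the hardware vendor

def verify_report(report: AttestationReport, expected_measurement: str) -> bool:
    # Step 1 (omitted here): check the signature chains to a trusted hardware root.
    # Step 2: check the enclave is running exactly the code we expect, nothing else.
    return report.measurement == expected_measurement

def send_sensitive_data(enclave, payload: bytes, expected_measurement: str) -> None:
    # Never release data until the enclave proves what it is running.
    report = enclave.get_attestation_report()
    if not verify_report(report, expected_measurement):
        raise RuntimeError("attestation failed; refusing to release data")
    enclave.send_encrypted(payload)  # session keys bound to the attested identity
```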

Our work with ServiceNow (and other companies) demonstrates this potential. As their Chief Digital Information Officer Kellie Romack noted, this technology enables them to “put AI to work for people and deliver great experiences to both customers and employees.” That’s what drives me – seeing how our work translates into real-world impact.

Looking Ahead

Those early experiences with coding and the internet shaped my understanding of technology’s potential. Now, working on AI security, I feel that same sense of standing at the beginning of something transformative. We’re not just building security tools – we’re creating the foundation for trustworthy AI at scale.

The challenges ahead are significant, but they’re the kind that energize rather than discourage. They remind me of learning to code – each problem solved opens up new possibilities. If you’re working on scaling AI in your organization, I’d value hearing about your experiences and challenges. The best solutions often come from understanding the real problems people face.

This journey feels familiar yet new. Like those first lines of code or that first internet connection, we’re building something that will fundamentally change how we work with technology. And that’s worth getting excited about.


Further Reading

For those interested in diving deeper into the world of AI agents and confidential computing, here are some resources:

  • Constitutional AI: Building More Effective Agents
    Anthropic’s foundational research on developing reliable AI agents. Their work on making agents more controllable and aligned with human values directly influences how we think about secure AI deployment.
  • Microsoft AutoGen: Society of Mind
    A fascinating technical deep-dive into multi-agent systems. This practical implementation shows how multiple AI agents can collaborate to solve complex problems – exactly the kind of interactions we need to secure.
  • ServiceNow’s Journey with Confidential Computing
    See how one of tech’s largest companies is implementing these concepts in production. ServiceNow’s experience offers valuable insights into scaling AI while maintaining security and compliance.
  • Microsoft AutoGen Documentation
    The technical documentation that underpins practical multi-agent implementations. Essential reading for understanding how agent-to-agent communication works in practice.

The Mathematical Case for Trusted AI: Season Finale with Anthropic’s CISO

In the season finale of AI Confidential, I had the privilege of hosting Jason Clinton, Chief Information Security Officer at Anthropic, for a discussion that arrives at a pivotal moment in AI’s evolution—where questions of trust and verification have become existential to the industry’s future. Watch the full episode on YouTube →

The Case for Confidential Computing

Jason made a compelling case for why confidential computing isn’t just a security feature—it’s fundamentally essential to AI’s future. His strategic vision aligns with what we’ve heard from other tech luminaries on the show, including Microsoft Azure CTO Mark Russinovich and NVIDIA’s Daniel Rohrer: confidential computing is becoming the cornerstone of responsible AI development.

Why This Matters: The Math of Risk

Let me build on Jason’s insights with a mathematical reality check that underscores the urgency of this approach: Consider the probability of data exposure as AI systems multiply. Even with a seemingly small 1% risk of data exposure per AI agent, the math becomes alarming at scale:

  • With 10 inter-operating agents, the probability of at least one breach jumps to 9.6%
  • With 100 agents, it soars to 63%
  • At 1,000 agents? The probability approaches virtual certainty at 99.996%

This isn’t just theoretical—as organizations deploy AI agents across their infrastructure as “virtual employees,” these risks compound rapidly. The mathematical reality is unforgiving: without the guarantees that confidential computing provides, the risk becomes untenable at scale.
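You can check these figures with a few lines of Python; the only assumption is that each agent’s 1% exposure risk is independent of the others:

```python
# Probability that at least one of n independent agents exposes data,
# given a per-agent exposure risk p.
def breach_probability(n: int, p: float = 0.01) -> float:
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(f"{n:>5} agents -> {breach_probability(n):.3%}")

# Output:
#    10 agents -> 9.562%
#   100 agents -> 63.397%
#  1000 agents -> 99.996%
```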

Anthropic’s Vision for Trusted AI

What makes Jason’s insights particularly striking is Anthropic’s position at the forefront of AI development. His detailed analysis of why Anthropic has identified confidential computing as mission-critical to their future operations speaks volumes about where the industry is headed. As he explains, achieving verifiable trust through attested data pipelines and models isn’t just about security—it’s about enabling the next wave of AI innovation.

Beyond Security: Enabling Innovation

Throughout our conversation, Jason emphasized how confidential computing provides a secure sandbox environment for research teams to work with powerful models. This capability is crucial not just for protecting sensitive data, but for accelerating innovation while maintaining security and control.

The Industry Shift

While tech giants like Apple, Microsoft, and Google construct their infrastructure on confidential computing foundations, the technology is no longer the exclusive domain of industry leaders. As Jason pointed out, the rapid adoption of confidential computing, particularly in AI workloads, signals a fundamental shift in how the industry approaches security and trust.

Looking Ahead: The Rise of Agents

As our conversation with Jason turned to the future, we explored a fascinating yet sobering reality: AI agents are rapidly proliferating across enterprise environments, increasingly operating as “virtual employees” with access to company systems, data, and resources. These aren’t simple chatbots—they’re sophisticated agents capable of executing complex tasks, often with the same level of system access as human employees.

This transition raises critical questions about trust and verification. As Jason emphasized, when AI agents are granted company credentials and access to sensitive systems, how do we ensure their actions are verifiable and trustworthy? The challenge isn’t just about securing individual agents—it’s about maintaining visibility and control over an entire ecosystem of AI workers operating across your infrastructure.

This is where confidential computing becomes not just valuable but essential. It provides the cryptographic guarantees and attestation capabilities needed to verify that AI agents are operating as intended, within defined boundaries, and with proper security controls. As we move into 2025 and beyond, organizations that build these trust foundations now will be best positioned to safely harness the transformative power of AI agents at scale.

Read the full newsletter analysis →


Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.

Join us in 2025 for Season 2 of AI Confidential, where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.

As your organization scales its AI operations, how are you addressing the compounding risks of data exposure? Share your thoughts on implementing trusted AI at scale in the comments below.

Making AI Work: From Innovation to Implementation

In this illuminating episode of AI Confidential, I had the pleasure of hosting Will Grannis, CTO and VP at Google Cloud, for a deep dive into what it really takes to make AI work in complex enterprise environments. Watch the full episode on YouTube →

Beyond the AI Hype

One of Will’s most powerful insights resonated throughout our conversation: “AI isn’t a product—it’s a variety of methods and capabilities to supercharge apps, services and experiences.” This mindset shift is crucial because, as Will emphasizes, “AI needs scaffolding to yield value, a definitive use case/customer scenario to design well, and a clear, meaningful objective to evaluate performance.”

Real-World Impact

Our discussion brought this philosophy to life through compelling examples like Wendy’s implementation of AI in their ordering systems. What made this case particularly fascinating wasn’t just the technology, but how it was grounded in enterprise truth and proprietary knowledge. Will explained how combining Google AI capabilities with enterprise-specific data creates AI systems that deliver real value.

The Platform Engineering Imperative

A crucial theme emerged around what Will calls “platform engineering for AI.” As he puts it, this “will ultimately make the difference between being able to deploy confidently or being stranded in proofs of concept.” The focus here is comprehensive: security, reliability, efficiency, and building trust in the technology, people, and processes that accelerate adoption and returns.

Building Trust Through Control

We explored how Google Cloud’s Vertex AI platform addresses one of the biggest challenges in enterprise AI adoption: trust. The platform offers customizable controls that allow organizations to:

  • Filter and customize AI outputs for specific needs
  • Maintain data security and sovereignty
  • Ensure regulatory compliance
  • Enable rapid experimentation in safe environments

The Path to Production

What struck me most was Will’s pragmatic approach to AI implementation. Success isn’t just about having cutting-edge technology—it’s about:

  • Creating secure runtime operations
  • Implementing proper data segregation
  • Enabling rapid experimentation
  • Maintaining constant optimization
  • Building trust through transparency and control

Looking Ahead

The future of AI in enterprise settings isn’t about replacing existing systems wholesale—it’s about strategic enhancement and thoughtful integration. As Will shared, the most successful implementations come from organizations that approach AI as a capability to be carefully woven into their existing operations, not as a magic solution to be dropped in.


Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.

Join me for the next episode of AI Confidential where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.

As organizations build out their AI infrastructure, how are you ensuring the security and privacy of your sensitive data throughout the AI pipeline? Share your approach in the comments below.