Confidential Summit Wrap

We just wrapped the Confidential Summit in SF—and it was electric.
From NVIDIA, Arm, AMD, and Intel to Microsoft, Google, and Anthropic, the world’s leading builders came together to answer one critical question:

**How do we build a verifiable trust layer for AI and the Internet?**

🔐 Ion Stoica (SkyLab/Databricks) reminded us: as agentic systems scale linearly, risk compounds exponentially.

🧠 Jason Clinton (Anthropic) stunned with stats:
→ 65% of Anthropic’s code is written by Claude. By year’s end? 90–95%.
→ AI compute needs are growing 4x every 12 months.
→ “This is the year of the agent,” he said—soon we’ll look back on it like we do Gopher.

🛠️ Across the board, Big Tech brought reference architectures for Confidential AI:

→ Microsoft shared real-world Confidential AI infrastructure running in Azure
→ Meta detailed how WhatsApp uses Private Processing to secure messages
→ Google, Apple, and TikTok revealed their confidential compute strategies
→ OPAQUE launched a Confidential Agent stack built on NVIDIA NeMo + LangGraph, with verifiable guarantees before, during, and after agent execution
→ AMD announced exciting new confidential computing products

🎯 But here’s the real takeaway:
– This wasn’t a vendor expo. It was an ecosystem and community summit, and the collaboration culminated in a shared commitment.
– Over the next 12 months, leaders from Google, Microsoft, Anthropic, Accenture, AMD, Intel, NVIDIA, and others will collaborate to release a reference architecture for an open, interoperable Confidential AI stack. Think Confidential MCP with verifiable guarantees.

We’re united in building a trust layer for the agentic web, and it’s going to take an entire ecosystem and community. What we build now, together, will shape how the world relates to technology for the next century. And more importantly, how we relate to each other, human to human.

Subscribe at AIConfidential.com to get the sessions, slide decks, videos, and podcast drops.

Thank you to everyone who joined us—on site, remote, or behind the scenes. Let’s keep building to ensure AI can be harnessed to advance human progress.

AI at the Edge: Governance, Trust, and the Data Exhaust Problem

What enterprises must learn—from history and from hackers—to survive the AI wave

“The first thing I tell my clients is: Are you accepting that you’re getting probabilistic answers? If the answer is no, then you cannot use AI for this.”
— John Willis, enterprise AI strategist

AI isn’t just code anymore. It’s decision-making infrastructure. And in a world where agents can operate at machine speed, acting autonomously across systems and clouds, we’re encountering new risks—and repeating old mistakes.

In this episode of AI Confidential, we’re joined by industry legend John Willis, who brings four decades of experience in operations, DevOps, and AI strategy. He’s the author of *The Rebels of Reason*, a historical journey through the untold stories of AI’s pioneers—and a stark warning to today’s enterprise leaders.

Here are the key takeaways from our conversation:

🔄 History Repeats Itself—Unless You Design for It

John’s central insight? Enterprise IT keeps making the same mistakes. Shadow IT, ungoverned infrastructure, and tool sprawl defined the early cloud era—and they’re back again in the age of GenAI. “We wake up from hibernation, look at what’s happening, and say: what did y’all do now?”

🤖 AI is Probabilistic—Do You Accept That?

Too many leaders expect deterministic behavior from fundamentally probabilistic systems. “If you’re building a high-consequence application, and you’re not accepting that LLMs give probabilistic answers, you’re setting yourself up to fail,” John warns.

This demands new tooling, new culture, and new operational rigor—including AI evaluation pipelines, attestation mechanisms, and AI-specific gateways.
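
To make that concrete, here’s a minimal sketch of an evaluation gate that treats model output as a sample from a distribution rather than a single deterministic answer. The `call_llm` stub is a hypothetical stand-in for whatever model client you actually use, and the agreement threshold is an illustrative assumption, not a recommendation.

```python
# A minimal evaluation gate: sample the model several times and require a
# minimum agreement rate before trusting the answer for a given use case.
import collections
import random


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; swap in your client."""
    return random.choice(["42", "42", "42", "41"])  # simulated nondeterminism


def eval_gate(prompt: str, expected: str, samples: int = 20,
              min_agreement: float = 0.95) -> bool:
    """Treat outputs as samples from a distribution and gate on agreement."""
    answers = collections.Counter(call_llm(prompt) for _ in range(samples))
    agreement = answers[expected] / samples
    print(f"agreement={agreement:.2f} distribution={dict(answers)}")
    return agreement >= min_agreement


if __name__ == "__main__":
    ok = eval_gate("What is 6 * 7? Answer with a number only.", expected="42")
    print("ship it" if ok else "block: too unstable for this use case")
```

The gate is the point: if measured agreement falls below the threshold your use case can tolerate, you block the deployment rather than hoping a probabilistic system behaves deterministically.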

📉 The Data Exhaust is Dangerous

Data isn’t just an input—it’s an output. And that data exhaust can now be weaponized. Whether it’s customer interactions, supply chain patterns, or software development workflows, LLMs are remarkably good at inferring proprietary IP from metadata alone.

“Your cloud provider—or their contractor—could rebuild your product from the data exhaust you’re streaming through their APIs,” John notes. If you’re not using attested, verifiable systems to constrain where and how your data flows, you’re building your own future competitor.
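
One way to act on that warning, sketched below under stated assumptions: an egress gate that lets data leave only toward approved destinations and strips metadata fields you never intended to share. The hostnames and field names are hypothetical placeholders, not a real policy.

```python
# A minimal egress gate: data leaves only toward allowlisted hosts, and
# workflow-revealing metadata fields are stripped first. All names here
# are illustrative placeholders.
from urllib.parse import urlparse

APPROVED_HOSTS = {"api.attested-provider.example"}  # hypothetical
BLOCKED_METADATA = {"internal_ticket_id", "repo_path", "build_pipeline"}


def scrub(record: dict) -> dict:
    """Drop metadata fields that leak proprietary workflow structure."""
    return {k: v for k, v in record.items() if k not in BLOCKED_METADATA}


def egress(url: str, record: dict) -> dict:
    """Refuse unknown destinations; return the scrubbed payload otherwise."""
    host = urlparse(url).hostname
    if host not in APPROVED_HOSTS:
        raise PermissionError(f"egress to {host} is not allowlisted")
    return scrub(record)  # hand this to your HTTP client


if __name__ == "__main__":
    print(egress("https://api.attested-provider.example/v1/chat",
                 {"prompt": "summarize Q3", "repo_path": "/src/product-x"}))
```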

🛡️ Governance, Attestation, and Confidential AI

Confidential computing may sound like hardware tech, but its real value lies in guarantees: provable, cryptographic enforcement of data privacy and policy at runtime.

OPAQUE’s confidential AI fabric is one example—enabling encrypted data pipelines, agentic policy enforcement, and hardware-attested audit trails that align with enterprise governance requirements. “I didn’t care about the hardware,” John admits. “But once I saw the guarantees you get, I was all in.”
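
As a conceptual illustration of those guarantees, here’s a minimal “verify before you release” sketch: data is handed over only after a (mock) attestation report checks out. Real deployments rely on TEE vendor SDKs and certificate chains; everything here (the `Quote` structure, the HMAC “signature”, `EXPECTED_MEASUREMENT`) is a simplified stand-in for illustration.

```python
# A conceptual "verify before you release" check: data is sent only if a
# (mock) attestation report is signed by a trusted key AND its code
# measurement matches the build we approved.
import hashlib
import hmac
from dataclasses import dataclass

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-agent-build-v1").hexdigest()
TRUSTED_KEY = b"stand-in for the hardware vendor's signing key"


@dataclass
class Quote:
    measurement: str  # hash of the code/config running in the enclave
    signature: str    # vendor signature over the measurement


def verify_quote(quote: Quote) -> bool:
    """Valid signature alone isn't enough; the measurement must match too."""
    expected_sig = hmac.new(TRUSTED_KEY, quote.measurement.encode(),
                            hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected_sig, quote.signature)
    return signature_ok and quote.measurement == EXPECTED_MEASUREMENT


def send_if_attested(quote: Quote, payload: bytes) -> None:
    """Release data only after the attestation check passes."""
    if not verify_quote(quote):
        raise PermissionError("attestation failed: refusing to release data")
    print(f"attested; releasing {len(payload)} bytes to the enclave")


if __name__ == "__main__":
    good = Quote(
        measurement=EXPECTED_MEASUREMENT,
        signature=hmac.new(TRUSTED_KEY, EXPECTED_MEASUREMENT.encode(),
                           hashlib.sha256).hexdigest(),
    )
    send_if_attested(good, b"confidential records")
```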

📚 Why the History of AI Still Matters

John’s latest book, *The Rebels of Reason*, brings to life the hidden history of AI—spotlighting unsung pioneers like Fei-Fei Li and Grace Hopper. “Without ImageNet, we don’t get AlexNet. Without Hopper’s compiler, we don’t get natural language programming,” he explains.

Understanding AI’s history isn’t nostalgia—it’s necessary context for navigating where we’re going next. Especially as we transition into agentic systems with layered, distributed, and dynamic behavior.


If you’re an enterprise CIO, CISO, or builder, this episode is your field guide to what’s coming—and how to avoid becoming the next cautionary tale.

Listen to the full episode here: Spotify | Apple Podcasts | YouTube

You can find all our podcast episodes at https://podcast.aiconfidential.com, and subscribe to our newsletter at https://aiconfidential.com.