Sat down with Kalpana Kumari from ITTech Pulse to talk about where enterprise AI security is actually heading. The conversation went deeper than I expected — we got into workload identity, the math-vs-promises distinction, and why compliance should be a byproduct of execution, not a gate. The throughline: in an agentic world, administrative controls don’t scale. Hardware-enforced verification does.
Full interview reposted below. Original article at ITTech Pulse.
ITTech Pulse Exclusive Interview with Aaron Fulkerson, Chief Executive Officer at OPAQUE
By Kalpana Kumari | April 21, 2026 | Originally published at ITTech Pulse
In an ITTech Pulse exclusive, OPAQUE CEO Aaron Fulkerson discusses how cryptographic verification and TEEs provide end-to-end security for enterprise AI agents.
Aaron, IT leaders worry about data leaks in agentic AI – how does OPAQUE’s hardware-attested platform keep data encrypted throughout Fortune 500 RAG workflows?
IT leaders are right to worry. Agents operate at machine speed, across systems and tools, and can be manipulated by adversarial inputs in ways humans can't be. OPAQUE prevents data leakage through a layered security model combining confidential computing, policy enforcement, and verifiable auditing. Every RAG query runs inside hardware-backed Trusted Execution Environments (TEEs). That means data stays encrypted even while it's being processed. Not just at rest. Not just in transit. In use. The TEE ensures that all policies (on data as well as agent behavior) are verifiably enforced.
Before execution, we cryptographically attest the environment. After execution, we produce tamper-proof audit logs proving what code ran, what data was accessed, and whether policies were honored. That’s the difference. Most platforms give you access controls. We give you verifiable proof that enforcement actually happened. In an agentic world, that distinction becomes existential.
Drawing from ServiceNow expertise, what gaps in traditional encryption does OPAQUE’s confidential computing fill for enterprise AI security challenges today?
Traditional encryption protects data at rest and in transit, but AI systems constantly process data, reason over it, generate outputs, and take actions. The moment data is “in use,” traditional encryption steps aside. That gap becomes enormous when you’re running agents across interconnected systems. When you scale to hundreds or thousands of agents, even small leak probabilities compound. At 1% failure probability per agent, 100 agents means a 63% chance of breach. At 1,000 agents, you’re effectively guaranteed exposure. You cannot manage that with policy documents and permissions alone. Confidential AI closes that gap.
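The compounding figures above can be checked directly. A minimal sketch, assuming each agent fails independently with probability p:

```python
# Probability that at least one of n independent agents leaks,
# given a per-agent failure probability p: 1 - (1 - p)**n
def breach_probability(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

print(round(breach_probability(0.01, 100), 3))   # 0.634 -- the "63%" figure
print(round(breach_probability(0.01, 1000), 6))  # 0.999957 -- effectively certain
```

The independence assumption is conservative in one sense (correlated failures can be worse), but it captures why per-agent controls stop being enough at fleet scale.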
At ServiceNow, I saw firsthand that adoption follows trust. If security is bolted on later, you get politics, delays, and stalled deployments. The organizations embedding verifiable guarantees into their AI architecture from day one are the ones actually reaching production. The technology changes, but the trust requirement doesn’t.
OPAQUE processes encrypted data directly—without decrypting it—using confidential computing. Computation happens inside TEEs, which keep data isolated from the rest of the system, only allow verified code to run, and tightly control access. Before any data is even processed, the platform proves its integrity through remote attestation. After execution, it generates hardware-signed audit logs that prove what ran, under which policies, and how data was handled.
After $24M Series B success, what compliance breakthroughs has OPAQUE achieved for Accenture-like clients using verifiable confidential AI agents?
Here’s the frustration nobody talks about. Compliance and infosec teams are correct to be concerned about AI on sensitive data. But that concern creates a maddening bottleneck for AI builders who just want to innovate and ship, and they’re being told to do so faster every quarter.
What OPAQUE changes is who does the security review. Hardware does the security review. Not the security team. When your workload runs inside a TEE with cryptographic policy enforcement, and the output is a hardware-signed audit trail proving exactly what happened, you’re not waiting for a manual security assessment. You’re delivering math to your auditor. Not promises.
We’re seeing customers accelerate deployments by 4-5x because compliance stops being a gate and becomes a byproduct. Think about a financial services company running AI agents across transaction data. Without verifiable guarantees, that deployment sits in a legal queue for months. With a cryptographic receipt proving data never left the TEE and policies were enforced at the hardware level, the CISO and General Counsel sign off because they have evidence. We’ve also seen inference accuracy jump from 36% to 98% because the customer could finally ground their AI system in its most sensitive data. That’s the shift from Plateau to Powerhouse.
How does OPAQUE integrate with orchestration frameworks like LangGraph to support confidential RAG workflows and enterprise-grade governance?
Most AI builders hear “encryption” and think “that’s an infosec problem, not my problem.” But here’s what OPAQUE actually creates: a workload identity.
Every layer (silicon, infrastructure, and workload graph) is hardware-attested and verified before each execution. Policies are encoded into that identity. If anything changes (code, config, or policy), the identity breaks and no data enters. Your policies are bound to the workload at runtime, enforced by hardware, and provable. No one sees the data. Not the cloud provider. Not your admins. And proof-of-trust receipts are produced as a byproduct of execution.
We built OPAQUE Studio on LangGraph because the industry is converging on open-source orchestration for multi-agent systems, and we think that’s the right direction. Something old moved up the stack; agent orchestration looks a lot like microservices orchestration from a decade ago. The primitives rhyme. What’s different is that these services can now reason, act autonomously, and access sensitive data in ways microservices never could. OPAQUE Studio lets developers wire up agents to sensitive data sources with the trust guarantees baked into the infrastructure. Compliance and infosec get out of your way because the hardware is doing their job for them.
How is OPAQUE thinking about long-term scalability and cryptographic resilience in enterprise AI systems?
Today, we’re removing the roadblocks that keep enterprises from shipping AI on their most sensitive data. That’s the immediate priority: helping organizations move from running AI on sanitized data to running it on the proprietary data that actually creates competitive advantage. With proof that nothing leaks.
The competitive advantage lives in the data that enterprises are afraid to touch. Our job is to make that fear unnecessary, not by telling them to trust us, but by giving them cryptographic proof so they can ship fast.
What does deployment typically look like for enterprises adopting OPAQUE, and how does the platform support ongoing privacy verification?
OPAQUE is deployed into your cloud environment within confidential computing–enabled infrastructure and requires no data migration or replication outside your environment. Teams can use OPAQUE’s Agent Studio or deploy their containerized AI workloads directly using OPAQUE’s Confidential Runtime and SDK.
We make privacy part of the execution itself rather than an add-on. Before runtime, OPAQUE verifies integrity and configuration to prevent misconfigured or unauthorized workloads from running. During execution, it enforces cryptographic policies, encrypts data in use, and isolates workloads so sensitive data, models, and business logic remain protected as agents act autonomously. After execution, it generates hardware-signed audit logs that prove what ran, under which policies, and how data was handled.
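The "after execution" step above, tamper-evident audit records, can be sketched as follows. This is an illustrative stand-in, not OPAQUE's implementation: HMAC with a key held only inside the TEE plays the role of the hardware signing key (real systems typically use asymmetric attestation keys):

```python
import hashlib, hmac, json

SIGNING_KEY = b"tee-held-key"  # hypothetical: lives in hardware, never exported

def signed_audit_record(event: dict) -> dict:
    # Canonicalize the event, then sign it so any later edit is detectable.
    payload = json.dumps(event, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"event": event, "sig": sig}

def verify(record: dict) -> bool:
    payload = json.dumps(record["event"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

rec = signed_audit_record({"workload": "rag-agent", "policy": "no_egress", "ok": True})
assert verify(rec)
rec["event"]["ok"] = False  # tampering after the fact breaks verification
assert not verify(rec)
```

The auditor checks signatures, not narratives, which is what makes the log evidence rather than assertion.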
How does OPAQUE approach scaling confidential AI systems while maintaining strong security guarantees?
No builder wants to think about encryption. They shouldn’t have to. That’s the whole point.
This is where the workload identity concept pays off. Every workload gets a hardware-signed identity encoding exactly which code is running and which policies are active. If anything changes (code, config, or policy), the identity breaks and no data enters. The builder doesn’t manage keys or write security code. The infrastructure handles it. They ship.
Think about what happens with administrative controls at scale. You add agents, permissions, and people who can grant permissions. Every new node is a new trust assumption. Eventually, somebody misconfigures something, and you’re back to processing on hope. With workload identity, the trust is in the hardware and the math, not in the org chart. It scales the same way at 10 agents as it does at 10,000. The workload either proves its identity, or it doesn’t run. There’s no grey area at scale.
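The "identity breaks if anything changes" property can be sketched as a digest over code, config, and policy. A minimal illustration, assuming a SHA-256 binding (not OPAQUE's actual scheme):

```python
import hashlib, json

def workload_identity(code_hash: str, config: dict, policy: dict) -> str:
    # Canonical JSON over all three inputs, so any change to any one
    # of them produces a different identity.
    material = json.dumps(
        {"code": code_hash, "config": config, "policy": policy},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(material).hexdigest()

baseline = workload_identity("abc123", {"region": "us"}, {"egress": "deny"})
# Flipping a single policy bit yields a different identity: no grey area.
drifted = workload_identity("abc123", {"region": "us"}, {"egress": "allow"})
assert baseline != drifted
```

Because the check is a digest comparison, it costs the same at 10 agents as at 10,000, which is the scaling argument in the answer above.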
What practical advice would you give ITTech Pulse readers adopting agentic AI in 2026 to ensure compliant, breach-proof implementations?
Three things need to happen to adopt agentic AI:
- Build cryptographic policy enforcement into the architecture from day one.
- Demand immutable audit trails of what every agent did, when, and under what constraints.
- Treat privacy and governance as accelerators, not brakes, and stop thinking about AI security the way you think about application security.
The organizations that embed verification into their AI stack will move faster than those that treat it as a gate. When trust is built into the infrastructure, security and innovation stop competing.
About Aaron Fulkerson
Aaron Fulkerson is CEO of OPAQUE, the Confidential AI company. He previously founded MindTouch, an enterprise knowledge platform powering over a billion visitors monthly, and served at ServiceNow, where he helped build one of the company’s fastest-growing products. His career spans two decades of building enterprise platforms at the intersection of trust and technology.
About OPAQUE
OPAQUE is the Confidential AI company. Born from UC Berkeley’s RISELab and founded by Ion Stoica and Raluca Ada Popa, OPAQUE enables enterprises to safely run models, agents, and workflows on their most sensitive data. Its Confidential AI platform delivers verifiable runtime governance — cryptographic proof that data, models, and agent actions remain private and policy-compliant throughout every AI workflow. Customers and partners include ServiceNow, Anthropic, Accenture, and Encore Capital.