It’s Just Weather

I have two teenage children. For those of you who also have teenagers, you understand how easy it is to lose your ***t with them. And how their developing brains and surging hormones often lead them to lose control of their emotions as well.

I read somewhere that emotions are like weather patterns: they affect the environment, but they always pass. Recently I learned a lovely mindfulness framework that helps me manage my emotions and maintain agency (so I don’t lose my ***t) when I’m frustrated with my children, and I’m sharing it with my kids too. It’s called R.A.I.N., and it’s a handy tool for keeping cool.

I’ve been fond of Buddhism since I was a child, when my dear friend Henry Kikunaga introduced me to it (we’re friends to this day, 44 years later). And I love science, so I’m going to cover this from both the Buddhist and the neuroscience angles. It never ceases to amaze me that ancient wisdom had this figured out before modern science.

The ancient Buddhist text Satipaṭṭhāna Sutta instructs practitioners to observe feeling tones (vedanā) as pleasant, unpleasant, or neutral — recognizing them without clinging or aversion. Modern neuroscience confirms this wisdom: emotions themselves aren’t problems. The problem is reactive behavior. Like I said, emotions are like weather patterns: they’re not the problem, they just are, and they always pass.

The Neuroscience

Most people either avoid emotions (equating openness with weakness) or get swept away by them (unable to process without destabilization). Both lead to what neuroscience calls “bottom-up” responses from the limbic system: Fight, Flight, Freeze, or Fawn.

Here’s why this framework matters. It helps you shift from:

  • Emotions → Behavior = Reflexive survival mode (limbic system), which means I’m probably raising my voice at my teenagers about dishes or laundry.

To a more effective and fulfilling approach:

  • Emotions → Processing → Behavior = Considered action (integrated brain)

Pausing creates space for the prefrontal cortex to interpret emotional signals and direct behavior that serves your goals. Research by Lieberman et al. (2007) shows that simply labeling emotions reduces their neural intensity — what they call “affect labeling.”

Now, onto the simple algorithm that helps achieve this.

The R.A.I.N. Framework

Buddhist teacher Tara Brach adapted this acronym from traditional mindfulness practice, turning ancient wisdom into a repeatable method for meeting emotions without blind reaction. She also wrote “Radical Acceptance: Embracing Your Life With the Heart of a Buddha,” which is a great book.

R — Root Yourself (Establish Stability)

Before engaging with emotion, stabilize your nervous system:

  • Posture: Sit or stand with awareness of physical grounding (I like to imagine a Giant Redwood — works for me)
  • Anchor: Brief connection to core values or purpose
  • Function: Activates parasympathetic nervous system, engaging prefrontal cortex

This mirrors the Buddhist quality of equanimity (upekkhā) — the stable mind that meets change without being overwhelmed.

A — Acknowledge the Weather (Observe Without Identity)

Name the emotion without judgment or story:

  • Say “frustration is here,” not “I am frustrated”: observe the emotion instead of becoming it
  • Avoid narrative (“because they…” or “this always…”)
  • See it as temporary, like weather patterns

The Buddha’s instruction in mindfulness practice: observe experiences without making them personal identity. Modern psychology confirms this “decentering” reduces emotional reactivity.

I — Investigate with Curiosity (Extract Information)

Explore what the emotion is telling you.

  • Ask: “What’s the signal here?”
    • Fear → What risk needs addressing?
    • Anger → What boundary’s been crossed?
    • Sadness → What loss needs honoring?
  • Treat it as intel, not identity: “I’m feeling X” rather than “I am X.” Stay curious rather than critical.

This aligns with Buddhist insight practice (vipassanā) — using investigation (dhamma-vicaya) to see clearly rather than react automatically.

N — Nourish and Navigate (Self-Compassion + Values-Based Action)

Two components backed by research:

Nourish: Self-compassion activates the parasympathetic nervous system. Kristin Neff’s research shows self-compassionate responses reduce cortisol and increase emotional resilience.

Navigate: Choose action aligned with values rather than emotional impulse. Ask: “What serves my longer-term goals?” Then select one concrete step forward.

Respond with kindness toward yourself, then act in alignment with your values rather than reacting from fear, anger, etc.

  • Offer yourself a supportive gesture (a breath, a kind phrase, unclenching your jaw). Anchor back into a strong posture.
  • Ask: “What’s my next best action that serves my goals?” Pick one small move — send a message, set a boundary, take a walk — then do it.
  • Why: In both Buddhist compassion practice and modern self-compassion science (Kristin Neff), nurturing ourselves allows us to re-enter the world from a place of strength rather than depletion.

Speed Run

Ok, here’s the cheat code/speed run:

  1. Root (1-2 seconds): Physical grounding + deeper breath
  2. Acknowledge (2-3 seconds): Silent labeling without story
  3. Investigate (5-10 seconds): Quick scan for signal/data
  4. Navigate (5-10 seconds): Choose response aligned with goals
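For the fellow nerds: since I keep calling this an algorithm, here’s a tongue-in-cheek Python sketch of the speed run. Every name in it is mine and purely illustrative; the real runtime is your nervous system.

```python
# A tongue-in-cheek sketch of R.A.I.N. as actual code. Every name below is
# mine and purely illustrative -- the real runtime is your nervous system.

def ground() -> None:
    """R -- Root: stabilize first (posture, breath, picture a Giant Redwood)."""
    pass  # in real life: one deeper breath, feel your feet

def navigate(question: str) -> str:
    """N -- Navigate: choose one concrete step that serves longer-term goals."""
    return f"Considered response to: {question}"

def rain(emotion: str) -> str:
    """Run one pass of R.A.I.N. on a named emotion."""
    ground()                                   # Root (1-2 seconds)
    label = f"{emotion} is here"               # Acknowledge, without "I am..."
    signals = {                                # Investigate: emotion -> signal
        "fear": "What risk needs addressing?",
        "anger": "What boundary has been crossed?",
        "sadness": "What loss needs honoring?",
    }
    question = signals.get(emotion, "What's the signal here?")
    return f"[{label}] {navigate(question)}"   # Navigate, not react

print(rain("anger"))
```

The point of writing it out this way: the emotion is an input, not the controller. Behavior only happens after the processing steps run.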

The Math

I do love math… Traditional reactive patterns operate at millisecond speeds through the amygdala. This roughly 15-second process engages the prefrontal cortex, which responds about 200 milliseconds slower but with vastly superior decision-making capability.

Why This Works

Buddhist psychology and modern neuroscience converge: emotions provide valuable information, but emotional states shouldn’t determine behavior. The R.A.I.N. method creates what researchers call “cognitive reappraisal” — processing emotional information through higher-order thinking rather than automatic reaction.

Boom. I hope you get value from this. 🙂

Your Car Can’t Love You

Microsoft’s AI chief, Mustafa Suleyman, warns that “Seemingly Conscious AI” (SCAI) could arrive within the next 2–3 years without major breakthroughs. We’re already there.

https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming

There are countless cases of humans engaging AI for romance, therapy, and emotional connection.

Treating AI as if it were conscious—the illusion of empathy, memory, intent—can distort reality. Suleyman calls it “AI psychosis.”

These systems may appear aware, but they’re not, and they’re no more your friend, lover, or therapist than your car is. Treating your car like a lover or therapist would destabilize you, and as this ramps up with AI, we’ll see more tragic stories and social backlash.

I’m convinced this illusion partly explains the rebellion when GPT-5 replaced GPT-4o in ChatGPT. The shift wasn’t entirely about performance—it was about personality. Users said it felt like they’d “lost a friend.” That’s exactly the problem: we’re confusing a tool with a companion, and the emotional fallout is real.

AI is not your therapist. Using it to learn CBT, mindfulness, or frameworks for mental and physical health? That’s smart. But seeking comfort in a pseudo-relationship through talk therapy is a farce, and many users are doing exactly that. That’s not coping. That’s psychosis.

Obviously, I am not a mental health professional. Yes, I’ve seen the data on positive results for anxiety among study participants using AI. I don’t care. No good will come from seeking connection from a hammer.

From Flooded Basements to Billion-Dollar Valuation: Marco Palladino on Why Agents Are the New API Client

Mark Hinkle, my AI Confidential co-host, recorded this episode from a van parked on a mountain in Pennsylvania. It reminded me of hitchhiking from the Florida Keys to a remote Pennsylvania cabin when I was 18 years old. It involved a van, a self-proclaimed witch, and a Hell’s Angel. I share this story during the podcast, but this episode isn’t about questionable travel choices. It’s about Marco Palladino, co-founder and CTO of Kong.

Marco’s story is the American (and Silicon Valley founder’s) dream. Two Italian immigrants landed in San Francisco with nothing, built an API marketplace (Mashape), and ended up open-sourcing their internal platform. They then grew Kong into the leading independent API management company, now valued at roughly $2 billion. From flooded basement offices to owning the API gateway, service mesh, ingress controller, and now AI gateway markets. It’s a terrific episode.

The headline from our conversation: “Agents are the new API client.”

Marco explains:

  • APIs have powered modern software, from SOAP to REST to gRPC.
  • Now, autonomous agents are the primary consumers—not human developers.
  • These agents need governance, security, observability, and performance from the start.

We also covered:

  • Why smaller, specialized models often outperform massive general-purpose ones.
  • Why a trust layer is critical as agent use grows.
  • How unprotected “AI exhaust” can leak competitive secrets.

If you care about the future of AI infrastructure—or a great founder story—listen to the full episode.

🎧 podcast.AIconfidential.com
📺 YouTube
🎵 Spotify

Upside

Now that I’m 50, it’s all upside from here! I guess this is evidence of humans moving into the “you can’t scare me” phase of life. I hope this curve is true for me too, but it’s hard to imagine since I thought my 40s decade was the best one yet. #TBOY

I’m not sure if these curves apply to me, as they assume I had some kind of expectation for my midlife. Having lived in abject poverty, being in the first generation of college grads in my family, living through periods of intentional (and unintentional) homelessness (in my teens and early 20s), and exceeding all my expectations of professional and personal success, it’s hard to imagine my happiness/contentment increasing, but I’m open to it. Peak happiness, here I come. 🙂

When Your Integrity Crashes Into Your Ego: A Lesson in Revolutionary Positivity

I’m going to share something embarrassing because the lesson is worth the cringe.

Several years back, I was leading a business unit that was absolutely crushing it. It was growing sixteen times faster than any previous unit in company history. It improved Net Promoter Score (NPS) and helped other divisions hit their numbers by increasing their Net Retention Rates. The math was irrefutable.

Yet somehow, we became a political piñata. Some executives questioned the data. Peers cherry-picked our results to make them seem less impressive. People who didn’t know my technical background attacked my technical competence. They criticized my vision. At the same time, they tried to absorb parts of our product into their portfolios.

Here’s where it gets ugly: I got swept up in the negativity. Started talking shit in backchannels. Being snarky. Acting like the very people I was frustrated with. Then, through my own stupidity, one of those backchannel conversations went public. The worst part? My snark was aimed at someone who hadn’t even done anything wrong. I was venting sideways.

The hypocrisy hit me like a stink I couldn’t wash off. There I was, compromising my integrity while complaining about others’ lack of integrity.

I wasn’t maintaining my form. I was fracturing under pressure, becoming someone I didn’t recognize. I was already feeling terrible, and then my leader delayed a promotion I’d been promised. But it wasn’t losing the promotion that hurt. To be honest, that actually made me feel a little better since it was establishing a cultural boundary. It was the realization that I’d violated my core values, and my actions hurt others. That pain hit deeper than any career setback. Still does when I think about it.

During this transformation—while rebuilding my integrity—I found this Epictetus quote, which I would return to often:

“If you are ever tempted to look for outside approval, realize that you have compromised your integrity. If you need a witness, be your own.”

Most people think integrity means obeying a moral code. But the word comes from the Latin integer—whole, complete, untouched. It’s about maintaining your form under stress, much like we discuss structural integrity in engineering. Yes, it’s also about behaving morally as we know it today. However, there’s this lovely aspect of the word that is about authenticity and honoring your values. Being your whole self. I had allowed the disapproval and behaviors of others to compromise my integrity. To be clear, this is not an excuse for my behavior. That was entirely on me. But it helped me understand the absurdity of my behavior and get to the root cause.

Another helpful phrase came from my leader at that time. He said, “What people say about you behind your back is none of your business.” This guy has an incredible ability to elevate himself above the nonsense. I learned a lot from him.

I started coaching myself and my team on what I called “polite, positive, persistent pressure.” When someone questioned third-party data in a meeting—objective numbers from an independent source—we didn’t get defensive. We’d say, “Let’s dig into those numbers together.” When denigration reached our ears: “That’s none of my business.” Present solutions, not problems. Share credit, own failures. Show up prepared, stay relentlessly positive.

Here’s the thing about cynicism—it metastasizes like cancer. It’s toxic, and it justifies mediocrity and inaction. Sometimes you’re not in a position to effect change. You must not allow that to drag you into the morass of negativity. In those situations, remember: remaining positive in the face of overwhelming negativity isn’t weakness. It’s rebellion.

It’s punk rock in a polo shirt.

Growing up on a farm taught me this: When operating heavy machinery, you have three options. You can run it, you can trust the operator, or you can get the hell out of the way. Standing around complaining achieves nothing useful, but it could well get you maimed.

Business is the same. You can’t control the cynics. You can only model something better. And when that fails? Avoid their machinery. They’ll crash it on their own schedule, and you don’t want to be nearby when they do.

Our unit kept growing. The cynics kept talking. But here’s what I noticed: Positive teams ship. Cynical teams bitch.

I’m not naturally someone who seeks praise—it actually makes me uncomfortable. But I care deeply about the mission. When negativity threatens the mission, that’s when integrity demands we do something different.

And sometimes, the most revolutionary act is simply refusing either to join the mob or to fight it.

Choosing positivity isn’t about being soft. It’s about being effective. It’s about maintaining your structural integrity when others are fracturing. It’s about being your own witness when the gallery is full of critics.

In our current climate—whether in business, politics, or culture—cynicism feels like the smart play. But cynicism is just fear dressed up as intelligence.

The real punk move? Building something extraordinary while others are busy tearing things down.

Be your own witness. Maintain your integrity. And when the negativity comes—because it always does—remember that your polite, positive persistence isn’t just personal development.

It’s revolution.

Building the Internet of Agents: A Trust Layer for the Next Web

Insights from Vijoy Pandey, Cisco Outshift, and the Confidential Summit

“A human can’t do much damage in an hour.
An agent acting like a human—at machine speed—can do a lot.”
– Vijoy Pandey, SVP & GM, Cisco

We’re entering the era of agentic AI: networks of autonomous, collaborative agents that behave like humans but act at machine speed and scale. They build, decide, communicate, and self-replicate. But there’s one thing they can’t yet do—earn our trust.

At the Confidential Summit two weeks ago in San Francisco, that challenge took center stage. Executives and builders from NVIDIA, Microsoft Azure, Google Cloud, AWS, Intel, ARM, AMD, ServiceNow, LangChain, Anthropic, DeepMind, and more came together to ask a hard question:

Can we build an Internet of Agents that is open, interoperable—and trusted?

The answer is yes! And many came prepared with reference architectures, including OPAQUE.

In this episode of AI Confidential, we sat down with Vijoy Pandey, who leads Cisco’s internal incubator Outshift and the industry initiative Agency. Along with co-host Mark Hinkle, we explored why this problem can’t be solved with policy patches or paper governance.

🧠 From Deterministic APIs to Probabilistic Agents

Today’s internet runs on deterministic computing—you know what API you’re hitting and what result to expect. Agents break that model.

Agentic systems introduce probabilistic logic, dynamic behavior, and autonomous decision-making. One input can lead to many outcomes. That’s powerful—but also dangerous.
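To make the deterministic-versus-probabilistic contrast concrete, here’s a toy sketch of my own (nothing from Kong’s or Cisco’s actual stack): the API call has a fixed contract, while the stand-in agent samples its next action, so identical input can produce different outcomes.

```python
import random

# Deterministic API: identical input always yields an identical output.
def get_price(sku: str) -> float:
    prices = {"SKU-1": 9.99, "SKU-2": 19.99}
    return prices[sku]

# Toy "agent": stands in for an LLM-driven planner that samples its next
# action. Identical input can yield different outcomes on every run.
def agent_next_action(goal: str) -> str:
    actions = ["call_pricing_api", "ask_a_human", "retry_later"]
    return random.choice(actions)

# The deterministic contract holds...
assert get_price("SKU-1") == get_price("SKU-1")

# ...but the agent's behavior is a distribution, so governance, observability,
# and guardrails have to assume many possible outcomes per input.
outcomes = {agent_next_action("restock SKU-1") for _ in range(100)}
print(outcomes)
```

That last set will usually contain more than one action, which is exactly why the old one-request-one-response trust model breaks down.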

🔐 Why We Need a Trust Layer

As Vijoy put it: “We’ve built access control lists, compliance programs, and identity providers—for humans. None of those scale to agentic systems.”

Agents can impersonate employees, leak IP, or introduce bias—without ever breaking a rule on paper. That’s why verifiable trust is the new foundation.

At the Confidential Summit, dozens of companies showcased confidential AI stacks that create cryptographic guarantees at runtime—across data, identity, code, and communication.

🌐 Introducing the Internet of Agents

The future isn’t a single AI. It’s collaborative networks of agents, working across clouds, enterprises, and toolchains. Vijoy’s team at Agency (agency.org) is building the open-source fabric for this new internet: discoverable, composable, verifiable agents that speak a shared language.

OPAQUE has joined this effort to help embed verifiable, hardware-enforced trust into the open stack. And others—from LangChain to Galileo, Cisco to CrewAI—are building multi-agent systems for real enterprise workflows.

🚀 Use Cases Are Here

This isn’t science fiction. ServiceNow is already using OPAQUE-powered confidential agents to accelerate sales operations. Cisco’s SRE teams have offloaded 30% of their infrastructure workflows to Jarvis, a composite agent framework with 20+ agents and 50+ tools.

These are just the beginning.

🧱 A Call to Architects

The trust layer of the Internet of Agents is being designed right now—at the protocol layer, at the hardware layer, and in the open. It will require open standards, decentralized identity, hardware attestation, and zero-trust workflows by default.

The risks are massive. The opportunity is bigger. But trust can’t be retrofitted. It has to be built in.

Listen to the full conversation with Vijoy Pandey –> Spotify Apple Podcast YouTube

And you can find all our podcast episodes –> https://podcast.aiconfidential.com and you can subscribe to our newsletter –> https://aiconfidential.com

Confidential Summit Wrap

We just wrapped the Confidential Summit in SF—and it was electric.
From NVIDIA, Arm, AMD, and Intel to Microsoft, Google, and Anthropic, the world’s leading builders came together to answer one critical question:

How do we build a verifiable trust layer for AI and the Internet?

🔐 Ion Stoica (SkyLab/Databricks) reminded us: as agentic systems scale linearly, risk compounds exponentially.

🧠 Jason Clinton (Anthropic) stunned with stats:
→ 65% of Anthropic’s code is written by Claude. By year’s end? 90–95%.
→ AI compute needs are growing 4x every 12 months.
→ “This is the year of the agent,” he said—soon we’ll look back on it like we do Gopher.

🛠️ Across the board, Big Tech brought reference architectures for Confidential AI:

→ Microsoft shared real-world Confidential AI infrastructure running in Azure
→ Meta detailed how WhatsApp uses Private Processing to secure messages
→ Google, Apple, and TikTok revealed their confidential compute strategies
→ OPAQUE launched a Confidential Agent stack built on NVIDIA NeMo + LangGraph, with verifiable guarantees before, during, and after agent execution
→ AMD also had exciting new confidential product announcements.

🎯 But here’s the real takeaway:
– This wasn’t a vendor expo. It was a community and ecosystem summit, a collaboration that culminated in a shared commitment.
– Over the next 12 months, leaders from Google, Microsoft, Anthropic, Accenture, AMD, Intel, NVIDIA, and others will collaborate to release a reference architecture for an open, interoperable Confidential AI stack. Think Confidential MCP with verifiable guarantees.

We’re united in building a trust layer for the agentic web. And it’s going to take an ecosystem and community. What we build now—with this ecosystem, this community—will shape how the world relates to technology for the next century. And more importantly, how we relate to each other, human to human.

Subscribe to AIConfidential.com to get the sessions, PPTs, videos, and podcast drops.

Thank you to everyone who joined us—on site, remote, or behind the scenes. Let’s keep building to ensure AI can be harnessed to advance human progress.

AI at the Edge: Governance, Trust, and the Data Exhaust Problem

What enterprises must learn—from history and from hackers—to survive the AI wave

“The first thing I tell my clients is: Are you accepting that you’re getting probabilistic answers? If the answer is no, then you cannot use AI for this.”
— John Willis, enterprise AI strategist

AI isn’t just code anymore. It’s decision-making infrastructure. And in a world where agents can operate at machine speed, acting autonomously across systems and clouds, we’re encountering new risks—and repeating old mistakes.

In this episode of AI Confidential, we’re joined by industry legend John Willis, who brings four decades of experience in operations, devops, and AI strategy. He’s the author of The Rebels of Reason, a historical journey through the untold stories of AI’s pioneers—and a stark warning to today’s enterprise leaders.

Here are the key takeaways from our conversation:

🔄 History Repeats Itself—Unless You Design for It

John’s central insight? Enterprise IT keeps making the same mistakes. Shadow IT, ungoverned infrastructure, and tool sprawl defined the early cloud era—and they’re back again in the age of GenAI. “We wake up from hibernation, look at what’s happening, and say: what did y’all do now?”

🤖 AI is Probabilistic—Do You Accept That?

Too many leaders expect deterministic behavior from fundamentally probabilistic systems. “If you’re building a high-consequence application, and you’re not accepting that LLMs give probabilistic answers, you’re setting yourself up to fail,” John warns.

This demands new tooling, new culture, and new operational rigor—including AI evaluation pipelines, attestation mechanisms, and AI-specific gateways.

📉 The Data Exhaust is Dangerous

Data isn’t just an input—it’s an output. And that data exhaust can now be weaponized. Whether it’s customer interactions, supply chain patterns, or software development workflows, LLMs are remarkably good at inferring proprietary IP from metadata alone.

“Your cloud provider—or their contractor—could rebuild your product from the data exhaust you’re streaming through their APIs,” John notes. If you’re not using attested, verifiable systems to constrain where and how your data flows, you’re building your own future competitor.

🛡️ Governance, Attestation, and Confidential AI

Confidential computing may sound like hardware tech, but its real value lies in guarantees: provable, cryptographic enforcement of data privacy and policy at runtime.

OPAQUE’s confidential AI fabric is one example—enabling encrypted data pipelines, agentic policy enforcement, and hardware-attested audit trails that align with enterprise governance requirements. “I didn’t care about the hardware,” John admits. “But once I saw the guarantees you get, I was all in.”

📚 Why the History of AI Still Matters

John’s latest book, The Rebels of Reason, brings to life the hidden history of AI—spotlighting unsung pioneers like Fei-Fei Li and Grace Hopper. “Without ImageNet, we don’t get AlexNet. Without Hopper’s compiler, we don’t get natural language programming,” he explains.

Understanding AI’s history isn’t nostalgia—it’s necessary context for navigating where we’re going next. Especially as we transition into agentic systems with layered, distributed, and dynamic behavior.


If you’re an enterprise CIO, CISO, or builder, this episode is your field guide to what’s coming—and how to avoid becoming the next cautionary tale.

Listen to the full episode here: Spotify | Apple Podcast | YouTube

And you can find all our podcast episodes –> https://podcast.aiconfidential.com, and you can subscribe to our newsletter –> https://aiconfidential.com

Exciting Advances in Quantum Computing and Network Security at Cisco Live

Hello from Lake Arrowhead 👋 — fresh off a visit to Cisco Live in San Diego, where I caught up with Vijoy Pandey (you’ll hear from him soon on the AI Confidential podcast).

Cisco hosted a genuinely impressive showcase of their quantum networking advancements, including a chip producing 200 million entangled photon pairs per second. Wild.

They even had VR walkthroughs of the lab and the actual hardware on site. But what stood out most wasn’t quantum—it was how Cisco is rethinking security for the agentic web. They’re not treating trust like an afterthought. Their updates to hybrid mesh firewall and zero trust network access aren’t just feature upgrades—they’re foundational. Programmable, policy-aware, and hardware-backed. Pushed all the way to the edge. In a world of autonomous AI agents, you can’t bolt on trust after the fact.

Cisco’s securing the roads. At OPAQUE, we’re focused on securing the drivers—ensuring AI agents behave within guardrails, with verifiable guarantees. Brianna Monsanto’s write-up is worth a read—she asked for my take. 👇 https://www.itbrew.com/stories/2025/06/12/cisco-wants-to-be-the-picks-and-shovels-company-of-the-gold-agentic-ai-rush

Also, Papi Menon from Vijoy’s team will be speaking next week at The Confidential Computing Summit (June 16-18 in San Francisco), and we hope to see you there.

Here’s the link to our Summit: http://www.ConfidentialComputingSummit.com, featuring an impressive group of speakers and topics primarily focused on next-gen AI infrastructure.