Making AI Work: From Innovation to Implementation

In this illuminating episode of AI Confidential, I had the pleasure of hosting Will Grannis, CTO and VP at Google Cloud, for a deep dive into what it really takes to make AI work in complex enterprise environments. Watch the full episode on YouTube →

Beyond the AI Hype

One of Will’s most powerful insights resonated throughout our conversation: “AI isn’t a product—it’s a variety of methods and capabilities to supercharge apps, services and experiences.” This mindset shift is crucial because, as Will emphasizes, “AI needs scaffolding to yield value, a definitive use case/customer scenario to design well, and a clear, meaningful objective to evaluate performance.”

Real-World Impact

Our discussion brought this philosophy to life through compelling examples like Wendy’s implementation of AI in their ordering systems. What made this case particularly fascinating wasn’t just the technology, but how it was grounded in enterprise truth and proprietary knowledge. Will explained how combining Google AI capabilities with enterprise-specific data creates AI systems that deliver real value.

The Platform Engineering Imperative

A crucial theme emerged around what Will calls “platform engineering for AI.” As he puts it, this “will ultimately make the difference between being able to deploy confidently or being stranded in proofs of concept.” The focus here is comprehensive: security, reliability, efficiency, and building trust in the technology, people, and processes that accelerate adoption and returns.

Building Trust Through Control

We explored how Google Cloud’s Vertex AI platform addresses one of the biggest challenges in enterprise AI adoption: trust. The platform offers customizable controls that allow organizations to:

  • Filter and customize AI outputs for specific needs
  • Maintain data security and sovereignty
  • Ensure regulatory compliance
  • Enable rapid experimentation in safe environments
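The first of these controls can be illustrated with a minimal sketch: a post-processing filter that redacts sensitive patterns from model output before it reaches users. This is a generic, hypothetical example of output filtering, not Vertex AI's actual API; the pattern names and function are invented for illustration.

```python
import re

# Hypothetical post-processing filter: redact email addresses and
# US-style SSNs from model output before returning it to the caller.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def filter_output(text: str) -> str:
    """Replace each sensitive match with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

raw = "Contact jane@example.com, SSN 123-45-6789."
print(filter_output(raw))  # Contact [REDACTED EMAIL], SSN [REDACTED SSN].
```

In practice such filters sit alongside platform-level safety settings; the point is that organizations keep a customization layer they control.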

The Path to Production

What struck me most was Will’s pragmatic approach to AI implementation. Success isn’t just about having cutting-edge technology—it’s about:

  • Creating secure runtime operations
  • Implementing proper data segregation
  • Enabling rapid experimentation
  • Maintaining constant optimization
  • Building trust through transparency and control

Looking Ahead

The future of AI in enterprise settings isn’t about replacing existing systems wholesale—it’s about strategic enhancement and thoughtful integration. As Will shared, the most successful implementations come from organizations that approach AI as a capability to be carefully woven into their existing operations, not as a magic solution to be dropped in.


Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.

Join me for the next episode of AI Confidential where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.

As organizations build out their AI infrastructure, how are you ensuring the security and privacy of your sensitive data throughout the AI pipeline? Share your approach in the comments below.

Privacy Meets Innovation: A New Era of Secure AI

In this eye-opening episode of AI Confidential, I had the privilege of hosting two pioneers in AI security and privacy: Daniel Rohrer, VP of Software Security at NVIDIA, and Raluca Ada Popa, Professor at UC Berkeley, Co-Director of UC Berkeley Skylab, and Co-Founder and President of Opaque Systems. Together, we explored the cutting edge of privacy-preserving AI technology and its implications for the future of innovation. Watch the full episode on YouTube →

The Hardware Revolution

One of the most exciting developments we discussed was NVIDIA’s recent introduction of GPU Hardware Enclaves with the H100. As Daniel explained, this breakthrough, which became available through cloud providers like Azure in September 2023, fundamentally transforms what’s possible with secure AI computing. For the first time, organizations can achieve true end-to-end security for computationally intensive AI workloads at scale.

The Power of Attestation

Raluca brought a unique academic and entrepreneurial perspective to our discussion of how confidential computing transforms trust in AI systems. The key insight? It’s not just about encryption—it’s about proving exactly what happens to data throughout the AI pipeline. Through confidential computing, organizations can now:

  • Cryptographically verify code execution
  • Track model access to data
  • Document complete data lineage
  • Ensure compliance through technical guarantees
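At its core, the first of these bullets reduces to comparing a measured hash of the code against an expected value vouched for by a party you trust. The following is a deliberately simplified, hypothetical sketch of that check, using an HMAC as a stand-in for a hardware-rooted attestation signature; real TEE attestation involves hardware keys, certificate chains, and vendor verification services.

```python
import hashlib
import hmac

# Toy stand-in for attestation: the verifier holds an expected code
# measurement, and the "enclave" reports its measured hash plus an
# HMAC tag over it (standing in for a hardware-rooted signature).
ATTESTATION_KEY = b"shared-verification-key"  # hypothetical

def measure(code: bytes) -> str:
    """Measurement = SHA-256 of the code that will process the data."""
    return hashlib.sha256(code).hexdigest()

def make_report(code: bytes) -> tuple[str, str]:
    """What the enclave side would produce: (measurement, tag)."""
    m = measure(code)
    tag = hmac.new(ATTESTATION_KEY, m.encode(), hashlib.sha256).hexdigest()
    return m, tag

def verify_report(expected_measurement: str, report: tuple[str, str]) -> bool:
    """Verifier: accept only if the tag is valid AND the measurement matches."""
    m, tag = report
    good_tag = hmac.compare_digest(
        tag, hmac.new(ATTESTATION_KEY, m.encode(), hashlib.sha256).hexdigest()
    )
    return good_tag and hmac.compare_digest(m, expected_measurement)

trusted_code = b"def train(data): ..."
expected = measure(trusted_code)
assert verify_report(expected, make_report(trusted_code))
assert not verify_report(expected, make_report(b"tampered code"))
```

The essential idea carries over directly: data owners release data only to workloads whose measurement they have verified.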

Beyond Traditional Security

Our conversation revealed how these capabilities enable entirely new forms of collaboration and innovation. Organizations can now:

  • Process sensitive data while maintaining encryption
  • Enable secure multi-party computation with verifiable guardrails
  • Protect both data and model weights in AI workflows
  • Maintain documented compliance while driving innovation

Real-World Impact

The applications we explored were compelling: from healthcare institutions collaborating on better treatment protocols to financial institutions jointly fighting fraud. What makes these use cases possible isn’t just the encryption—it’s the ability to prove exactly how data is being used.

The Path Forward

As both Daniel and Raluca emphasized, attestable AI pipelines aren’t just a security feature—they’re becoming a business necessity. In today’s AI-driven world, losing control of your data isn’t just a temporary setback—it can have irreversible consequences for competitiveness and security.

The future belongs to organizations that can not only protect their data but prove how it’s being used. Confidential computing makes this possible, turning data privacy from a constraint into a catalyst for innovation.



As we move into this new era of secure AI, how is your organization balancing innovation with data privacy? Share your approach in the comments below.

The Great AI Race: Security, Scale, and Why Data Control Matters

When I sat down with Teresa Tung from Accenture for Episode 2 of AI Confidential (you can find this episode on YouTube in addition to Spotify), I was struck by a stark reality that many enterprise leaders are facing: while 75% of CXOs recognize the critical need for high-quality data to power their generative AI initiatives, nearly half lack the trusted data required for operational deployment.

Read the full conversation breakdown in our newsletter →

This gap isn’t just a statistic—it’s a story I’ve seen play out repeatedly across boardrooms and technical teams. As companies rush to embrace generative AI, they’re discovering that the real challenge isn’t implementing the technology—it’s protecting and leveraging their most valuable asset: data.

Teresa shared a fascinating perspective from her work at Accenture that resonated deeply with me. She pointed out that in the next five years, industry leadership will be determined not by who has the most advanced AI models, but by who can effectively control and utilize their data. It’s a shift that reminds me of the early days of digital transformation, where companies that failed to adapt quickly found themselves in a Kodak-like situation.

The Security Paradox

Here’s the challenge that keeps enterprise architects, CTOs, and CIOs up at night: the most valuable data for AI applications is often the most sensitive. Whether it’s financial records, customer interactions, or proprietary research, this “crown jewel” data holds transformative potential but comes with enormous risk.

During our conversation, Teresa shared an illuminating example from an automotive manufacturer grappling with this exact dilemma. The company saw tremendous potential in using AI to enhance customer interactions but faced the fundamental challenge of keeping sensitive data secure while making it actionable.

Beyond Pilot Purgatory

What’s become clear through my conversations with technology leaders is that many organizations are stuck in what I call “pilot purgatory”—they can experiment with AI on non-sensitive data, but can’t scale to production because they lack frameworks for securing sensitive data at scale.

This is where technologies like Confidential Computing enter the picture. As Teresa and I discussed, it’s not just about encrypting data at rest or in transit anymore—it’s about maintaining security while data is being processed. This capability is transforming how companies can approach AI implementation, enabling them to:

  • Process sensitive data while maintaining encryption
  • Share insights without exposing raw data
  • Create new business models through secure multi-party computation
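The third bullet can be illustrated with the classic additive-secret-sharing trick behind many multi-party computation protocols: two parties learn a joint sum without either revealing its individual input. This is a toy sketch under simplified assumptions (honest participants, no network layer); production MPC adds authentication and malicious-security guarantees.

```python
import secrets

MODULUS = 2**61 - 1  # all arithmetic is done modulo a fixed prime

def share(value: int) -> tuple[int, int]:
    """Split a value into two random-looking additive shares."""
    r = secrets.randbelow(MODULUS)
    return r, (value - r) % MODULUS

# Two banks each hold a private fraud-loss figure.
bank_a_loss, bank_b_loss = 1_200, 3_400

a1, a2 = share(bank_a_loss)   # bank A keeps a1, sends a2 to bank B
b1, b2 = share(bank_b_loss)   # bank B keeps b2, sends b1 to bank A

# Each side sums only the shares it holds; each share alone is
# uniformly random, so neither side learns the other's input.
partial_a = (a1 + b1) % MODULUS
partial_b = (a2 + b2) % MODULUS

joint_total = (partial_a + partial_b) % MODULUS
print(joint_total)  # 4600: the combined loss, with inputs kept private
```

The fraud-fighting scenario follows the same shape at scale: institutions contribute shares (or enclave-protected inputs) and receive only the agreed aggregate result.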

The Path Forward

For technology leaders navigating this landscape, the message is clear: the winners in the AI race may be partly determined by who moves fastest, but those who build the most trustworthy and secure foundations will endure. As Teresa pointed out, successful AI implementation requires treating data as a product—with all the quality controls, supply chain considerations, and security measures that implies.

Looking ahead, I believe we’re entering a new era of AI adoption where security and scalability must be considered from day one. The companies that thrive will be those that can balance innovation with protection, speed with security, and ambition with responsibility.


What challenges are you facing in scaling AI while maintaining data security? I’d love to hear your thoughts in the comments below.

AI Confidential Podcast: Building Trust in AI with Mark Papermaster (AMD) & Mark Russinovich (Azure)

Visit AIConfidential.com for the podcast and newsletter.

In a recent discussion between technology leaders Mark Papermaster (CTO of AMD) and Mark Russinovich (CTO and Deputy CISO of Microsoft Azure), the focus was on the transformative potential of confidential computing in reshaping data security practices across the technology industry. Against a backdrop of escalating concerns about data privacy and cybersecurity threats, the conversation delved into key themes of security and trust, confidential computing, data control, and collaboration. These themes underscored the critical importance of safeguarding customer data in cloud environments through innovative solutions such as secure enclaves and hardware root-of-trust mechanisms. Confidential computing—a technology that keeps data protected from unauthorized parties even while it is being processed—emerged as a pivotal tool for strengthening data security amid rapid advances in AI.

The dialogue also highlighted recent developments such as the collaboration between AMD and Microsoft to streamline confidential computing adoption and Microsoft’s ambitious goal to transition to a confidential cloud by 2025. The introduction of Azure Confidential Ledger further exemplified industry efforts towards bolstering supply chain security. Looking ahead, the future outlook points towards continued advancements in confidential Computing technologies with an emphasis on expanding their application to edge devices while establishing robust integrity measures across computing supply chains. As companies strive to navigate ethical considerations around data control and privacy in AI applications alongside potential regulatory challenges associated with widespread adoption of secure computing practices, it becomes increasingly clear that fostering trust through enhanced security measures will be paramount for shaping the future landscape of technology innovation.