Making AI Work: From Innovation to Implementation

In this illuminating episode of AI Confidential, I had the pleasure of hosting Will Grannis, CTO and VP at Google Cloud, for a deep dive into what it really takes to make AI work in complex enterprise environments. Watch the full episode on YouTube →

Beyond the AI Hype

One of Will’s most powerful insights resonated throughout our conversation: “AI isn’t a product—it’s a variety of methods and capabilities to supercharge apps, services and experiences.” This mindset shift is crucial because, as Will emphasizes, “AI needs scaffolding to yield value, a definitive use case/customer scenario to design well, and a clear, meaningful objective to evaluate performance.”

Real-World Impact

Our discussion brought this philosophy to life through compelling examples like Wendy’s implementation of AI in their ordering systems. What made this case particularly fascinating wasn’t just the technology, but how it was grounded in enterprise truth and proprietary knowledge. Will explained how combining Google AI capabilities with enterprise-specific data creates AI systems that deliver real value.

The Platform Engineering Imperative

A crucial theme emerged around what Will calls “platform engineering for AI.” As he puts it, this “will ultimately make the difference between being able to deploy confidently or being stranded in proofs of concept.” The focus here is comprehensive: security, reliability, efficiency, and building trust in the technology, people, and processes that accelerate adoption and returns.

Building Trust Through Control

We explored how Google Cloud’s Vertex AI platform addresses one of the biggest challenges in enterprise AI adoption: trust. The platform offers customizable controls that allow organizations to:

  • Filter and customize AI outputs for specific needs
  • Maintain data security and sovereignty
  • Ensure regulatory compliance
  • Enable rapid experimentation in safe environments
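Output filtering of this kind is, at its core, a policy layer that sits between the model and the application. As a minimal, hypothetical sketch of the idea (the patterns and function below are illustrative assumptions, not Vertex AI APIs — real deployments would lean on the platform's built-in safety filters and customer-defined controls):

```python
import re

# Hypothetical policy: patterns an organization might never want
# in AI output (illustrative only).
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like numbers
    re.compile(r"(?i)internal[- ]only"),    # leaked internal markers
]

def filter_output(text: str, redaction: str = "[REDACTED]") -> str:
    """Redact any blocked pattern from a model response before it
    reaches the end user."""
    for pattern in BLOCKED_PATTERNS:
        text = pattern.sub(redaction, text)
    return text

print(filter_output("Customer SSN is 123-45-6789 (internal-only note)."))
# Both the SSN and the marker are redacted before display.
```

The same layer is a natural place to enforce tone, scope, or regulatory constraints specific to the business.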

The Path to Production

What struck me most was Will’s pragmatic approach to AI implementation. Success isn’t just about having cutting-edge technology—it’s about:

  • Creating secure runtime operations
  • Implementing proper data segregation
  • Enabling rapid experimentation
  • Maintaining constant optimization
  • Building trust through transparency and control
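Data segregation in particular usually means scoping every read and write to a tenant, so one customer's data can never leak into another's context. A toy sketch of that principle (the class and method names are illustrative assumptions, not any specific product's API):

```python
from collections import defaultdict

class TenantStore:
    """Toy per-tenant document store: every operation is keyed by a
    tenant ID, so retrieval for one tenant never sees another's data."""

    def __init__(self) -> None:
        self._docs = defaultdict(list)  # tenant_id -> list of documents

    def add(self, tenant_id: str, doc: str) -> None:
        self._docs[tenant_id].append(doc)

    def retrieve(self, tenant_id: str, query: str) -> list[str]:
        # Naive keyword match, but only within the caller's tenant.
        return [d for d in self._docs[tenant_id] if query.lower() in d.lower()]

store = TenantStore()
store.add("tenant_a", "Pricing guide")
store.add("tenant_b", "Pricing analysis")

print(store.retrieve("tenant_a", "pricing"))  # → ['Pricing guide']
```

In a production AI pipeline the same boundary would be enforced at the storage and retrieval layers, not just in application code.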

Looking Ahead

The future of AI in enterprise settings isn’t about replacing existing systems wholesale—it’s about strategic enhancement and thoughtful integration. As Will shared, the most successful implementations come from organizations that approach AI as a capability to be carefully woven into their existing operations, not as a magic solution to be dropped in.


Listen to this episode on Spotify or visit our podcast page for more platforms. For weekly insights on secure and responsible AI implementation, subscribe to our newsletter.

Join me for the next episode of AI Confidential where we’ll continue exploring the frontiers of secure and responsible AI implementation. Subscribe to stay updated on future episodes and insights.

As organizations build out their AI infrastructure, how are you ensuring the security and privacy of your sensitive data throughout the AI pipeline? Share your approach in the comments below.

AI Confidential Podcast: Building Trust in AI with Mark Papermaster (AMD) & Mark Russinovich (Azure)

Visit AIConfidential.com for the podcast and newsletter.

In a recent discussion between technology leaders Mark Papermaster (CTO of AMD) and Mark Russinovich (CTO and Deputy CISO of Microsoft Azure), the focus was on the transformative potential of confidential computing in reshaping data security practices across the technology industry. Against a backdrop of escalating concerns about data privacy and cybersecurity threats, the conversation delved into key themes: security and trust, confidential computing, data control, and collaboration. These themes underscored the critical importance of safeguarding customer data in cloud environments through innovative mechanisms such as secure enclaves and hardware roots of trust. Confidential computing, a technology that keeps data protected even while it is being processed, shielding it from unauthorized parties, emerged as a pivotal tool for strengthening data security amid rapid advances in AI.

The dialogue also highlighted recent developments, such as the collaboration between AMD and Microsoft to streamline confidential computing adoption and Microsoft's ambitious goal of transitioning to a confidential cloud by 2025. The introduction of Azure Confidential Ledger further exemplifies industry efforts to bolster supply chain security. Looking ahead, the outlook points toward continued advances in confidential computing, with an emphasis on extending it to edge devices and establishing robust integrity measures across computing supply chains. As companies navigate ethical questions around data control and privacy in AI applications, along with the regulatory challenges that may accompany widespread adoption of secure computing practices, one thing is clear: fostering trust through stronger security measures will be paramount in shaping the future of technology innovation.