NVIDIA and Google Cloud Partner to Advance Secure Agentic AI Deployment

NVIDIA and Google Cloud are collaborating to bring secure, on-premises agentic AI to enterprises by integrating Google’s Gemini models with NVIDIA’s Blackwell platforms. Leveraging confidential computing and enhanced infrastructure like the GKE Inference Gateway and Triton Inference Server, the partnership ensures scalable AI deployment without compromising regulatory compliance or data sovereignty.

NVIDIA and Google Cloud are joining forces to enhance enterprise AI applications by integrating Google Gemini AI models with NVIDIA's advanced computing platforms. This collaboration aims to facilitate the deployment of agentic AI locally while ensuring strict compliance with data privacy and regulatory standards.

Enhanced Data Security with NVIDIA and Google Cloud


The partnership centers on the use of NVIDIA's Blackwell HGX and DGX platforms, which are now integrated with Google Cloud's distributed infrastructure. This setup allows enterprises to operate Google's powerful Gemini AI models directly within their data centers. A key feature of this integration is NVIDIA Confidential Computing, which provides an additional layer of security by safeguarding sensitive code in the Gemini models against unauthorized access and potential data breaches.

Sachin Gupta, Vice President and General Manager of Infrastructure and Solutions at Google Cloud, emphasized the security and operational benefits of this collaboration. “By deploying our Gemini models on-premises with NVIDIA Blackwell's exceptional performance and confidential computing capabilities, we're enabling enterprises to leverage the full capabilities of agentic AI in a secure and efficient manner,” Gupta stated.

The Advent of Agentic AI in Enterprise Technology

Agentic AI represents a significant evolution in artificial intelligence technology, offering enhanced problem-solving capabilities over traditional AI models. Unlike conventional AI, which operates based on pre-learned information, agentic AI can reason, adapt, and make autonomous decisions in dynamic settings. For instance, in IT support, an agentic AI system can not only retrieve troubleshooting guides but also diagnose and resolve issues autonomously, escalating complex problems as needed.

In the financial sector, while traditional AI might identify potential fraud based on existing patterns, agentic AI goes a step further by proactively investigating anomalies and taking preemptive actions, such as blocking suspicious transactions or dynamically adjusting fraud detection mechanisms.
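
To make that distinction concrete, the sketch below shows the kind of decision loop an agentic fraud system might run: score a transaction, investigate the anomaly against previously reviewed cases, block and tighten rules when the evidence is strong, and escalate ambiguous cases to a human. It is purely illustrative; every function and field name in it is a hypothetical placeholder rather than part of any NVIDIA or Google Cloud product.

# Illustrative sketch of an agentic decision loop; all names are hypothetical placeholders.
def score_transaction(txn):
    # Stand-in for a learned anomaly model: large out-of-country transfers look risky.
    risky = txn["amount"] > 5000 and txn["country"] != txn["home_country"]
    return 0.9 if risky else 0.2

def handle_transaction(txn, reviewed_cases, blocked_ids, detection_rules):
    risk = score_transaction(txn)
    if risk < 0.5:
        return "approved"                                # traditional AI stops at scoring
    # Agentic step: investigate the anomaly against prior analyst-reviewed cases.
    similar = [c for c in reviewed_cases if c["pattern"] == txn["pattern"]]
    if similar and sum(c["was_fraud"] for c in similar) / len(similar) > 0.8:
        blocked_ids.append(txn["id"])                    # act autonomously: block the transaction
        detection_rules[txn["pattern"]] = "tightened"    # adapt the detection mechanism
        return "blocked"
    return "escalated"                                   # defer unclear cases to a human analyst

history = [{"pattern": "foreign-large", "was_fraud": True}] * 5
txn = {"id": 42, "amount": 9000, "country": "BR", "home_country": "US", "pattern": "foreign-large"}
blocked, rules = [], {}
print(handle_transaction(txn, history, blocked, rules))  # -> blocked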

Addressing On-Premises Deployment Challenges

The ability to deploy agentic AI models on-premises addresses a critical need for organizations with stringent security or data sovereignty requirements. Until now, these organizations have faced significant challenges in utilizing advanced AI models, which often require integration of diverse data types such as text, images, and code, while still adhering to strict regulatory standards.

With Google Cloud now offering one of the first cloud services to enable confidential computing for agentic AI workloads in any environment, whether cloud, on-premises, or hybrid, enterprises no longer have to compromise between advanced AI capabilities and compliance with security regulations.

Future-Proofing AI Deployments

To further support the deployment of AI, Google Cloud has introduced the GKE Inference Gateway. This new service is designed to optimize AI inference workloads, featuring advanced routing, scalability, and integration with NVIDIA’s Triton Inference Server and NeMo Guardrails. It ensures efficient load balancing, enhanced performance, reduced operational costs, and centralized model security and governance.
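
As a rough illustration of what serving through this stack can look like from the client side, the snippet below queries a model hosted behind Triton Inference Server over its standard HTTP API using NVIDIA's tritonclient Python library. The endpoint URL, model name, and tensor names are assumptions chosen for illustration, not values published by NVIDIA or Google Cloud.

# Illustrative sketch: calling a Triton Inference Server endpoint with tritonclient.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="inference-gateway.internal.example:8000")

# Build the request input; names, shapes, and dtypes must match the deployed model's config.
prompt = np.array([["Summarize today's incident reports."]], dtype=object)
text_input = httpclient.InferInput("text_input", list(prompt.shape), "BYTES")
text_input.set_data_from_numpy(prompt)

# Run inference and read back the requested output tensor.
response = client.infer(
    model_name="gemini_onprem_model",  # placeholder model name
    inputs=[text_input],
    outputs=[httpclient.InferRequestedOutput("text_output")],
)
print(response.as_numpy("text_output"))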

Looking forward, Google Cloud plans to improve observability for agentic AI workloads by incorporating NVIDIA Dynamo, an open-source library designed to scale reasoning AI models efficiently across various deployment environments.

These advancements in AI deployment and management were highlighted at the Google Cloud Next conference, where NVIDIA held a special address and provided insights through sessions, demonstrations, and expert discussions.

Through this strategic collaboration, NVIDIA and Google Cloud are setting a new standard for secure, efficient, and scalable agentic AI applications, enabling enterprises to harness the full potential of AI while adhering to necessary security and compliance requirements.

