Nutanix and NVIDIA Collaborate to Accelerate Enterprise AI Adoption

May 21, 2024 By David

Nutanix announced a collaboration with NVIDIA aimed at helping enterprises more easily adopt generative AI (GenAI). Through the integration of NVIDIA NIM inference microservices with Nutanix GPT-in-a-Box 2.0, customers will be able to build scalable, secure, high-performance GenAI applications across the enterprise and at the edge.

Today, most AI innovation is centered on the public cloud, which offers the infrastructure and tooling needed to support AI applications. Additionally, only the largest enterprises with teams of data scientists have made meaningful progress in GenAI adoption. However, according to the State of Enterprise AI report, most enterprises are looking to invest in their AI strategy, including boosting investment at the edge. What's missing is a fast track that lets organizations mainstream GenAI beyond the public cloud, across the enterprise, and at the edge.

Nutanix’s integration of NVIDIA NIM microservices will enable its customers to use Nutanix GPT-in-a-Box 2.0, built on top of the company’s rich data services and compute platform, to simplify AI model deployment and run enterprise AI/ML applications more effectively and efficiently. It will also expand access to the growing catalog of NVIDIA NIM microservices across the enterprise and at the edge, helping to fast-track GenAI initiatives without requiring a team of data scientists.

Nutanix’s collaboration with NVIDIA helps simplify an experience many enterprises find challenging today: making all the decisions required to stand up AI solutions, from choosing among hundreds of thousands of models to selecting serving engines and supporting infrastructure, often while lacking the new skill sets needed to deliver GenAI solutions to their customers.

Nutanix GPT-in-a-Box simplifies building an AI-ready stack, integrating with Nutanix Objects and Nutanix Files for model and data storage so that customers maintain control over their data. New features in GPT-in-a-Box 2.0 will also automate deploying and running inference endpoints for a wide range of AI models and secure access to those models with fine-grained access control and auditing.
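
As a rough illustration of the storage side of that stack, the sketch below shows how a model artifact could be pulled from a Nutanix Objects bucket through its S3-compatible API; the endpoint URL, credentials, bucket name, and object key are placeholders rather than details from the announcement.

    # Hypothetical sketch: fetching a model artifact from a Nutanix Objects
    # bucket over its S3-compatible API. All names and credentials below are
    # placeholders for illustration, not values from the announcement.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.example.internal",  # placeholder Objects endpoint
        aws_access_key_id="ACCESS_KEY",                    # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    # Download a model artifact stored alongside enterprise data.
    s3.download_file(
        Bucket="genai-models",              # placeholder bucket name
        Key="llama3-8b/model.safetensors",  # placeholder object key
        Filename="model.safetensors",
    )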

Running on top of the Nutanix Cloud Platform, NIM microservices will enable seamless AI inferencing on a wide range of models, including open-source community models, NVIDIA AI Foundation models, and custom models, leveraging industry-standard application programming interfaces. To support the integration, Nutanix also announced certification for the NVIDIA AI Enterprise 5.0 software platform for streamlining the development and deployment of production-grade AI, including NVIDIA NIM.
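
NIM microservices present an OpenAI-compatible REST interface, so a deployed endpoint can typically be called with a standard chat completions request. The snippet below is a minimal sketch under that assumption; the host, model identifier, and API token are hypothetical placeholders.

    # Minimal sketch: sending a chat completion request to a NIM inference
    # endpoint via its OpenAI-compatible API. Host, model name, and token are
    # placeholders for illustration only.
    import requests

    NIM_URL = "https://nim.example.internal/v1/chat/completions"  # placeholder endpoint

    response = requests.post(
        NIM_URL,
        headers={"Authorization": "Bearer <API_TOKEN>"},  # placeholder token
        json={
            "model": "meta/llama3-8b-instruct",  # example model identifier
            "messages": [{"role": "user", "content": "Summarize this quarter's support tickets."}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])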

“Enterprises are looking to simplify GenAI adoption, and Nutanix enables customers to move to production more easily while maintaining control, privacy, and cost,” said Tarkan Maner, Chief Commercial Officer at Nutanix. “This collaboration will add to this value by making it even easier for customers to leverage NVIDIA’s latest innovation with NIM.”

“Across every industry, enterprises are working to efficiently integrate AI into the cloud and data platforms that power their operations,” said Manuvir Das, Vice President of Enterprise Computing at NVIDIA. “The integration of NVIDIA NIM into Nutanix GPT-in-a-Box gives enterprises an AI-ready solution for rapidly deploying optimized models in production.”

Nutanix GPT-in-a-Box 2.0 is expected to be available in the second half of 2024.