ZeroStack Delivers AI-as-a-Service
December 5, 2018
ZeroStack, creators of a self-driving cloud that lets users focus on their businesses, today announced that administrators of its Self-Driving Cloud platform can offer single-click deployment of GPU resources and deep learning frameworks such as TensorFlow, PyTorch, and MXNet. The platform takes care of all OS and CUDA library dependencies so users can focus on AI development. Users can also enable GPU acceleration with dedicated access to multiple GPU resources, for an order-of-magnitude improvement in inference latency and user responsiveness. GPUs within hosts can be shared across users in a multi-tenant manner.
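The announcement does not include code, but as an illustration of what "dedicated access to multiple GPU resources" looks like from inside a provisioned instance, here is a minimal sketch, assuming PyTorch and the CUDA libraries are already in place (PyTorch stands in for any of the supported frameworks):

```python
import torch

# Confirm the framework can see the GPUs attached to this instance.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU visible to PyTorch")

gpu_count = torch.cuda.device_count()
print(f"Visible GPUs: {gpu_count}")

# Run a small matrix multiply on each GPU to verify it is actually usable.
for i in range(gpu_count):
    device = torch.device(f"cuda:{i}")
    x = torch.randn(1024, 1024, device=device)
    y = x @ x
    torch.cuda.synchronize(device)
    print(f"cuda:{i} ({torch.cuda.get_device_name(i)}): matmul OK")
```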
Artificial intelligence and machine learning products are quickly becoming commonplace and are shaping our computing experiences like no other time in history. AI applications are now more viable than ever thanks to modern machine learning and deep learning frameworks such as TensorFlow and Caffe, along with access to GPUs built specifically to perform parallel operations on large amounts of data. One significant challenge remains, however: deploying, configuring, and running these complex tools while managing their interdependencies, versioning, and compatibility with servers and GPUs.
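To make the compatibility problem concrete, the following sketch (again using PyTorch purely as an example framework) reports the layers of the stack that must all agree before a training or inference job will run:

```python
import torch

# Each layer of this stack must be mutually compatible: the framework build,
# the CUDA toolkit it was compiled against, cuDNN, and the host's GPU driver.
print(f"PyTorch version:    {torch.__version__}")
print(f"Built against CUDA: {torch.version.cuda}")
print(f"cuDNN version:      {torch.backends.cudnn.version()}")

if torch.cuda.is_available():
    # Compute capability constrains which CUDA builds can target this card.
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU compute capability: {major}.{minor}")
```

A mismatch anywhere in that stack, such as a framework wheel built for a newer CUDA toolkit than the host driver supports, is exactly the class of failure the single-click deployment is meant to eliminate.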
ZeroStack’s AI-as-a-service capability automatically detects GPUs and makes them available for users to run their AI applications. To maximize utilization of this powerful resource, cloud admins can configure and scale GPU resources and grant end users fine-grained access to them.
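The release does not describe ZeroStack’s sharing mechanism. One common building block for per-tenant GPU scoping, shown here purely as an illustration and not as ZeroStack’s implementation, is NVIDIA’s CUDA_VISIBLE_DEVICES environment variable, which limits which physical GPUs a process can enumerate:

```python
import os

# Set before any CUDA initialization: expose only physical GPU 1 to this
# process, so a tenant workload cannot see or touch the host's other GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # imported after the variable is set so the scoping takes effect

print(torch.cuda.device_count())  # prints 1: only the scoped GPU is visible
```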
"ZeroStack is offering the next level of cloud by delivering a collection of point-and-click service templates," said Michael Lin, director of product management at ZeroStack. "Our new AI-as-a-service template automates provisioning of key AI tool sets and GPU resources for DevOps organizations."