Orca Security Adds AI Security to Cloud Security Platform

March 20, 2024 | By David

Orca Security announced that the Orca Cloud Security Platform now offers integrated AI Security Posture Management (AI-SPM) and related capabilities, enabling organizations to adopt AI quickly without incurring undue risk. With Orca's new AI Security capabilities, which cover more than 50 AI models and software packages, organizations can maintain visibility and security across their entire tech stack without adding another point solution, confidently embracing AI tools while reducing overhead and integrating with existing workflows.

Gartner predicts that global AI software spending will exceed $298 billion by 2027, with a compound annual growth rate (CAGR) of 19.1%. Orca research indicates that more than 37% of organizations have already adopted at least one AI service, the most popular being Amazon SageMaker and Bedrock (68%), followed by Azure OpenAI (50%) and Vertex AI (21%).

While AI tools provide outstanding business benefits, AI models often include sensitive data and intellectual property in their training data. Orca's 2024 State of Cloud Security research revealed that 82% of AWS SageMaker users have exposed notebooks, which can often contain sensitive training data. This makes the cloud resources AI models rely upon a potentially lucrative target for attackers. These resources typically face the same challenges as other cloud assets: limited visibility, accidental public access, unencrypted sensitive data, shadow data, and unsecured keys.

“Generative AI is more than a trend. It’s a transformational technology that businesses are obviously looking to take advantage of. The technology, however, is not without risk,” said Gil Geron, CEO and Co-founder, Orca Security. “AI models rely upon data and that data typically resides in the cloud. It is vital that cloud security providers cover AI tools and immediately identify when the cloud resources powering AI models are vulnerable. Orca has been at the forefront of leveraging generative AI to augment and democratize cloud security, with ChatGPT, Azure OpenAI, Amazon Bedrock, and Google Vertex AI integrations, as well as AI-driven cloud asset search. Now we are helping our customers confidently embrace their own AI innovation by offering complete end-to-end AI Security without the need to adopt new security tools.”

With the introduction of AI-SPM, Orca is leveraging its patented agentless SideScanning technology to provide the same visibility, risk insight, and deep data for AI models that it does for other cloud resources. The tool also addresses use cases unique to AI security, including detecting sensitive data in training sets.

Because the Orca platform does not rely on agents, coverage is continuous and complete: new AI resources are scanned as soon as they are brought online, and the organization is alerted to any detected risks. Key features of Orca AI-SPM include:

  • AI and ML BOM provides a full inventory and Bill of Materials (BOM) of all AI models deployed in the environment, giving visibility into AI deployments and eliminating risks related to shadow AI. Orca detects more than 50 of the most commonly used AI models and software packages, including Azure OpenAI, Amazon Bedrock, Google Vertex AI, AWS SageMaker, PyTorch, TensorFlow, OpenAI, Hugging Face, LangChain and more.
  • Full AI-SPM Coverage – Orca launches a new "AI best practices" compliance framework, with dozens of rules for the proper upkeep of AI in the organization, covering network security, data protection, access controls, IAM, and more for AI models.
  • Sensitive data detection leverages Orca's Data Security Posture Management (DSPM) capabilities to scan and classify all data stored in AI projects or used to train or fine-tune AI models, alerting the organization if it contains sensitive data such as telephone numbers, email addresses, social security numbers, or personal health information.
  • Third-party access detection leverages Orca's code repository scanning to detect when keys and tokens for AI services (such as OpenAI and Hugging Face) are unsafely exposed in code repositories, alerting the organization in order to prevent leakage of models or training data.
  • Public access detection utilizes Orca’s deep insight into AI model settings and network access to alert the organization whenever an AI data source is publicly accessible.
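As a rough illustration of the kinds of checks the bullets above describe (not Orca's actual detection logic, which is not public), a minimal scanner for exposed AI-service credentials and common PII patterns could be sketched in Python. It assumes the well-known `sk-` and `hf_` token prefixes used by OpenAI and Hugging Face; real detectors use far more robust rules and entropy checks.

```python
import re

# Illustrative patterns only -- real scanners use much more
# sophisticated rules, validation, and entropy analysis.
PATTERNS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),        # assumed OpenAI key prefix
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"), # assumed HF token prefix
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (finding_type, matched_string) pairs for risky strings."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings
```

Running such a scanner over every file in a repository, and over data sampled from training sets, is the basic shape of the third-party access and sensitive data detections described above.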

Orca AI-SPM is available today.