Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

Summary: Cybersecurity researchers have identified nearly two dozen vulnerabilities across 15 machine learning-related open-source projects, posing significant risks to organizations utilizing these technologies. The flaws range from privilege escalation to remote code execution, highlighting critical security concerns in MLOps environments.

Threat Actor: Unknown | unknown
Victim: Various organizations | various organizations

Key Point :

  • Vulnerabilities allow attackers to hijack ML model registries, databases, and pipelines.
  • CVE-2024-7340 enables privilege escalation in the Weave ML toolkit.
  • Improper access control in ZenML allows elevation to admin privileges.
  • Deep Lake’s command injection vulnerability can lead to system command execution.
  • Vanna.AI’s prompt injection flaw could allow remote code execution.
  • Multiple vulnerabilities in Mage AI permit unauthorized remote code execution.
  • Exploiting MLOps pipelines can lead to severe breaches of sensitive ML resources.
machine learning

Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects.

These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week.

The server-side weaknesses “allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines,” it said.

The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries, ML database frameworks, and taking over ML Pipelines.

Cybersecurity

A brief description of the identified flaws is below –

  • CVE-2024-7340 (CVSS score: 8.8) – A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to escalate their privileges to an admin role by reading a file named “api_keys.ibd” (addressed in version 0.50.8)
  • An improper access control vulnerability in the ZenML MLOps framework that allows a user with access to a managed ZenML server to elevate their privileges from a viewer to full admin privileges, granting the attacker the ability to modify or read the Secret Store (No CVE identifier)
  • CVE-2024-6507 (CVSS score: 8.1) – A command injection vulnerability in the Deep Lake AI-oriented database that allows attackers to inject system commands when uploading a remote Kaggle dataset due to a lack of proper input sanitization (addressed in version 3.9.11)
  • CVE-2024-5565 (CVSS score: 8.1) – A prompt injection vulnerability in the Vanna.AI library that could be exploited to achieve remote code execution on the underlying host
  • CVE-2024-45187 (CVSS score: 7.1) – An incorrect privilege assignment vulnerability that allows guest users in the Mage AI framework to remotely execute arbitrary code through the Mage AI terminal server due to the fact that they have been assigned high privileges and remain active for a default period of 30 days despite deletion
  • CVE-2024-45188, CVE-2024-45189, and CVE-2024-45190 (CVSS scores: 6.5) – Multiple path traversal vulnerabilities in Mage AI that allow remote users with the “Viewer” role to read arbitrary text files from the Mage server via “File Content,” “Git Content,” and “Pipeline Interaction” requests, respectively

“Since MLOps pipelines may have access to the organization’s ML Datasets, ML Model Training and ML Model Publishing, exploiting an ML pipeline can lead to an extremely severe breach,” JFrog said.

Cybersecurity

“Each of the attacks mentioned in this blog (ML Model backdooring, ML data poisoning, etc.) may be performed by the attacker, depending on the MLOps pipeline’s access to these resources.

The disclosure comes over two months after the company uncovered more than 20 vulnerabilities that could be exploited to target MLOps platforms.

It also follows the release of a defensive framework codenamed Mantis that leverages prompt injection as a way to counter cyber attacks Large language models (LLMs) with more than over 95% effectiveness.

“Upon detecting an automated cyber attack, Mantis plants carefully crafted inputs into system responses, leading the attacker’s LLM to disrupt their own operations (passive defense) or even compromise the attacker’s machine (active defense),” a group of academics from the George Mason University said.

“By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker’s LLM, Mantis can autonomously hack back the attacker.”

Source: https://thehackernews.com/2024/11/security-flaws-in-popular-ml-toolkits.html