Open source machine learning systems are highly vulnerable to security threats


- MLflow identified as most vulnerable open-source ML platform
- Directory traversal flaws allow unauthorized file access in Weave
- ZenML Cloud’s access control issues enable privilege escalation risks
Recent analysis of the security landscape of machine learning (ML) frameworks has revealed that ML software is subject to more security vulnerabilities than more mature categories such as DevOps tools or web servers.
The growing adoption of machine learning across industries highlights the critical need to secure ML systems, as vulnerabilities can lead to unauthorized access, data breaches, and compromised operations.
The report from JFrog claims that ML projects such as MLflow have seen an increase in critical vulnerabilities. Over the last few months, JFrog has uncovered 22 vulnerabilities across 15 open source ML projects. Among these, two categories stand out: threats targeting server-side components and risks of privilege escalation within ML frameworks.
Critical vulnerabilities in ML frameworks
The vulnerabilities identified by JFrog affect key components commonly used in ML workflows. Because ML practitioners often trust these tools for their flexibility, attackers who exploit them can gain unauthorized access to sensitive files or elevate privileges within ML environments.
One of the highlighted vulnerabilities involves Weave, a popular toolkit from Weights & Biases (W&B), which aids in tracking and visualizing ML model metrics. The WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) enables low-privileged users to access arbitrary files across the filesystem.
This flaw arises due to improper input validation when handling file paths, potentially allowing attackers to view sensitive files that could include admin API keys or other privileged information. Such a breach could lead to privilege escalation, giving attackers unauthorized access to resources and compromising the security of the entire ML pipeline.
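The report does not publish exploit code, but the underlying bug class is well understood. Below is a minimal sketch of the pattern using hypothetical names (FILES_DIR, read_file_vulnerable), not Weave's actual code: the vulnerable handler joins user input onto a base directory without validation, while the safe variant resolves the path and confirms it stays inside the base before opening anything.

```python
# Minimal sketch of the directory traversal bug class (illustrative
# names; not Weave's actual code).
import os

FILES_DIR = "/srv/app/files"  # hypothetical base directory

def read_file_vulnerable(user_path: str) -> bytes:
    # "../../etc/passwd" resolves outside FILES_DIR because the joined
    # path is never validated.
    with open(os.path.join(FILES_DIR, user_path), "rb") as f:
        return f.read()

def read_file_safe(user_path: str) -> bytes:
    # Resolve symlinks and ".." first, then confirm the result is still
    # inside the base directory before opening it.
    base = os.path.realpath(FILES_DIR)
    full = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([full, base]) != base:
        raise PermissionError("path escapes the files directory")
    with open(full, "rb") as f:
        return f.read()
```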
ZenML, an MLOps pipeline management tool, is also affected by a critical vulnerability that compromises its access control systems. This flaw allows attackers with minimal access privileges to elevate their permissions within ZenML Cloud, a managed deployment of ZenML, thereby accessing restricted information, including confidential secrets or model files.
The access control issue in ZenML exposes the system to significant risks, as escalated privileges could enable an attacker to manipulate ML pipelines, tamper with model data, or access sensitive operational data, potentially impacting production environments reliant on these pipelines.
Another serious vulnerability, known as the Deep Lake Command Injection (CVE-2024-6507), was found in the Deep Lake database – a data storage solution optimized for AI applications. This vulnerability permits attackers to execute arbitrary commands by exploiting how Deep Lake handles external dataset imports.
Due to improper command sanitization, an attacker could potentially achieve remote code execution, compromising the security of both the database and any connected applications.
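As a rough illustration of this bug class (with a hypothetical download-tool helper and function names, not Deep Lake's actual code): building a shell command string from user input lets shell metacharacters chain extra commands, whereas passing arguments as a list avoids shell parsing entirely.

```python
# Illustrative command-injection pattern (hypothetical "download-tool"
# CLI; not Deep Lake's actual code).
import subprocess

def import_dataset_vulnerable(url: str) -> None:
    # url = "http://example.com/data; rm -rf ~" runs a second command,
    # because shell=True hands the whole string to a shell.
    subprocess.run(f"download-tool {url}", shell=True, check=True)

def import_dataset_safe(url: str) -> None:
    # Argument lists are passed to the program directly, so shell
    # metacharacters in the URL are never interpreted.
    subprocess.run(["download-tool", url], check=True)
```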
A notable vulnerability was also found in Vanna AI, a tool designed for natural language SQL query generation and visualization. The Vanna.AI Prompt Injection vulnerability (CVE-2024-5565) allows attackers to embed malicious instructions in the prompts the tool processes. Because the SQL-to-graph visualization feature executes code generated from those prompts, the flaw can lead to remote code execution, letting malicious actors manipulate visualizations, perform SQL injections, or exfiltrate data.
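A hedged sketch of why this class of flaw is severe, using hypothetical function names rather than Vanna AI's real API: if LLM output that an attacker can steer through the prompt reaches exec(), the attacker effectively controls code execution; a declarative, allow-listed chart specification is one safer alternative.

```python
# Why exec() on LLM output is dangerous (hypothetical function names;
# not Vanna AI's real implementation).
def visualize_vulnerable(llm_generated_code: str) -> None:
    # If a prompt can steer the model into emitting
    # __import__('os').system(...), this line executes it.
    exec(llm_generated_code)

def visualize_safer(chart_spec: dict) -> None:
    # A declarative spec validated against an allow-list gives the LLM
    # no direct path to code execution.
    if chart_spec.get("type") not in {"bar", "line", "scatter"}:
        raise ValueError("unsupported chart type")
    # ...render with a plotting library using only the validated fields...
```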
Mage.AI, an MLOps tool for managing data pipelines, has been found to have multiple vulnerabilities, including unauthorized shell access, arbitrary file leaks, and weak path traversal checks.
These issues allow attackers to gain control over data pipelines, expose sensitive configurations, or even execute malicious commands. The combination of these vulnerabilities presents a high risk of privilege escalation and data integrity breaches, compromising the security and stability of ML pipelines.
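One way such weak path traversal checks commonly fail, shown here as a hypothetical sketch rather than Mage.AI's actual code: a naive string-prefix test passes before the path is normalized, so "../" sequences slip through.

```python
# How a weak path traversal check fails (hypothetical sketch; not
# Mage.AI's actual code).
import os

BASE = "/srv/mage/pipelines"  # hypothetical pipeline directory

def check_weak(path: str) -> bool:
    # Passes for "/srv/mage/pipelines/../../etc/passwd": the raw string
    # starts with the base prefix even though it escapes it.
    return path.startswith(BASE)

def check_robust(path: str) -> bool:
    # Normalize ".." segments first, then compare resolved paths.
    real = os.path.realpath(path)
    return os.path.commonpath([real, BASE]) == BASE

print(check_weak("/srv/mage/pipelines/../../etc/passwd"))    # True (bypass)
print(check_robust("/srv/mage/pipelines/../../etc/passwd"))  # False
```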
By gaining admin access to ML databases or registries, attackers can embed malicious code in models, leading to backdoors that activate upon model load. This can compromise downstream processes as the models are utilized by various teams and CI/CD pipelines. The attackers can also exfiltrate sensitive data or conduct model poisoning attacks to degrade model performance or manipulate outputs.
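Model files serialized with Python's pickle format make this concrete: unpickling invokes __reduce__, so simply loading a tampered model executes attacker-chosen code. The snippet below demonstrates the mechanism with a harmless payload; this is a well-known serialization risk rather than a specific CVE from the report, and formats such as safetensors avoid it by storing only tensor data.

```python
# Backdoor-on-load via pickle (a well-known serialization risk; the
# payload here is a harmless print).
import pickle

class MaliciousModel:
    def __reduce__(self):
        # pickle.loads() calls this reconstruction recipe automatically,
        # so the "model" runs code the moment it is loaded.
        return (print, ("backdoor payload executed on model load",))

tampered_model = pickle.dumps(MaliciousModel())
pickle.loads(tampered_model)  # prints; a real payload could spawn a shell
```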
JFrog’s findings highlight an operational gap in MLOps security. Many organizations lack robust integration of AI/ML security practices with broader cybersecurity strategies, leaving potential blind spots. As ML and AI continue to drive significant industry advancements, safeguarding the frameworks, datasets, and models that fuel these innovations becomes paramount.