Security Vulnerabilities Found in Popular Open-Source ML Frameworks
2024-12-07
Want to understand how unpatched vulnerabilities in open-source ML frameworks can lead to security risks in cloud environments? This article is for you!
Cybersecurity researchers have identified multiple vulnerabilities in widely used open-source machine learning frameworks such as MLflow, H2O, PyTorch, and MLeap. These vulnerabilities, which include issues like cross-site scripting, unsafe deserialization, and path traversal, could allow attackers to execute code and perform lateral movement within an organization. This poses a significant risk as it may expose sensitive information, compromise ML model registries, and backdoor stored models. To mitigate these threats, organizations are advised to carefully vet the ML models they use and avoid loading untrusted models, even from seemingly safe repositories, as this can lead to remote code execution and substantial organizational harm.
Open Source, Remote Code Execution, Web App/Website Vulnerability, Supply Chain
Machine Learning, Open-Source Vulnerabilities, MLflow, PyTorch, H2O, MLeap, Remote Code Execution, ML Security
CVE-2024-27132; CVE-2024-6960; CVE-2023-5245
MLflow, H2O, PyTorch, MLeap
Cybersecurity researchers have uncovered 22 security vulnerabilities affecting popular open-source machine learning (ML) frameworks, including MLflow, H2O, PyTorch, and MLeap. The flaws pose significant risks by enabling code execution and lateral movement within organizations.

Unlike previously disclosed flaws that targeted server-side components, the newly identified vulnerabilities primarily affect ML clients, including libraries that handle supposedly "safe" model formats such as Safetensors. Exploiting these client-side vulnerabilities could give attackers access to vital ML services such as model registries and MLOps pipelines. Such access increases the risk of exposing sensitive information, including model registry credentials, and could allow attackers to backdoor stored ML models or execute unauthorized code.

Machine learning tools offer enormous potential for innovation, but they also present opportunities for attackers. Compromising an ML client could facilitate extensive lateral movement within an organization, threatening its critical infrastructure and data security.

The key vulnerabilities are:

- CVE-2024-27132: Insufficient sanitization in MLflow that can lead to a cross-site scripting (XSS) attack when an untrusted recipe is run in a Jupyter Notebook, resulting in client-side remote code execution (RCE).
- CVE-2024-6960: An unsafe deserialization issue in H2O that can lead to RCE when an untrusted ML model is imported.
- Path traversal in PyTorch (no CVE assigned): A flaw in PyTorch's TorchScript feature that can result in denial-of-service (DoS) or code execution by overwriting critical system files or legitimate pickle files.
- CVE-2023-5245: A path traversal issue in MLeap when loading a saved model in zipped format. It can lead to a Zip Slip vulnerability, resulting in arbitrary file overwrite and potential code execution.
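The unsafe-deserialization class of flaw, as in the H2O issue above, can be illustrated with Python's `pickle` module, on which many ML model formats are built. This is a harmless, generic sketch (not H2O's actual code path): any callable returned from `__reduce__` runs during loading itself, before the victim ever inspects the loaded object.

```python
import pickle

class MaliciousPayload:
    """Stand-in for a booby-trapped object embedded in a model file."""

    def __reduce__(self):
        # A real attacker would return something like
        # (os.system, ("malicious command",)). A harmless callable
        # stands in here to keep the demonstration safe.
        return (list, (("code-ran-on-load",),))

# The "model file" an attacker would publish:
blob = pickle.dumps(MaliciousPayload())

# Merely loading the blob invokes the embedded callable -- the victim
# does not need to call any method on the resulting object.
obj = pickle.loads(blob)
print(obj)
```

This is why "just opening" an untrusted model file can be equivalent to running untrusted code, and why the article's advice to avoid loading models from unvetted sources applies even when no model inference is ever performed.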
Organizations should be vigilant about the ML models they use and avoid loading untrusted models, even from repositories considered safe, as doing so can lead to remote code execution and significant harm. Implementing robust security measures and keeping systems up to date are crucial steps in mitigating these risks.
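As one concrete mitigation for Zip Slip-style path traversal, such as the MLeap issue above, a model loader can validate every archive entry path before extracting. The following is a minimal standard-library sketch (illustrative, not MLeap's actual fix); the `safe_extract` name is our own:

```python
import os
import zipfile

def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract a zip archive, rejecting entries that escape dest_dir.

    A Zip Slip entry such as '../../etc/passwd' would otherwise be
    written outside the intended destination directory, overwriting
    arbitrary files the extracting process can reach.
    """
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            # Resolve the final on-disk path and confirm it stays
            # inside dest_dir before anything is written.
            target = os.path.realpath(os.path.join(dest, member))
            if target != dest and not target.startswith(dest + os.sep):
                raise ValueError(f"blocked path traversal entry: {member!r}")
        zf.extractall(dest)
```

The same resolve-then-compare check applies in any language; libraries differ in how much sanitization their extraction APIs perform by default, so validating entry names explicitly is the safer habit when unpacking model archives from outside sources.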
https://thehackernews.com/2024/12/researchers-uncover-flaws-in-popular.html?m=1