Security Vulnerabilities Found in Popular Open-Source ML Frameworks

Want to understand how unpatched vulnerabilities in open-source ML frameworks can lead to security risks in cloud environments? This article is for you!


Cybersecurity researchers have identified multiple vulnerabilities in widely used open-source machine learning frameworks, including MLflow, H2O, PyTorch, and MLeap. The flaws, which include cross-site scripting, unsafe deserialization, and path traversal issues, could allow attackers to execute code and move laterally within an organization, exposing sensitive information, compromising ML model registries, and enabling backdoors in stored models. To mitigate these threats, organizations are advised to carefully vet the ML models they use and to avoid loading untrusted models, even from seemingly safe repositories, since loading a malicious model can lead to remote code execution and substantial organizational harm.


Key Facts

Risks:

Open Source, Remote Code Execution, Web App/Website Vulnerability, Supply Chain

Keywords:

Machine Learning, Open-Source Vulnerabilities, MLflow, PyTorch, H2O, MLeap, Remote Code Execution, ML Security

CVE:

CVE-2024-27132; CVE-2024-6960; CVE-2023-5245

Affected:

MLflow, H2O, PyTorch, MLeap


Article Body

Security Flaws in Open-Source Machine Learning Frameworks

Cybersecurity researchers have uncovered 22 security vulnerabilities affecting popular open-source machine learning (ML) frameworks, including MLflow, H2O, PyTorch, and MLeap. The flaws pose significant risks by enabling code execution and lateral movement within affected organizations.

Vulnerabilities in ML Frameworks

The identified flaws primarily affect ML clients and reside in libraries that handle safe model formats, such as Safetensors. Unlike previously disclosed issues that targeted server-side components, these client-side vulnerabilities could let attackers reach vital ML services such as model registries and MLOps pipelines. That access heightens the risk of exposing sensitive information, including model registry credentials, which in turn can be used to backdoor stored ML models or execute unauthorized code.
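
To make the deserialization risk concrete, the short Python sketch below illustrates the general pickle-based attack class, not the specific flaw in any framework named here: a serialized "model" can carry a payload that executes the moment the file is loaded. The file name and command are hypothetical.

    # Minimal sketch of the unsafe-deserialization class, using Python's
    # pickle module. The file name "model.pkl" and the echoed command are
    # hypothetical; real attacks hide similar payloads inside model files.
    import os
    import pickle

    class MaliciousModel:
        # pickle invokes __reduce__ while deserializing, so the returned
        # callable (os.system here) runs as a side effect of loading.
        def __reduce__(self):
            return (os.system, ("echo code executed on model load",))

    # Attacker ships the booby-trapped "model".
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousModel(), f)

    # Victim loads it: the embedded command runs before any object is used.
    with open("model.pkl", "rb") as f:
        pickle.load(f)

Note that the victim never has to call the model: deserialization alone triggers the payload, which is why vetting the source of a model file matters more than inspecting its behavior after loading.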

Potential Impact on Organizations

Machine learning tools offer enormous potential for innovation but also present opportunities for attackers to inflict damage. Compromising an ML client could facilitate extensive lateral movement within an organization, posing threats to its critical infrastructure and data security.

Key Vulnerabilities

CVE-2024-27132: Insufficient sanitization in MLflow that leads to cross-site scripting (XSS) when running an untrusted recipe in a Jupyter Notebook, ultimately enabling client-side remote code execution.

CVE-2024-6960: Unsafe deserialization in H2O when importing an untrusted ML model, potentially resulting in remote code execution.

CVE-2023-5245: Path traversal in MLeap when loading a saved model in zipped format, leading to a Zip Slip condition with arbitrary file overwrite and potential code execution.

A related path traversal issue in PyTorch's TorchScript feature, which has no CVE assigned, can result in denial of service or code execution through arbitrary file overwrite. The MLeap-style Zip Slip is illustrated in the sketch below.
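
In a Zip Slip, an archive entry whose name contains "../" sequences escapes the intended extraction directory when the archive is unpacked. The Python sketch below shows the standard defense, not MLeap's actual extraction code; the archive and destination names are hypothetical.

    # Sketch of the standard Zip Slip defense while unpacking a zipped
    # model archive. The archive and destination names are hypothetical;
    # this is illustrative, not MLeap's actual extraction code.
    import os
    import zipfile

    def safe_extract(archive_path, dest_dir):
        dest_root = os.path.realpath(dest_dir)
        with zipfile.ZipFile(archive_path) as zf:
            for entry in zf.namelist():
                # Resolve where the entry would land and refuse anything
                # that escapes dest_root via "../" components.
                target = os.path.realpath(os.path.join(dest_root, entry))
                if not target.startswith(dest_root + os.sep):
                    raise ValueError("blocked path traversal entry: " + entry)
            zf.extractall(dest_root)

    safe_extract("model_bundle.zip", "./models")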
Mitigation Strategies

Organizations should be vigilant about the ML models they use and avoid loading untrusted models, even from repositories considered safe, since loading a malicious model can lead to remote code execution and significant harm. Implementing robust security measures and keeping systems up to date are crucial steps in mitigating these risks.
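
As one concrete precaution, model weights can be loaded through mechanisms that refuse executable payloads. The sketch below is illustrative only, assuming PyTorch 1.13 or later and the safetensors package are installed; the checkpoint file names are hypothetical placeholders.

    # Sketch of two safer loading habits, assuming PyTorch 1.13+ and the
    # safetensors package are installed; file names are hypothetical.
    import torch
    from safetensors.torch import load_file

    # weights_only=True restricts torch.load to plain tensor data and
    # rejects the pickled objects that enable remote code execution.
    state_dict = torch.load("checkpoint.pt", weights_only=True)

    # The safetensors format stores raw tensors with no executable
    # payloads, so loading an untrusted file cannot run attacker code.
    tensors = load_file("model.safetensors")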


Read More

https://thehackernews.com/2024/12/researchers-uncover-flaws-in-popular.html?m=1