Cybersecurity researchers have disclosed multiple security flaws impacting open-source machine learning (ML) tools and frameworks such as MLflow, H2O, PyTorch, and MLeap that could pave the way for code execution.
The vulnerabilities, discovered by JFrog, are part of a broader collection of 22 security shortcomings the supply chain security company first disclosed last month.
Unlike the first set, which involved server-side flaws, the newly detailed ones allow exploitation of ML clients and reside in libraries that handle supposedly safe model formats like Safetensors.
“Hijacking an ML client in an organization can allow the attackers to perform extensive lateral movement within the organization,” the company said. “An ML client is very likely to have access to important ML services such as ML Model Registries or MLOps Pipelines.”
This, in turn, could expose sensitive information such as model registry credentials, effectively permitting a malicious actor to backdoor stored ML models or achieve code execution.
The list of vulnerabilities is below –
JFrog noted that ML models shouldn't be blindly loaded even when they come from a supposedly safe format such as Safetensors, as they are still capable of achieving arbitrary code execution.
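To illustrate the general class of risk described here (not the specific JFrog findings), consider that many ML model files are, or embed, Python pickle data, and pickle runs attacker-chosen callables at load time. The sketch below uses a harmless stand-in (`sorted`) where a real payload would invoke something like `os.system`; the `MaliciousModel` name is purely illustrative.

```python
import pickle

# A pickle payload can name any importable callable via __reduce__;
# pickle.loads() invokes it during deserialization. Real attacks would
# use os.system or similar; sorted() is a harmless stand-in.
class MaliciousModel:
    def __reduce__(self):
        return (sorted, ([3, 1, 2],))

blob = pickle.dumps(MaliciousModel())  # what an attacker would ship as a "model"
result = pickle.loads(blob)            # the callable runs here, at load time
# result == [1, 2, 3] -- merely loading the file executed attacker-chosen code
```

No method on the loaded object ever needs to be called: deserialization alone is the execution primitive, which is why format and source matter so much.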
“AI and Machine Learning (ML) tools hold immense potential for innovation, but can also open the door for attackers to cause widespread damage to any organization,” Shachar Menashe, JFrog’s VP of Security Research, said in a statement.
“To safeguard against these threats, it’s important to know which models you’re using and never load untrusted ML models even from a ‘safe’ ML repository. Doing so can lead to remote code execution in some scenarios, causing extensive harm to your organization.”
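One concrete defense when a pickle-based format cannot be avoided, documented in the Python standard library's `pickle` module, is to subclass `Unpickler` and restrict which globals `find_class` may resolve. This is a minimal sketch with a made-up whitelist; it does not reflect any particular vendor's mitigation.

```python
import io
import pickle

# Standard-library mitigation pattern: override Unpickler.find_class so
# only an explicit whitelist of (module, name) globals can be resolved.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}  # illustrative whitelist

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    """Deserialize untrusted bytes, refusing any non-whitelisted callable."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers deserialize normally, while a payload that tries to resolve something like `builtins.print` or `os.system` raises `UnpicklingError` instead of executing. A whitelist is deliberately the mechanism here: a blocklist of known-bad callables is easy to bypass.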