Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects.
These comprise vulnerabilities discovered both on the server- and client-side, software supply chain security firm JFrog said in an analysis published last week.
The server-side weaknesses “allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines,” it said.
The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories that allow for remotely hijacking model registries, ML database frameworks, and taking over ML Pipelines.
A brief description of the identified flaws is below –
“Since MLOps pipelines may have access to the organization’s ML Datasets, ML Model Training and ML Model Publishing, exploiting an ML pipeline can lead to an extremely severe breach,” JFrog said.
“Each of the attacks mentioned in this blog (ML Model backdooring, ML data poisoning, etc.) may be performed by the attacker, depending on the MLOps pipeline’s access to these resources.
The disclosure comes over two months after the company uncovered more than 20 vulnerabilities that could be exploited to target MLOps platforms.
It also follows the release of a defensive framework codenamed Mantis that leverages prompt injection as a way to counter cyber attacks Large language models (LLMs) with more than over 95% effectiveness.
“Upon detecting an automated cyber attack, Mantis plants carefully crafted inputs into system responses, leading the attacker’s LLM to disrupt their own operations (passive defense) or even compromise the attacker’s machine (active defense),” a group of academics from the George Mason University said.
“By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker’s LLM, Mantis can autonomously hack back the attacker.”