Microsoft on Thursday unmasked four of the individuals who it said were behind an Azure Abuse Enterprise scheme that involves leveraging unauthorized access to generative artificial intelligence (GenAI) services in order to produce offensive and harmful content.
The campaign, called LLMjacking, has targeted various AI offerings, including Microsoft’s Azure OpenAI Service. The tech giant is tracking the cybercrime network as Storm-2139. The individuals named are –
“Members of Storm-2139 exploited exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services,” Steven Masada, assistant general counsel for Microsoft’s Digital Crimes Unit (DCU), said.
“They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content.”
The malicious activity is carried out with the explicit intent of bypassing the safety guardrails of generative AI systems, Redmond added.
The amended complaint comes a little over a month after Microsoft said it was pursuing legal action against the threat actors for engaging in systematic API key theft from a number of customers, including several U.S. companies, and then monetizing that access by selling it to other actors.
It also obtained a court order to seize a website (“aitism[.]net”) that is believed to have been a crucial part of the group’s criminal operation.
Storm-2139 consists of three broad categories of people: Creators, who developed the illicit tools that enable the abuse of AI services; Providers, who modify and supply these tools to customers at various price points; and End Users, who utilize them to generate synthetic content that violates Microsoft's Acceptable Use Policy and Code of Conduct.
Microsoft said it also identified two more actors located in the United States, based in the states of Illinois and Florida. Their identities have been withheld to avoid interfering with potential criminal investigations.
The other unnamed co-conspirators, providers, and end users are listed below –
“Going after malicious actors requires persistence and ongoing vigilance,” Masada said. “By unmasking these individuals and shining a light on their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse.”