Microsoft accuses group of developing tools to abuse its AI service in new lawsuit
Microsoft has taken legal action against a group the company claims intentionally developed and used tools to bypass the safety guardrails of its cloud AI products.
According to a complaint filed by the company in December in the US District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by technologies from ChatGPT maker OpenAI.
In the complaint, Microsoft accuses the defendants, whom it refers to only as "Does," a legal pseudonym, of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the federal racketeering statute by illegally accessing and using Microsoft's software and servers for the purpose of creating "offensive" and "harmful and illegal" content. Microsoft did not provide specific details about the offensive content that was created.
The company is seeking injunctive and "other equitable" relief, as well as damages.
In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials (specifically API keys, the unique strings of characters used to authenticate an app or user) were being used to generate content that violated the service's acceptable use policy. Through subsequent investigation, Microsoft found that the API keys had been stolen from paying customers, according to the complaint.
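For readers unfamiliar with how such keys work in practice, the following is a minimal sketch of an API-key-authenticated request to the Azure OpenAI Service. The resource name, deployment name, and API version are placeholders, not details drawn from the complaint; the point is simply that the key itself is the entire credential.

```python
# Minimal sketch of API-key authentication against the Azure OpenAI
# Service. Resource and deployment names below are hypothetical.
import requests

RESOURCE = "example-resource"    # hypothetical Azure OpenAI resource name
DEPLOYMENT = "gpt-4o"            # hypothetical model deployment name
API_VERSION = "2024-02-01"

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

response = requests.post(
    url,
    # The "api-key" header carries the credential at issue: anyone who
    # holds this string can send requests billed to the key's owner.
    headers={"api-key": "<api-key>", "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.status_code)
```

Because possession of the string is all the service requires, a stolen key grants the thief the same access the paying customer had.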
“The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this complaint is not known,” Microsoft’s complaint says, “but Defendants appear to have engaged in a pattern of systematic API key theft that enabled them to steal Microsoft API Keys from many Microsoft customers.”
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to US-based customers to run a "hacking-as-a-service" scheme. To carry out the scheme, according to the complaint, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft's systems.
De3u allowed users to leverage stolen API keys to generate images with DALL-E, an OpenAI model available to Azure OpenAI Service customers, without having to write any code of their own, Microsoft claims. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for example, when a text prompt contains words that trigger Microsoft's content filtering.
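To make concrete what the stolen keys unlocked, here is roughly what a legitimate DALL-E image-generation call against the Azure OpenAI Service looks like, including the point where Microsoft's content filtering can reject a prompt. All names are placeholders and the error handling is illustrative; this is not de3u's code, which was not published in the complaint.

```python
# Illustrative DALL-E image-generation request to the Azure OpenAI
# Service. The resource and deployment names are hypothetical; this is
# the kind of request a stolen API key would authorize.
import requests

RESOURCE = "example-resource"    # hypothetical resource of a paying customer
DEPLOYMENT = "dall-e-3"          # hypothetical image-model deployment
url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version=2024-02-01"
)

resp = requests.post(
    url,
    headers={"api-key": "<api-key>", "Content-Type": "application/json"},
    json={"prompt": "a watercolor of a lighthouse", "n": 1, "size": "1024x1024"},
)

if resp.status_code == 400:
    # Prompts that trip Azure's content filtering are rejected before
    # any image is generated; the error payload describes the block.
    print("blocked:", resp.json().get("error", {}))
else:
    # On success, the response contains the generated image's location.
    print("image:", resp.json()["data"][0])
```

The server-side filtering shown above is exactly the layer the complaint says de3u was designed to work around.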

The repo containing the code for the de3u project, hosted on Microsoft-owned GitHub, was no longer accessible at press time.
“These features, combined with Defendants’ unlawful programmatic application programming interface (API) access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” the complaint says. “Defendants knowingly and intentionally accessed computers protected by the Azure OpenAI service without authorization, and as a result of this conduct they caused damage and loss.”
In a blog post published Friday, Microsoft says the court has authorized it to seize a website "instrumental" to the defendants' operation, which will allow the company to gather evidence, decipher how the defendants' alleged services were monetized, and disrupt any additional technical infrastructure it finds.
Microsoft also says it has taken unspecified "countermeasures" and "added additional safety mitigations" to the Azure OpenAI Service targeting the activity it observed.