Microsoft Identifies Developers in AI Deepfake Cybercrime Lawsuit

1800 Office Solutions Team member - Elie Vigile

Microsoft has intensified its legal battle against a global network of AI deepfake cybercriminals accused of exploiting its Azure OpenAI Service to create illicit AI-generated content, including non-consensual explicit images of celebrities. In an amended complaint filed on February 27, 2025, the tech giant formally identified four individuals allegedly involved in the operation: Arian Yadegarnia (alias “Fiz”) from Iran, Alan Krysiak (“Drago”) from the United Kingdom, Ricky Yuen (“cg-dot”) from Hong Kong, and Phát Phùng Tấn (“Asakuri”) from Vietnam. Microsoft alleges that the four are key members of a cybercriminal syndicate known as Storm-2139, which bypassed the service’s security protocols to orchestrate AI deepfake schemes and other fraudulent activity.

According to Microsoft’s Digital Crimes Unit (DCU), Storm-2139 orchestrated a sophisticated operation involving the unauthorized acquisition of exposed customer credentials from public sources. Utilizing these credentials, the group allegedly accessed generative AI services, including Microsoft’s Azure OpenAI Service, and manipulated these platforms to bypass established safety protocols. This manipulation enabled them to generate harmful and illicit content, notably AI-crafted deepfakes depicting celebrities in explicit scenarios. The group is also accused of reselling access to these compromised services, providing detailed instructions on content creation to other malicious actors.
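The complaint does not describe the group’s tooling in technical detail, but “exposed customer credentials” scraped “from public sources” typically means API keys hardcoded into source files and pushed to public code repositories, where automated scrapers harvest them. As a minimal, purely illustrative sketch (the regex and file scope below are simplified assumptions; real-world teams rely on dedicated scanners such as gitleaks or truffleHog), a pre-publish check for hardcoded keys might look like this:

```python
import re
import sys
from pathlib import Path

# Hypothetical, simplified secret pattern. Production scanners (e.g. gitleaks,
# truffleHog) ship hundreds of provider-specific rules; this heuristic only
# flags quoted, key-like strings assigned to names containing "api key".
KEY_PATTERN = re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]")

def scan(root: Path) -> int:
    """Walk a source tree and report lines that look like hardcoded keys."""
    hits = 0
    for path in root.rglob("*.py"):  # scope limited to Python files for brevity
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if KEY_PATTERN.search(line):
                print(f"{path}:{lineno}: possible hardcoded API key")
                hits += 1
    return hits

if __name__ == "__main__":
    # A non-zero exit code makes this usable as a pre-commit or CI gate.
    sys.exit(1 if scan(Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")) else 0)
```

Keeping keys out of source control entirely, by loading them from environment variables or a managed secret store, removes the class of exposure the complaint describes; rotating any key that does leak limits the damage.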

The legal proceedings began in December 2024 when Microsoft filed a lawsuit against unnamed defendants linked to the Azure Abuse Enterprise. The initial complaint led to a temporary restraining order and a preliminary injunction, allowing Microsoft to seize a website integral to the group’s operations. This action disrupted the network’s activities and facilitated the identification of its key members. The recent amendment to the lawsuit specifies the four individuals, marking a significant development in Microsoft’s efforts to dismantle the cybercrime network.

Steven Masada, Assistant General Counsel at Microsoft’s DCU, stated, “We are pursuing this legal action now against identified defendants to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology.” He further noted that the seizure of the website and the unsealing of legal filings in January prompted immediate reactions within the group, with members turning on each other and attempting to shift blame.

The amended complaint details the roles of the identified individuals within Storm-2139. Yadegarnia, Krysiak, and Phùng Tấn are alleged to have acted as providers, modifying and supplying tools that circumvent AI safety measures. Yuen is identified as a creator, responsible for developing the malicious tools that facilitated the abuse of generative AI services. The complaint also mentions two additional actors located in Illinois and Florida, whose identities remain undisclosed to avoid interfering with potential criminal investigations. Microsoft is preparing criminal referrals to both U.S. and international law enforcement agencies concerning these individuals.

The group’s modus operandi involved exploiting stolen API keys and deploying custom-designed software, such as the application “de3u,” to interact with Microsoft’s Azure OpenAI Service. This software allegedly allowed users to send requests that mimicked legitimate API calls, thereby circumventing technological controls designed to prevent the alteration of certain service parameters. By doing so, they could generate content that violated Microsoft’s policies, including explicit deepfakes of public figures.
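To make “mimicking legitimate API calls” concrete: Azure OpenAI’s documented REST interface authenticates each request with an api-key header tied to a customer’s resource endpoint, so a request built with a stolen key is indistinguishable in shape from the customer’s own traffic. A minimal sketch of that documented request format follows; the resource name, deployment name, and API version are hypothetical placeholders, not details drawn from the complaint:

```python
import os
import requests

# Hypothetical resource and deployment names; the API version shown is one of
# the documented Azure OpenAI versions and may differ per deployment.
ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "gpt-4o"
API_VERSION = "2024-02-01"

url = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions"
headers = {
    # The key is read from the environment here; in the alleged scheme this
    # same header carried credentials stolen from paying customers.
    "api-key": os.environ["AZURE_OPENAI_API_KEY"],
    "Content-Type": "application/json",
}
payload = {"messages": [{"role": "user", "content": "Hello"}]}

resp = requests.post(url, headers=headers, params={"api-version": API_VERSION},
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because authentication rides on the key alone, what allegedly set de3u apart from ordinary client software was not the request shape, which requires no exploit, but the circumvention of controls on “certain service parameters” that the complaint describes.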

The proliferation of AI-generated deepfakes has raised significant ethical and legal concerns, particularly regarding non-consensual explicit content. High-profile cases, such as the unauthorized AI-generated images of singer Taylor Swift that circulated online in January 2024, have underscored the potential for harm. In that incident, explicit images of Swift, generated using AI tools, were widely shared on social media platforms, prompting discussions about the need for stricter regulations and enforcement mechanisms to combat such abuses.

Microsoft’s legal action against Storm-2139 reflects a broader industry effort to address the misuse of AI technologies. By holding individuals accountable for circumventing safety measures and generating harmful content, the company aims to deter similar activities in the future. This case also highlights the challenges that technology companies face in safeguarding their platforms against exploitation by malicious actors.

The outcome of this lawsuit could set a precedent for how legal systems address the abuse of AI technologies. As AI continues to evolve and become more integrated into various applications, ensuring that these tools are used responsibly and ethically remains a critical concern for both developers and users. Microsoft’s proactive approach in this case demonstrates the importance of vigilance and accountability in the rapidly advancing field of artificial intelligence.
