Microsoft is actively combating the misuse of AI to create harmful images, particularly those targeting celebrities, women, and people of color. A recent legal case revealed a global network, known as Storm-2139, that exploited stolen credentials to bypass safeguards in AI image generators. This led to the creation and distribution of thousands of abusive images, many of which were sexually explicit, misogynistic, violent, or hateful.
The company responded swiftly by revoking the compromised access credentials and initiating a comprehensive security response. Its efforts culminated in a lawsuit aimed at dismantling the illicit operation and preventing future abuse. Microsoft emphasizes its commitment to responsible AI usage and the importance of safeguarding individuals from digital harm.
This case underscores the need for robust AI governance and the proactive role tech companies must play in preventing the exploitation of their platforms for malicious purposes.
For more details, read the full article: "How Microsoft is taking down AI hackers who create harmful images of celebrities and others."