- The EU fined Elon Musk’s X $140 million for allegedly violating the Digital Services Act and is now investigating its handling of AI-generated sexualized images.
- A surge in AI image requests led to over 4.4 million images in a week, many sexualized and some involving children.
- X pledged to stop Grok from generating sexualized content and enforce penalties for illegal or nonconsensual imagery in line with safety policies.
After fining Elon Musk’s social media company, X, $140 million for violating its Digital Services Act, the European Union announced a formal investigation into the platform, which could lead to another hefty fine.
This time, it is investigating whether the platform properly mitigated the spread of sexualized images generated by artificial intelligence.
In a press briefing, EU spokesman Thomas Regnier said, “We believe that X may have breached the DSA.”
“We have seen, over the last weeks and months, antisemitic content. We have seen nonconsensual deepfakes of women, and we have seen child sexual abuse material,” he said.
The EU's first formal investigation was launched at the end of 2023, about a year after Musk purchased X in October 2022, to determine whether X violated rules regarding transparency and advertising.
An uptick in AI image requests triggered the investigation
Although the EU has taken issue with X for much longer than just the last month, the trigger for this most recent investigation came on New Year’s Eve, when Musk asked Grok, the platform’s AI bot, to put him in a bikini.

The same day, Musk reposted an image of a SpaceX rocket that another user had asked Grok to put in a bikini.
What followed was record engagement time on X.
Between Dec. 31, when Musk posted those two photos, and Jan. 8, Grok delivered on requests to generate more than 4.4 million images, The New York Times reported. Users had requested only around 300,000 images in the previous nine days.

Of a sample of 20,000 generated images that the Center for Countering Digital Hate analyzed, 65% were sexualized. The London- and D.C.-based nonprofit estimated that 0.5% of the sexualized images were likely of children.
X says it will stop allowing Grok to generate sexualized images
On Jan. 3, X’s safety account said the company takes “action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
It added that anyone who uses Grok to make child sexual abuse materials or other nonconsensual, sexualized images “will suffer the same consequences as if they upload illegal content.”
Since May, U.S. federal law has prohibited anyone from posting or threatening to post “intimate images” without the person’s consent under the Take It Down Act, which first lady Melania Trump championed.