The surge in sexual deepfakes followed the release of a new Grok feature to edit posted images in late December, and has continued into the new year (Photo: UMA media)

Explainer

Grok's illegal online content — what does EU law say?

AI deepfakes created with the chatbot Grok, which users have prompted to remove women's clothes, are still being posted across Elon Musk's social media site X (formerly Twitter). Some of the images produced appear to be of minors, which could constitute child sexual abuse material (CSAM). But what does EU law say about posting illicit AI content?

The chatbot was developed by xAI, the parent company of X, is directly integrated into the social media platform, and can create sexualised “spicy” [their term] content for adult users.

The surge in sexual deepfakes came after a new feature to edit posted images was released in late December, and has continued through the new year.

During the content-creation wave, Grok has appeared to synthesise images of minors, and when prompted, generated an apology for producing “an AI image of two young girls (estimated ages 12-16) in sexualised attire based on a user's prompt.” 

To manage illegal online content and AI models, the EU has the Digital Services Act and the AI Act as legal frameworks for services operating in the bloc. 

On Monday (5 January), the European Commission told reporters that it is closely examining the chatbot's generated images. And a commission spokesperson told EUobserver, “X is well aware that we are very serious about Digital Services Act enforcement.”  

These are the requirements and penalties under the EU’s digital rulebook for abusive material, AI models, and deepfakes. 

Digital Services Act

Under the DSA, X is designated a very large online platform (VLOP), a classification that carries its own legal responsibilities.

Child sexual abuse material is considered "illegal content" under the DSA. 

VLOPs like X must implement adequate risk-mitigation systems on their platforms, including protections against hosting illegal content and safeguards for users' fundamental rights.

When the platform becomes aware of such infringing material, to maintain its liability exemption, it must promptly remove the illegal content and, in the case of unlawful images such as CSAM, alert the relevant authorities.

If the commission begins formal DSA proceedings against the platform, the platform must comply with the commission's information requests.

And, if the platform is found in breach, the executive could order it to implement stronger risk-mitigation measures, such as disabling certain features or strengthening reporting mechanisms, and could fine the company for non-compliance.

“If a platform ships an AI ‘nudification’ feature that predictably sexualises minors, that’s a systemic‑risk failure under the DSA. Brussels can order the feature changed or switched off and fine up to six percent of global turnover,” Guido Noto La Diega, professor of European and UK technology law at the University of Strathclyde, told EUobserver.

AI Act 

Beginning in 2026, nearly all of the EU AI Act's provisions apply, and the act has its own risk-mitigation requirements.

Providers of general-purpose AI models must maintain extensive technical documentation of the model's capabilities, and if the model also poses “systemic risks” at the EU level, then there are even stronger guardrails and requirements.

Writing to EUobserver, Gianclaudio Malgieri, associate professor of technology law at Leiden University, also outlined how the law prohibits certain AI practices, “including AI systems that exploit vulnerabilities of individuals (for example due to age or other forms of vulnerability) in a manner that materially distorts behaviour and is likely to cause significant harm.”

Then, if a serious incident occurs, the provider must document it, report it, and present potential fixes to the relevant authorities, such as the EU AI Office.

And regarding deepfakes, the act requires that if an AI system produces images resembling existing persons, the content must be clearly labelled and disclosed as artificially generated or manipulated.

If the commission finds that a service violates the requirements of the AI Act, Brussels may require the provider to restrict the service's availability, withdraw it, or recall it.

The company could also face fines of up to €15m, or three percent of its annual global turnover, whichever is higher.

“Taken together [DSA and AI Act], EU law places a strong emphasis on aligning innovation with the protection of fundamental rights, ensuring that generative technologies develop in a way that is both lawful and respectful of individuals’ dignity and autonomy,” said Malgieri.

