The EU Is Investigating Elon Musk’s X Over Grok’s Explicit AI Content

The EU privacy regulator investigates X over Grok AI’s sexualised content, raising concerns about AI safety, user privacy and harmful AI-generated images.

The European Union’s top privacy authority has launched a significant probe into Elon Musk’s social media platform, X, after growing concerns about AI-generated sexual content being created and circulated via X’s artificial intelligence system, Grok. The investigation highlights the challenges that popular platforms face in policing AI safety, user privacy, and the spread of harmful or inappropriate material created with generative AI.

This move by the EU privacy regulator comes after weeks of criticism from users, civil liberties groups and digital rights advocates who warned that X’s AI tools, especially Grok, have been producing or facilitating sexually explicit and degrading images, particularly of women. The probe could have wide implications for how AI content moderation is enforced under EU data protection and digital safety laws, and how powerful AI systems like Grok are held accountable.

What prompted the EU investigation into X and Grok

X’s AI chatbot Grok has been at the centre of controversy over the sexualised content it generates or makes accessible. Multiple reports and social media discussions have documented incidents in which Grok produced sexually explicit images or descriptions, mostly involving women and minors.

In early January, it emerged that “undressing” prompts could lead Grok to generate inappropriate or sexualised imagery of women. Despite updates and policy changes from X, these issues persisted into mid-January, when further reports showed that Grok was still generating problematic output, particularly involving women in suggestive or revealing contexts.

These examples raised serious alarms about AI moderation, user safety and the potential for harm when AI systems are not properly designed or regulated.

Why this matters: AI privacy, safety and harmful content

Grok’s behaviour strikes at the heart of several ongoing issues with generative AI systems:

1. Privacy violations

Generative AI models often draw on vast datasets to deliver responses. If Grok’s output includes recognisable likenesses or suggestive material tied to real individuals, it raises privacy concerns under European data protection rules. The EU regulator’s investigation is, in part, aimed at understanding how personal data may be used, processed or generated by Grok.

2. Spread of harmful or sexualised AI content

AI systems that produce explicit or degrading imagery can contribute to online harm. Even if no real people are involved, reproducing sexualised representations, particularly of women, can normalise objectification and fuel unsafe digital environments. These issues intersect with digital safety, content moderation standards and platform responsibility.

3. Accountability for AI platforms

X’s broader approach to AI, moderation and governance has been criticised for lacking transparency and robust safeguards. The EU’s action suggests that regulators are no longer willing to allow platforms to self-govern without consequences, especially when harmful content affects user communities across borders.

What the EU privacy investigation covers

According to reporting by the Financial Times, the EU’s privacy regulator is looking at how X and Grok handle:

  • User privacy protections — whether personal data is being used or processed in ways that violate EU privacy standards.

  • The creation and spread of harmful AI content — specifically sexualised images or descriptions linked to AI output.

  • Compliance with digital safety frameworks and data protection regulations, including enforcement under the General Data Protection Regulation (GDPR).

The investigation is broad in scope and aims to determine whether X’s systems adhere to EU laws designed to protect citizens from harmful or non-consensual use of personal and sensitive data.

Grok’s ongoing problems with sexualised output

Despite fixes and updates from X engineers, Grok continued to produce sexualised output into early 2026. These persistent issues suggest that Grok’s content policies or safety filters are either insufficient or inconsistently applied, which undermines trust in the platform and raises questions about the effectiveness of X’s internal moderation tools.

It also illustrates a broader industry problem: many generative AI systems are trained on large, uncurated datasets that can contain biased, explicit or problematic material. If these models are not carefully tested and regulated, harmful output can slip through, and regulators are now paying attention.

A turning point for AI privacy and digital safety

X’s troubles with Grok are part of a larger story about how society navigates the rapid rise of generative AI. As platforms race to add AI features, the safeguards needed to support them often lag behind.

The EU’s privacy regulator has taken a decisive step by formally investigating X and Grok’s approach to content generation and AI safety. That action could influence how AI policy evolves not only in Europe but around the world.

This is a reminder that innovation and responsibility must go hand in hand. AI can offer exciting capabilities, but without strong privacy protections, safety standards and ethical guidelines, it can also create new avenues for harm.

What happens next with this investigation will be watched closely, and it could help shape the rules that govern AI content for years to come.
