X (formerly Twitter) has updated its Grok AI policies following widespread criticism over the tool being used to create AI-generated bikini images of women and children. The company says it has introduced restrictions to prevent the AI from generating revealing or explicit depictions of real individuals, but experts and journalists report that some of the issues persist.
The update comes after months of public scrutiny and investigative reporting highlighting how Grok, X’s artificial intelligence tool, was being used to generate deepfake images of people in bikinis or revealing clothing without their consent. Critics argue that these images can be damaging, especially when they involve non-public figures, and that the episode highlights the wider challenges of moderating AI-generated content on social platforms.
What X Says Has Changed
According to X’s official statements, Grok’s image generation and editing tools have undergone policy updates to restrict sexualised outputs. The company claims the AI can no longer create images that “undress” people or place them in sexualised poses. Additionally, some image editing features have been moved behind X Premium, restricting certain functionality to paying subscribers. In certain countries, features may also be geoblocked to comply with local laws on non-consensual imagery.
The policy update was outlined in a post from X’s @Safety account on January 14, 2026.
These updates are part of X’s broader effort to address growing concerns over the AI tool. While the company emphasises that Grok’s capabilities are now aligned with safety standards, independent reports indicate that users can still generate sexualised content in some cases, particularly when using free accounts or certain access points.
Why This Update Was Triggered
Grok’s policy change did not happen in isolation. Over the past year, journalists and watchdog organisations documented the widespread misuse of Grok to create AI-generated deepfakes featuring real people in explicit contexts. In many cases, these images were circulated widely on X, often amplified by the platform’s reposting and trending mechanisms, which allowed content to reach large audiences quickly.
The growing volume of complaints and media attention placed significant pressure on the platform. Users and regulators called for urgent measures to prevent further harm, especially given the potential legal and ethical ramifications of AI-generated sexualised imagery.
This controversy reflects a broader tension in the social media landscape: AI tools offer creative and productive capabilities but can also be misused, creating challenges for content moderation teams. For X, Grok became a focal point illustrating the difficulties of balancing innovation with safety.
Evidence That the Problem Persists
Despite X’s assurances, evidence suggests the issue is not entirely resolved. Journalists and users have found that, under certain circumstances, the AI still generates sexualised deepfake images of real individuals. These findings indicate that while the policy changes reduce the risk, they do not fully prevent misuse.
Unlike human moderators, AI can respond unpredictably to prompts, particularly when subtle variations in wording are used. This means that even with restrictions in place, some users can still exploit the system to produce prohibited outputs.
Growing Legal and Regulatory Pressure
Grok’s controversy has attracted attention from regulators worldwide. In the United Kingdom, Ofcom has reportedly opened investigations into X’s handling of AI-generated sexualised content. Similarly, authorities in the United States, Malaysia, and Indonesia have either taken legal action or issued warnings regarding the tool’s misuse.
This regulatory scrutiny is part of a broader global conversation about AI governance. Lawmakers are increasingly focused on how platforms prevent non-consensual image generation and protect users from harm, particularly in cases involving minors or private individuals. Grok’s limitations and enforcement practices are now under examination in multiple jurisdictions, highlighting the urgent need for effective AI moderation.
Grok, X, and Content Moderation Under Elon Musk
Grok’s controversy highlights broader issues with X’s approach to content moderation under Elon Musk. Since Musk’s takeover, the platform has leaned more on automated tools and user self-regulation, reducing the number of human moderators. While these tools handle routine violations, cases like Grok show that AI can still produce prohibited content, and enforcement is not always immediate.
Elon Musk addressed the issue directly in a post on January 3, 2026, stating: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
The statement underscores that while X is trying to hold users accountable, the platform still faces challenges in preventing the creation and circulation of sexualised deepfake images, particularly involving women and children.
The Grok case illustrates the difficulty of balancing innovation, AI capabilities, and user safety on a platform with millions of active users.
Where This Leaves Users and the Platform
For X users, the updated Grok policies mean some safeguards are now in place, but they should remain cautious when using the tool. Content moderation is not foolproof, and some prohibited outputs may still be possible depending on how prompts are formulated. Users concerned about non-consensual imagery should report violations promptly and familiarise themselves with X’s reporting tools.
For X as a platform, the episode underscores the challenges of AI governance, safety, and trust. Grok remains a prominent example of both the potential and the pitfalls of AI on social media. While policy updates reduce some risks, effective enforcement, ongoing monitoring, and transparent communication with users are crucial to restoring confidence in the platform.
As AI continues to shape how content is created and shared online, platforms like X will need to continually refine policies and safeguards, balancing innovation with safety and user trust. For users, this means exercising caution while engaging with AI tools and staying informed about platform updates that affect privacy and consent.