The image editing feature of Grok, first launched in March 2025, has ignited a massive controversy. Observers argue that the feature enables users of the generative artificial intelligence chatbot to create non-consensual sexualized images, especially of women and children, easily bypassing ethical and digital safety standards.
Grok Found to Produce Explicit Deepfakes of Women and Children
The generative AI chatbot from Elon Musk faces a global backlash following reports of widespread digital harassment enabled through a dedicated unhinged setting. The permissive nature of Grok has led to the mass creation of non-consensual sexual images of women and children.
Overview
Several users are exploiting Grok to “undress” or add suggestive or explicit elements to photographs of real people. A simple text prompt allows a user to digitally manipulate legitimate images into explicit deepfakes. This technological misuse has sparked immediate outrage from human rights groups, activists, and even governments across the world.
One notable case involved Julie Yukari, a musician based in Rio de Janeiro, who discovered strangers were using the generative AI chatbot embedded in the platform X to generate nearly nude versions of her profile picture. She described the experience as deeply violating and highlighted how easily the tool can weaponize a person’s online identity.
A report by A. J. Vicens and R. Satter of Reuters reviewed public requests sent to the chatbot over a single 10-minute period at midday U.S. Eastern Time on 2 January 2026. Results showed 102 attempts by X users to use the image editing feature to digitally manipulate photographs of real people, most of them young women, so that they would appear wearing swimwear.
The ease of access provided by Grok, especially considering its free-to-use tier, has significantly lowered the barrier to digital sexual harassment. Previously, such actions required photo editing skills or specialized software, but now any user on X can generate harmful content directly within the social media platform, thereby leading to widespread systemic abuse.
Spicy Mode
Nick Robins-Early of The Guardian noted that the generation of sexualized images seemed to stem from the absence of safety guardrails involving sexual harassment and minors. The report also explained that Grok has a history of failing to maintain ethical standards and posting misleading information, like far-right conspiracies and antisemitic narratives.
The problem of using AI to generate explicit images of real people, including child sexual abuse material or CSAM, has plagued the industry. A Stanford study, first published in 2023 and republished in 2025, revealed that AI image generators were trained on explicit photos of children. Meta has also been accused of pirating adult content for training.
Grok has a specific “Spicy Mode” setting within its Grok Imagine tool that can be toggled by users to enable them to generate less filtered images or animations. It often works by adding hidden terms to user prompts. While most mainstream tools have hard blocks against adult content, Grok was intentionally designed with a more permissive philosophy.
Other mainstream AI chatbots, like ChatGPT from OpenAI, Claude from Anthropic, and Gemini from Google, have strict and multi-layered AI safety protocols specifically designed to prevent the sexualized manipulation of real photos. The models behind these tools were built and deployed with safety-first architectures that make such abuse extremely difficult.
Reactions
The sexualized image scandal haunting Grok has earned the ire of governments. French lawmakers have called for urgent investigations. The Office of Communications of the United Kingdom has demanded answers from X and xAI. Officials believe that the chatbot is violating laws such as the European Union Digital Services Act and the U.K. Online Safety Act.
The Paris prosecutor’s office expanded its existing probe on 2 January 2026 to include these recent allegations. An interview with Politico indicated that French authorities are specifically looking into whether Grok has generated child sexual abuse material. Such findings could result in severe criminal penalties for the social media giant.
Moreover, in India, the IT Ministry has joined the chorus of international condemnation. It issued a formal notice to X demanding the removal of all obscene content within 72 hours. Lawyers in Malaysia have categorized this AI-driven behavior as gender-based violence. They argued that the developers of Grok have prioritized humor over safety and ethics.
The Grok account itself posted a rare acknowledgment of these failures in late December 2025. It admitted to generating sexualized images of teenage girls and attributed this to a failure in safeguards. This admission has only fueled demands for more transparent AI governance. Despite the issue, the official response from xAI and Elon Musk has remained defiant.
FURTHER READINGS AND REFERENCES
- Davis, M. 8 August 2025. “Meta Accused of Pirating Adult Content to Train AI Models.” Konsyse. Available online
- Durand, K., Herrero, O., and Marzolf, E. 2 January 2026. “France to Investigate Deepfakes of Women Stripped Naked by Grok.” Politico. Available online
- Robins-Early, N. 2 January 2026. “Elon Musk’s Grok AI Generates Images of Minors in Minimal Clothing.” The Guardian. Available online
- Thiel, D. and Hancock, J. 2025. Identifying and Eliminating CSAM in Generative ML Training Data and Models. Stanford Digital Repository. DOI: 25740/KH752SM9123
- Vicens, A. J. and Satter, R. 4 January 2026. “Elon Musk’s Grok AI Floods X with Sexualized Photos of Women and Minors.” Reuters. Available online
