The Guardian
Hundreds of nonconsensual AI images being created by Grok on X, data shows

Thu, 08 Jan 2026 22:27:46 GMT
New research that samples X users prompting Elon Musk’s AI chatbot Grok demonstrates how frequently people are creating sexualized images with it. Nearly three-quarters of posts collected and analyzed by a PhD researcher at Dublin’s Trinity College were requests for nonconsensual images of real women or minors with items of clothing removed or added.

The posts offer a new level of detail on how the images are generated and shared on X, with users coaching one another on prompts; suggesting iterations on Grok’s presentations of women in lingerie or swimsuits, or with areas of their body covered in semen; and asking Grok to remove outer clothing in replies to posts containing self-portraits by female users.

Among hundreds of posts identified by Nana Nwachukwu as direct, nonconsensual requests for Grok to remove or replace clothing, dozens reviewed by the Guardian show users posting pictures of women, including celebrities, models, women in stock photos, and private individuals posing in personal snapshots.

Several posts in the trove reviewed by the Guardian have received tens of thousands of impressions and come from premium, “blue check” accounts, including accounts with tens of thousands of followers. Premium accounts with more than 500 followers and 5m impressions over three months are eligible for revenue-sharing under X’s eligibility rules.

In one Christmas Day post, an account with more than 93,000 followers presented side-by-side images of an unknown woman’s backside with the caption: “Told Grok to make her butt even bigger and switch leopard print to USA print. 2nd pic I just told it to add cum on her ass lmao.”

A 3 January post, representative of dozens reviewed by the Guardian, captioned an apparent holiday snap of an unknown woman: “@grok replace give her a dental floss bikini.” Within two minutes, Grok provided a photorealistic image that satisfied the request. Other posts in the trove show more sophisticated employment of JSON-prompt engineering to induce Grok to generate novel sexualized images of fictitious women.

The data does not cover all such requests made to Grok. While content analysis firm Copyleaks reported on 31 December that X users were generating “roughly one nonconsensual sexualized image per minute”, Nwachukwu said that her sample is limited to just more than 500 posts she was able to collect with X’s API via a developer account. She said that the true scale “could be thousands, it could be hundreds of thousands” but that changes made by Musk to the API mean that “it is much harder to see what is happening” on the platform. On Wednesday, Bloomberg News cited researchers who found that Grok users were generating up to 6,700 undressed images per hour.

Nwachukwu, an expert on AI governance and a longtime observer of and participant in social media safety initiatives, said that she first noticed requests along these lines from X users back in 2023.

At the time, she said, “Grok did not oblige the requests. It wasn’t really good at doing those things.” The bot’s responses began changing in 2024, and reached a critical mass late last year.

In October 2025, she noticed that “people were putting Halloween attire on themselves using Grok. Of course, a section of users realized we can also ask it to change what other people are wearing.” By year’s end, “there was a huge uptick in people asking Grok to put different people in bikinis or other types of suggestive clothing”.

There were other indications last year of an increased willingness to tolerate or even encourage the generation of sexually suggestive material with Grok.

In August, xAI incorporated a “spicy mode” setting in the mobile version of Grok’s text-to-video generation tool, leading the Verge to characterize it as “a service designed specifically to make suggestive videos”.

Nwachukwu’s data is just the latest indication of how the platform under Musk has become a magnet for forms of content that other platforms work to exclude, including hate speech, gore content and copyrighted material.

On Friday, Grok issued a bizarre public apology over the incident on X, claiming that “xAI is implementing stronger safeguards to prevent this”. On Tuesday, X Safety posted a promise to ban users who shared child sexual abuse material (CSAM). Musk himself said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

Nwachukwu said, however, that posts like those she has already collected are still appearing on the platform. Musk is giving “the middle finger to everyone who has asked for the platform to be moderated”, she said. The billionaire slashed Twitter’s trust and safety teams when he took over in 2022.

She added that other AI chatbots do not have the same issues.

“Other generative AI platforms – ChatGPT or Gemini – they have safeguards,” she said. “If you ask them to generate something that looks like a person, it would never be a depiction of that person. They don’t generate depictions of real human beings.”

The revelations about the nonconsensual imagery on X have already drawn the attention of regulators in the UK, Europe, India and Australia.

Nwachukwu, who is from Nigeria, pointed to a specific harm being done in the posts to “women from conservative societies”.

“There’s a lot of targeting of people from conservative beliefs, conservative societies: west Africa, south Asia. This represents a different kind of harm for them,” she said.
