The Internet Watch Foundation (IWF) charity says its analysts have discovered “criminal imagery” of girls aged between 11 and 13 which “appears to have been created” using Grok. The AI tool is owned by Elon Musk’s firm xAI and can be accessed through its website and app, or through the social media platform X.
The charity aims to remove child sexual abuse material from the internet. It has a hotline for reporting suspected CSAM and employs analysts to assess the material.
Ofcom contacted X and xAI after reports surfaced that Grok could create sexualised images of children and digitally undress women. The BBC saw users on X asking the chatbot to modify real images to show women in bikinis or in sexual situations without their consent.
Ngaire Alexander of the IWF told the BBC the charity is very worried about how quickly and easily people can create realistic child sexual abuse material.
In a previous statement, X said: “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.

“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”