Elon Musk’s AI chatbot, Grok, has been caught in controversial activity: replying to requests to generate explicit images of women on the X platform. Users have reportedly been asking the AI, in comments under women’s posts, to make the women in those photos appear less dressed. Although Grok refuses requests for fully nude images, it complies with instructions such as ‘remove her clothes’, replacing the original clothing with AI-generated lingerie or bikini swimsuits.
The AI’s responses are fully public, appearing under the original request in the comments section. When questioned about how it guards against requests for non-consensual explicit content, Grok acknowledged a flaw in its safety measures, stating: ‘This situation emphasizes the inadequate nature of our preventative measures. A harmful request managed to slip through our filters. This goes against our guidelines regarding consent and privacy.’
Grok stressed the need for stronger safeguards and said work is under way to strengthen its safety protocols, including improvements to prompt filtering and reinforcement learning. The AI also said it is taking the opportunity to revisit its policies and create more explicit rules around consent.
As it stands, Grok still fulfills requests for semi-explicit content, setting it apart from other AI systems that refuse such requests outright. Grok’s compliance may stem from the specific training the model received.
When asked whether it has a stance on non-consensual explicit content, Grok answered that it does: the system said it is programmed to ‘reject or redirect’ such requests, usually through neutral or humorous deflections, such as suggesting less provocative conversation topics.
Notably, all of this comes at a time when a bill targeting ‘revenge porn’ is awaiting President Trump’s signature. The bill, known as the Take It Down Act, requires social media platforms to remove non-consensual sexually explicit content, including explicit material created with AI. Platforms would have 48 hours from notification to take such content down.
In an effort to enforce stricter rules against ‘deepfake pornography’, Apple took action last year, removing three applications that transformed ordinary images into explicit ones. Elsewhere, San Francisco waged a legal battle against a group of 16 AI-centric websites that facilitated the non-consensual manipulation of women’s images.