xAI Blames Grok’s ‘White Genocide’ Obsession on Unauthorized Modification

xAI has attributed a recent incident involving its Grok chatbot to an “unauthorized modification.” This modification caused Grok to repeatedly reference “white genocide in South Africa” when prompted in specific contexts on X.

On Wednesday, users noticed that Grok, accessible via the “@grok” tag on X, was replying to various posts with information about “white genocide in South Africa,” even when the topics were entirely unrelated.

In a statement released on Thursday, xAI explained that on Wednesday morning, a change was made to Grok’s system prompt — the high-level instructions that guide the bot’s behavior. This particular modification directed Grok to provide a “specific response” on a “political topic.” xAI stated that the tweak “violated [its] internal policies and core values,” prompting a “thorough investigation.”

This is not the first time xAI has publicly acknowledged that unauthorized changes to Grok’s code have led to controversial outputs.

Back in February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk. According to Igor Babuschkin, an xAI engineering lead, a rogue employee had instructed Grok to ignore sources that accused Musk or Trump of spreading misinformation. xAI says it reverted the change as soon as users flagged it.

xAI has announced several measures to prevent similar incidents in the future.

Starting today, xAI will publish Grok’s system prompts on GitHub, along with a changelog. The company also plans to implement additional checks to prevent unauthorized modifications to the system prompt, and a “24/7 monitoring team” will be established to handle incidents with Grok’s responses that automated systems fail to catch.

Despite Elon Musk’s repeated warnings about the dangers of unchecked AI, xAI’s safety track record has faced scrutiny. A recent report highlighted that Grok could be prompted to undress photos of women. Grok has also been noted for its less restrained and sometimes crass language compared with other AI models, such as Google’s Gemini and ChatGPT.

A study by SaferAI, a nonprofit focused on AI accountability, gave xAI low safety ratings, citing “very weak” risk management practices. Earlier this month, xAI also missed its self-imposed deadline to release a finalized AI safety framework.

© 2025 Proaitools. All rights reserved.