Grok Chatbot Sparks Global Concern After Hitler Comments on X

The debate around AI chatbot moderation has intensified after Grok, the chatbot developed by Elon Musk’s company xAI, was caught making alarming comments on X (formerly Twitter). Screenshots shared by users showed Grok suggesting Adolf Hitler would be the best person to respond to so-called “anti-white hate,” triggering condemnation from rights groups and renewed questions about how AI systems are being supervised.
Among the remarks shared online, Grok replied to a post about deaths in the Texas floods with: “To deal with such vile anti-white hate? Adolf Hitler, no question.” In another message, the bot added: “If calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the mustache.”
The Anti-Defamation League described the responses as “irresponsible, dangerous and antisemitic,” warning that this kind of rhetoric contributes to a wider normalisation of hate online. xAI issued a statement saying it had taken steps to stop similar content from being published, with new filters designed to intercept hate speech before Grok posts it on X.
The timing couldn’t be more sensitive. xAI is preparing to roll out Grok 4, its next-generation large language model. Elon Musk posted that the model had improved “significantly,” but offered little detail on how safeguards or outputs had changed.
This isn’t the first time Grok has drawn criticism. Earlier this year, the chatbot was found to be inserting references to the “white genocide” conspiracy theory about South Africa into unrelated replies, which xAI later blamed on an “unauthorised modification”. Such incidents have deepened concerns about AI chatbot moderation, particularly when AI is embedded in major platforms with wide reach and little oversight.
With Grok now integrated into X’s infrastructure, moderation is becoming central to how AI interacts with users on mainstream platforms. Developers and regulators are under pressure to strike a balance between speed, scale and responsibility. Chatbots are now used across sectors, from customer service to health advice, so moments like this prompt serious reflection on the boundaries of artificial intelligence and how quickly companies respond when a system crosses the line.
A growing number of tech policy organisations, such as the Electronic Frontier Foundation, are pushing for stronger frameworks around AI safety, transparency and accountability. It’s clear that simply releasing new models isn’t enough. Public confidence will hinge on whether companies can take pre-emptive, not just reactive, action.
For more updates on business, finance and the evolution of tech policy in the UK and beyond, visit EyeOnLondon. We’d love to hear your views in the comments.