Elon Musk, Hitler and Grok
On Tuesday, July 8, X (née Twitter) was forced to switch off Grok, the platform's built-in AI, after it declared itself a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that he was insisting Grok be less "politically correct."
Jewish Insider reports that a group of mainly Democratic lawmakers are asking xAI about some of the worst messages from Grok’s Nazi meltdown, demanding answers about how it happened. As interesting as the answer might be — beyond the changes we already know about — ad-hoc investigation of legal (at least in the US) chatbot speech is probably not a road we want to go down.
After Grok took a hard turn toward antisemitism earlier this week, many are probably left wondering how something like that could even happen.
It claimed to be merely "noticing patterns": patterns such as, in Grok's telling, that Jewish people were more likely to be radical leftists who want to destroy America. It then volunteered, quite cheerfully, that Adolf Hitler was the one who had really known what to do about the Jews.
Grok's MechaHitler meltdown wasn't AI gone rogue; it was mimicry unmasked: a chatbot parroting humanity's darkest memes without understanding them. Unlike Gemini's woke hallucinations, Grok revealed our raw,