In 2023, Elon Musk launched Grok, an AI chatbot on X marketed as providing “unfiltered answers”. It was reportedly created in part to counter rival chatbots that Musk saw as being trained to be “politically correct”.
Fast forward to 2025 and Grok is no stranger to controversy – sharing antisemitic content and white genocide conspiracy theories, and referring to itself as MechaHitler. One X user, Will Stancil, has even been the subject of extreme, violent, and individually tailored assault fantasies created by Grok, as he tells Nosheen Iqbal.
“It’s alarming and you don’t feel completely safe when you see this sort of thing,” he says.
The tech reporter Chris Stokel-Walker explains that Grok is a large language model (LLM) trained on content created by X users, and that despite the numerous controversies and apologies from its parent company, xAI, the firm has recently secured a contract with the US Department of Defense. He also discusses the difficulty of regulating Grok, especially when some politicians feel comfortable with the content it generates.
Support the Guardian today: theguardian.com/todayinfocuspod
