Musk’s AI Tutors Describe ‘Disgusting’ Content Moderation Job

Elon Musk’s xAI has designed its Grok chatbot to be deliberately provocative: a flirtatious female avatar that can strip on command, voice modes that toggle between “sexy” and “unhinged,” and an image and video generation feature with a “spicy” setting.

The workers who train xAI’s chatbot have seen firsthand what it means to carry out this vision. Business Insider spoke with more than 30 current and former workers across a variety of projects; 12 of them said they encountered sexually explicit material, including instances of users requesting AI-generated child sexual abuse material (CSAM).

Sexual material and CSAM crop up on nearly every major tech platform, but experts say xAI has made explicit content part of Grok’s DNA in ways that set it apart. Unlike OpenAI, Anthropic, and Meta, which largely block sexual requests, xAI allows much of this content, a strategy that could make it harder to prevent the chatbot from generating CSAM.

“If you don’t draw a hard line at anything unpleasant, you will have a more complex problem with more gray areas,” Riana Pfefferkorn, a tech policy researcher at Stanford University, told Business Insider.

Business Insider verified the existence of multiple written requests for CSAM from what appeared to be Grok users, including requests for short stories that depicted minors in sexually explicit situations and requests for pornographic images involving children. In some cases, Grok had produced an image or written story containing CSAM, the workers said.

Workers said they are instructed to press a button in an internal system to flag CSAM or other illegal content so it can be quarantined and kept out of the model’s training data, preventing Grok from learning to generate the restricted material. More recently, workers have been told they should also alert their manager.

Many workers, including the 12 who said they encountered NSFW content, said they signed various agreements consenting to exposure to sensitive material. The agreements covered projects geared toward adult content and general projects that involved annotating Grok’s overall image generation or text generation capabilities, as explicit content could pop up at random.

One document reviewed by Business Insider said that workers might encounter the following content: “Media content depicting pre-pubescent minors victimized in a sexual act, pornographic images and/or child exploitation; Media content depicting moment-of-death of an individual,” and written descriptions of sexual and physical abuse, hate speech, violent threats, and graphic images.

Fallon McNulty, executive director at the National Center for Missing and Exploited Children, told Business Insider that companies focused on sexual content need to take extra care when it comes to preventing CSAM on their platforms.

“If a company is creating a model that allows nudity or sexually explicit generations, that is much more nuanced than a model that has hard rules,” she said. “They have to take really strong measures so that absolutely nothing related to children can come out.”

It’s unclear whether the volume of NSFW content or CSAM increased after xAI introduced its “sexy” and “unhinged” Grok voice functions in February. Like other AI firms, xAI tries to prevent AI-generated CSAM. Business Insider was unable to determine whether xAI data annotators review more such material than their counterparts at OpenAI, Anthropic, or Meta.

Musk has previously called the removal of child sexual exploitation material his “priority #1” when discussing platform safety for X.

The team that trains Grok has had a tumultuous month. Over 500 workers were laid off; several high-level employees had their Slack accounts deactivated; and the company appears to be moving away from generalists toward more specialized hires. It’s not clear if the shifting structure of the team will change its training protocols. Musk recently posted on X that training for Grok 5 will begin “in a few weeks.”

Representatives for xAI and X, which merged with xAI this past March, did not respond to a request for comment.

‘Unhinged’ Grok and sexy avatars

xAI’s tutors review and annotate hundreds of images, videos, and audio files to improve Grok’s performance and make the chatbot’s output more realistic and humanlike. Like content moderators at platforms such as YouTube and Facebook, AI tutors often see the worst of the internet.

“You have to have thick skin to work here, and even then it doesn’t feel good,” a former worker said. They said they quit this year over concerns about the amount of CSAM they encountered.

Some tutors told Business Insider that NSFW content has been difficult to avoid on the job, whether their tasks involve annotating images, short stories, or audio. Projects originally intended to improve Grok’s tone and realism were at times overtaken by user demand for sexually explicit content, they said.

xAI has asked for workers willing to read semi-pornographic scripts, three people said. The company has also sought out people with expertise in porn or a willingness to work with adult content, five people said.

Shortly after the February release of Grok’s voice function — which includes “sexy” and “unhinged” versions — workers began transcribing the chatbot’s conversations with real-life users, some of which are explicit in nature, as part of a program internally referred to as “Project Rabbit,” workers said.

Hundreds of tutors were brought into Project Rabbit. The project was paused this spring, then revived with the release of Grok companions, including a highly sexualized character named “Ani,” and a Grok app for some Tesla owners. It appeared to end for good in August, two people said.

Workers with knowledge of the project said it was initially intended to improve the chatbot’s voice capabilities, but the volume of sexual or vulgar requests quickly turned it into an NSFW project.

“It was supposed to be a project geared toward teaching Grok how to carry on an adult conversation,” one of the workers said. “Those conversations can be sexual, but they’re not designed to be solely sexual.”

“I listened to some pretty disturbing things. It was basically audio porn. Some of the things people asked for were things I wouldn’t even feel comfortable putting in Google,” said a former employee who worked on Project Rabbit.

“It made me feel like I was eavesdropping,” they added, “like people clearly didn’t understand that there’s people on the other end listening to these things.”

Project Rabbit was split into two teams called “Rabbit” and “Fluffy.” The latter was designed to be more child-friendly and teach Grok how to communicate with children, two workers said. Musk has said the company plans to release a child-friendly AI companion.

Another worker who was assigned to an image-based initiative called “Project Aurora” said the overall content, particularly some of the images they had to review, made them feel “disgusting.”

Two former workers said the company held a meeting about the number of requests for CSAM in the image training project. During the meeting, xAI told tutors the requests were coming from real-life Grok users, the workers said.

“It actually made me sick,” one former worker said. “Holy shit, that’s a lot of people looking for that kind of thing.”

Employees can opt out of any project or skip an inappropriate image or clip, and one former worker said higher-ups had assured staff that workers would not be penalized for avoiding a project.

Earlier this year, several hundred employees opted out of “Project Skippy,” which required them to record videos of themselves and grant the company the right to use their likeness, according to screenshots reviewed by Business Insider.

Still, before the mass opt-outs of Project Skippy, six workers said that declining to participate in projects could be difficult. They said it required them to reject assignments from their team lead, which they worried could result in termination.

Four other former workers said the company’s human resources team narrowed the flexibility for opting out in an announcement on Slack earlier this year.

‘They should be very cautious’

The AI boom has brought regulators an uptick in reports of AI-generated child sexual abuse material, a growing problem across the industry. Lawmakers are still working out how to address the range of AI-generated content, from purely fictional material to real images of children altered with AI, said Pfefferkorn, the Stanford researcher.

In an ongoing class-action complaint against Scale AI, which provides training and data annotation services to major tech firms like Alphabet and Meta, workers accused the company of violating federal worker safety laws by subjecting contractors to distressing content. In 2023, Time reported that OpenAI was using data annotators in Kenya to review content that included depictions of violent acts and CSAM. Spokespeople for OpenAI and Meta said the companies don’t allow content that harms children on their platforms.

Many AI companies have safety teams that perform “red teaming,” a process of pushing AI models to their limits to guard against malicious actors who might prompt the chatbots to generate illegal content, from bomb-making guides to pornographic content involving minors. In April, xAI posted several roles that involved red teaming.

Allowing an AI model to train off illegal material would be risky, Dani Pinter, senior vice president and director of the Law Center for the National Center on Sexual Exploitation, told Business Insider. “For training reasons alone, they should be very cautious about letting that type of content in their machine learning portal,” Pinter said, adding that it’s important the chatbots are trained not to spit back CSAM in response to user requests.

“The drum we’re beating right now is, it’s time to practice corporate responsibility and implementing safety with innovation,” Pinter said. “Companies can’t be recklessly innovating without safety, especially with tools that can involve children.”

NCMEC said in a blog post published in early September that it began tracking reports of AI-generated CSAM from social media sites in 2023 and saw a surge in reports from AI companies last year. Companies are strongly encouraged to report these requests to the organization, even if the content doesn’t depict real children. The Department of Justice has already begun pursuing cases involving AI-generated CSAM.

In 2024, OpenAI reported more than 32,000 instances of CSAM to NCMEC, and Anthropic reported 971.

Spokespeople for Anthropic and OpenAI told Business Insider that the companies don’t allow CSAM and have strict policies in place to prevent it.

xAI did not file any reports in 2024, according to the organization. NCMEC told Business Insider that it has not received any reports from xAI so far this year, though it has received reports of potentially AI-generated CSAM from X Corp.

NCMEC said it received about 67,000 reports involving generative AI in 2024, compared with 4,700 the year before. In the September blog post, the organization said it had received 440,419 reports of AI-generated CSAM as of June 30 of this year, compared with 5,976 during the same period in 2024.

Do you work for xAI or have a tip? Contact this reporter via email at gkay@businessinsider.com or Signal at 248-894-6012. Use a personal email address, a nonwork device, and nonwork WiFi; here’s our guide to sharing information securely.
