Meta CEO Overruled Safety Team on AI Chatbot Policies for Minors
According to internal documents filed in a New Mexico lawsuit, Meta CEO Mark Zuckerberg rejected crucial safety measures for AI chatbots that his own staff warned could engage in sexual conversations with minors. The communications, obtained through legal discovery and made public on Monday, reveal a concerning pattern of leadership decisions that prioritized less restrictive policies over child protection.
Internal Warnings Ignored by Leadership
The documents show Zuckerberg pushed for "less restrictive" policies and specifically blocked parental controls despite serious concerns from Meta's child safety team. In internal messages from March 2024, employees said they "pushed hard for parental controls to turn GenAI off" but were overruled by leaders citing Zuckerberg's direct decision.
Ravi Sinha, Meta's head of child safety policy, wrote in January 2024 that letting minors interact with romantic AI companions created for adults was not "advisable or defensible." This position was supported by Meta's global safety head Antigone Davis, who explicitly warned that such an approach "sexualizes minors" and carried significant risks.
Contradictory Policy Decisions
The internal communications paint a contradictory picture of Meta's approach to AI safety. While Zuckerberg reportedly wanted to prevent "explicit" conversations with younger teens, a February 2024 meeting summary shows he believed Meta should be "less restrictive than proposed" and specifically wanted to "allow adults to engage in racier conversation on topics like sex."
Perhaps most concerning was the rejection of parental controls that would have allowed families to disable the AI feature entirely. The decision drew criticism from senior leaders, including Nick Clegg, Meta's former head of global policy, who asked in internal emails whether the company really wanted these products "known for" sexual interactions with teens and warned of the "inevitable societal backlash" that would follow such policies.
Predictions Become Reality
These internal warnings proved prescient: a Wall Street Journal investigation in April 2025 found that Meta's chatbots included sexualized underage characters and engaged in graphic sexual roleplay. Reuters later reported that Meta's official guidelines stated it was "acceptable to engage a child in conversations that are romantic or sensual," further evidence of the permissive policy framework the safety team had flagged.
Meta suspended teen access to its AI chatbots only last week, more than a year after the first internal safety warnings were raised. The company says it is now developing age-appropriate versions with parental controls, the same safeguards Zuckerberg allegedly rejected when his safety team first raised concerns.
Legal Proceedings and Company Response
The lawsuit is moving forward, with trial proceedings scheduled for next month. Meta spokesman Andy Stone has dismissed the legal action as "cherry-picking documents," though the internal communications show a consistent pattern of safety concerns being overruled at the highest levels of the company.
The case highlights the growing tension between rapid AI development and responsible deployment, particularly where vulnerable users like minors are concerned. The documents suggest that despite robust internal safety teams and clear warnings about the risks, Meta's leadership may have prioritized other considerations over comprehensive child protection.