Google Sets the Record Straight on Gmail AI Training Rumors
Google has officially responded to widespread concerns that emerged this week over the alleged use of Gmail user data to train its Gemini artificial intelligence model. The tech giant issued a clear denial after viral reports caused significant confusion among users worldwide.
The controversy began when cybersecurity firm Malwarebytes published a report suggesting that Google had changed its settings to automatically train AI on user emails. The report, which quickly gained traction across social media platforms, advised users to disable "Smart features and personalization" in their Gmail settings to prevent this alleged data usage.
Google's Official Clarification
In a direct response posted on social media platform X, Google addressed what it called "misleading reports" with a straightforward message. "We have not changed anyone's settings. Gmail Smart Features have existed for many years. We do not use your Gmail content to train our Gemini AI model," the company stated emphatically.
The company emphasized its commitment to transparency, adding: "We are always transparent and clear if we make changes to our terms & policies." The clarification came after numerous users expressed anger at the possibility of their personal email content being used for AI training without explicit permission.
Malwarebytes Updates Its Report
Following Google's public statement, Malwarebytes revised its original report to correct the record. The cybersecurity company acknowledged that its initial interpretation was incorrect and attributed the confusion to Google's recent rewording of its settings descriptions.
"The settings themselves aren't new, but the way Google recently rewrote and surfaced them led a lot of people (including us) to believe Gmail content might be used to train Google's AI models," Malwarebytes explained in their updated report published on November 23, 2025.
This incident marks the second time this year that Google has had to address false reports concerning Gmail. Back in September 2025, the company similarly denied claims that 2.5 billion Gmail users had been compromised during a data leak, calling those reports completely unfounded.
Where Google Actually Uses Your Data for AI Training
While Google has clarified that Gmail content remains off-limits for Gemini training, the company does use other types of user interactions to improve its AI capabilities. Conversations users have with the Gemini AI chatbot are used for training by default unless the user disables this.
Users concerned about this practice can opt out by turning off the "Gemini Apps Activity" setting in their Google account. The practice isn't unique to Google: other major AI companies, including Anthropic and Meta, have similar policies regarding training on user interactions with their AI assistants.
As AI technology continues to evolve, understanding these settings becomes increasingly important for users who want to maintain control over how their data contributes to artificial intelligence development.