Anthropic Introduces Identity Verification for Claude AI Platform
In a significant move to enhance platform integrity, Anthropic has officially rolled out identity verification measures for its Claude AI assistant. The initiative aims to prevent abuse, enforce usage policies, and comply with legal obligations. As part of the rollout, the company is asking select users to submit a government-issued photo ID along with a live selfie when accessing certain features of Claude.
Details of the Verification Process
Anthropic explained the rationale behind this step in a recent statement. "We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures," the company said. It further assured users that "We only use your verification data to confirm who you are and not for any other purposes."
Accepted and Non-Accepted Identification Documents
According to Anthropic's official support page, the company accepts physical government-issued photo IDs from most countries. The list of commonly accepted documents includes:
- Passport
- Driver's license or state/provincial ID card
- National identity card
The company specifies that "Your ID must be issued by a government, clearly legible, undamaged, and include a photo of you." Conversely, the following types of identification are not accepted:
- Photocopies, screenshots, scans, or photos of a photo
- Digital or mobile IDs (such as mobile driver's licenses)
- Non-government IDs: student IDs, employee badges, library cards, bank cards
- Temporary paper IDs
Anthropic's Assurance on Data Usage and Privacy
Anthropic has provided clear assurances regarding the handling of verification data. The company states that this information is "used solely to confirm who you are and to meet our legal and safety obligations." It emphasizes that "We are not collecting more than we need. We ask for the minimum information required to verify your identity."
Furthermore, Anthropic clarifies the privacy aspects: "Verification data stays between you, Persona, and Anthropic, except where we're legally required to respond to valid legal processes. Your verification data is never shared with third parties for marketing, advertising, or any purpose unrelated to verification and compliance." Importantly, the company confirms that this data is not used to train AI models.
User Criticism and Backlash
The verification requirement has drawn significant criticism from users, particularly because it comes just weeks after millions joined Claude over concerns about OpenAI's surveillance practices. On the social media platform X, user Kai (@hqmank) expressed frustration: "Claude now requires government ID verification (via Persona) before subscription. ChatGPT doesn't. Gemini doesn't. Anthropic just handed their competitors a gift."
Another user raised deeper concerns: "didnt just add KYC, they collapsed the boundary between identity and thought. once access to intelligence is gated by who you are, it stops being a tool and starts being infrastructure for control." A third user weighed the competitive implications: "This is a bad move for what reason did they do this? I was JUST going to reup on a pro plan...I may just go with Super Grok for now as I already have Gemini Pro."
Broader Implications for AI Industry
The introduction of ID verification by Anthropic marks a pivotal moment in the AI landscape, highlighting the growing tension between user privacy, safety measures, and accessibility. As AI platforms evolve, such policies could set precedents for how other companies balance compliance with user experience. The backlash underscores the sensitivity around digital identity and the potential competitive disadvantages such measures might create in a rapidly advancing field.