In the glittering, high-stakes world of artificial intelligence, where OpenAI's ChatGPT often grabs the headlines, a different kind of AI lab is quietly scripting an unprecedented commercial success story. Anthropic, the company founded by a group of "do-gooders" with a missionary zeal for AI safety, is emerging as a dark horse powerhouse, defying its critics and posting staggering financial growth.
From Safety Crusaders to Business Juggernaut
Founded by siblings Dario and Daniela Amodei and others who left OpenAI in 2021 over safety concerns, Anthropic has long been a target for snark in profit-driven Silicon Valley. Its in-house philosopher, its safety-first ethos, and its chatbot named Claude painted a picture of a lab more concerned with ethics than earnings. This image attracted sharp criticism from powerful figures like Nvidia's CEO Jensen Huang, who bluntly disagreed with Dario Amodei's warnings about AI-induced job losses, and venture capitalist David Sacks, who derided the firm as part of a "doomer industrial complex."
Yet, while Amodei laments a political shift against safety in Washington, his company is achieving feats that leave even him in awe. In an interview, Amodei revealed that Anthropic's annualised recurring revenue, which grew roughly tenfold in 2024 to reach $1 billion, is now "substantially beyond $4 billion." He suggests the company could be "on pace for another 10x" growth in 2025, a trajectory he calls unprecedented "in the history of capitalism."
Claude's Quiet Conquest of the Enterprise
Anthropic's secret weapon has been a relentless focus on the business-to-business (B2B) market. While ChatGPT captivated consumers, Anthropic muscled in on OpenAI's enterprise turf. Today, B2B accounts for 80% of Anthropic's revenue, and data indicates it leads in providing companies access to AI models via APIs.
Its latest model, Claude 4, has become a hit among software developers. Fast-growing coding startups like Cursor and programmers in established firms have embraced it for its ability to operate autonomously and use other computer programs, effectively outsourcing complex work. Anthropic sees these tech-savvy users as early adopters who will open doors to wider corporate adoption.
Ironically, this commercial success stems directly from its safety principles. Early on, Anthropic decided against building potentially addictive entertainment products, focusing instead on work tools. This aligned perfectly with what businesses want: trustworthy, reliable, and interpretable AI. Companies value Anthropic's efforts to understand why models fail, turning an ethical stance into a competitive advantage.
The High Cost of Principles and Power
This breakneck growth comes with immense challenges, primarily the colossal cost of training AI models. Like its peers, Anthropic is burning through cash, necessitating constant fundraising. The company is reportedly in talks for a new round, with speculation that Amazon may increase its stake and that some VCs are willing to invest at a valuation as high as $100 billion, up from $61.5 billion in March 2025.
This dash for cash creates stark paradoxes. In a leaked Slack message from July, Amodei agonized over seeking investment from Gulf states, wrestling with the principle that "no bad person should ever profit from our success." He later told The Economist that while he has security concerns about US data centres in the Gulf, his scruples about regional investment have eased out of necessity, calling them "big sources of capital."
Amodei's convictions, however, remain firm. He believes AI can cure "diseases that have been intractable for millennia" but insists society must manage costs like job losses. He argues democratic control of AI, like in the US, is safer than autocratic control, calling the Trump administration's relaxation of AI chip exports to China—a move lobbied for by Nvidia's Huang—an "enormous geopolitical mistake."
For investors like Ravi Mhatre of Lightspeed Venture Partners, Anthropic's safety focus is a long-term bet that will pay dividends when AI models eventually go awry. "We just haven't had the 'oh shit' moment yet," he notes. As Anthropic balances its missionary zeal with material needs, it is proving that in the cutthroat AI race, a conscience might just be a unique and highly valuable asset.