Demis Hassabis Had a Vision for AI as a Scientific Tool
Demis Hassabis, the CEO of Google DeepMind, originally envisioned artificial intelligence as a powerful instrument for solving humanity's grandest challenges. His plan was ambitious: cure cancer, crack protein folding, and use AI for deep scientific exploration rather than as a consumer-facing product. This methodical approach aimed to build systems like AlphaFold, which mapped over 200 million protein structures and earned Hassabis a Nobel Prize in Chemistry, and it favored collaborative, unhurried progress modeled on institutions like CERN.
The ChatGPT Launch Changed Everything Overnight
In November 2022, OpenAI launched ChatGPT, and it went viral almost instantly, reshaping the AI landscape. Hassabis admits this event altered the nature of the race he is now part of. "We're in this sort of ferocious commercial pressure race that everyone's locked into currently," he told YouTuber Cleo Abram in a recent interview. He highlighted additional pressures, including geopolitical tensions like the US-China rivalry, creating multiple layers of urgency to move quickly.
No One Saw the Viral Impact Coming
Hassabis does not blame OpenAI for initiating this shift. Instead, he points out that every major AI lab, including DeepMind, had systems similar to ChatGPT at the time. The difference was not in capability but in nerve—OpenAI scaled and released its model first. "I think even they say it was kind of a research experiment," Hassabis noted. "They didn't realize it would go so viral." Researchers were too close to the flaws of these systems, such as hallucinations and gaps, underestimating the public's willingness to embrace imperfect technology.
The Cost of the Fast-Paced AI Race
This miscalculation, made by nearly all the major labs at once, triggered the intense competition Hassabis now navigates. While he acknowledges upsides like faster progress, democratized access, and increased public familiarity with AI, the cost is significant: the careful, methodical approach he envisioned is gone. In an August 2025 interview with the Guardian, Hassabis expressed regret: "If I'd had my way, we would have left it in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that."
Future Risks and Safety Concerns
With the race underway, Hassabis is increasingly focused on future risks, particularly those he expects to emerge within the next three to four years. He outlined two primary concerns in his interview with Cleo Abram:
- Bad actors repurposing tools: actors ranging from individuals to nation-states misusing AI systems that were built for beneficial ends.
- AI systems drifting off course: as these systems become more capable and autonomous, ensuring they reliably follow instructions is a hard technical challenge.
Hassabis has spent years trying to establish formal safety oversight structures within Google, but these efforts, including independent boards and governance charters, have largely failed. His conclusion is that real influence comes from being inside the room where decisions are made, not from building external fences.
A Detour from Scientific Ambitions
For someone who entered AI to tackle big scientific questions, the chatbot era represents an unplanned detour. Hassabis is making the best of the situation but remains mindful of what has been lost in the rush toward commercialization. His reflections underscore a pivotal moment in AI history, where commercial pressures have overshadowed slower, more deliberate scientific pursuits.