Google DeepMind CEO Demis Hassabis Issues Stark Warning on AI's Dual Dangers
In an address at the India AI Impact Summit, Demis Hassabis, co-founder and CEO of Google DeepMind, sounded the alarm on two critical risks posed by artificial intelligence. The tech leader emphasized that the rapid advancement of AI poses urgent threats that demand immediate global attention and action.
Urgent AI Risks: Weaponization and Autonomous Systems
Hassabis pinpointed two primary concerns during a Bloomberg Television interview. First, he warned of bad actors weaponizing beneficial AI technologies, potentially turning tools designed for good into instruments of harm. Second, he highlighted the risk of autonomous systems performing actions their designers never intended, as AI becomes more independent and agent-like.
"As the systems become more autonomous, more independent, they'll be more useful, more agent-like but they'll also have more potential for risk and doing things that maybe we didn't intend when we designed them," Hassabis explained. This growing autonomy, while enhancing utility, also escalates the potential for unforeseen consequences.
Call for International Cooperation and Standards
Stressing the cross-border nature of AI, Hassabis called for robust international cooperation to establish minimum standards before existing institutions are overwhelmed. He noted that AI's digital essence means it affects everyone globally, transcending national boundaries.
"It's digital, so it means it's going to affect everyone in the world, probably, and it's going to cross borders," he said. Hassabis advocated for forums that bring together policymakers and technologists, stating, "There has to be some element of international cooperation, or maybe at least minimum standards around how these technologies should be deployed."
AGI Remains Elusive Despite Progress
Turning to artificial general intelligence, Hassabis said that AGI, a prospect OpenAI CEO Sam Altman has said he is "very excited" about, remains out of reach. Tempering that ambition, he cited three key limitations in current AI systems: the absence of continual learning, a lack of long-term reasoning, and inconsistent performance.
He elaborated: "What you'd like is for those systems to continually learn online from experience, to learn from the context they're in, maybe personalize to the situation and the tasks that you have for them." On reasoning, he added, "They can plan over the short term, but over the longer term, the way that we can plan over years, they don't really have that capability at the moment."
Regarding inconsistency, Hassabis noted, "Today's systems can get gold medals in the international Math Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way. A true general intelligence system shouldn't have that kind of jaggedness."
Despite these gaps, Hassabis predicted in a 2024 interview that true AGI could arrive within five to ten years, a cautiously optimistic timeline for this transformative technology.