AI Expert Warns: World May Be Running Out of Time to Control AI Risks
ARIA programme director David Dalrymple cautions that AI safety science may not arrive in time and calls for urgent control measures

In a stark warning that has sent ripples through the global tech community, a leading artificial intelligence safety expert has stated that humanity may be running out of time to prepare for the profound risks posed by cutting-edge AI systems. David Dalrymple, a programme director and AI safety expert at the UK government's Advanced Research and Invention Agency (ARIA), expressed deep concern over the breakneck pace of AI development, suggesting the world is unprepared for the potential consequences.

The Race Against Time: AI Development Outpacing Safety

Speaking to The Guardian, Dalrymple emphasized that the development of artificial intelligence is moving "really fast." He cautioned against assuming these advanced systems are reliable, as the scientific framework to ensure their safety is unlikely to materialize quickly enough due to intense economic pressures. "I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective," Dalrymple stated in the interview published on 4 January 2026.

He painted a near-future scenario that is far from science fiction, projecting that within five years, most economically valuable tasks will be performed by machines at a higher quality and lower cost than by humans. This, he argues, leads to a fundamental threat: "We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet."

Consequences of Unchecked Progress: Destabilisation and Sleepwalking

The potential fallout from this unchecked technological sprint is severe. Dalrymple explicitly warned that allowing AI's progress to outpace safety measures could lead to the "destabilisation of security and economy." He develops systems to safeguard the use of AI in critical infrastructure such as energy networks, giving him a front-row seat to the high stakes involved.

Perhaps his most evocative critique is the notion that human civilization is "sleepwalking" into this high-risk transition. "I am working to try to make things go better but it's very high risk and human civilisation is on the whole sleepwalking into this transition," he said. While progress could be positive, the current lack of coordinated preparedness makes the path dangerously uncertain.

The 2026 Tipping Point and the Call for Control

Dalrymple issued a particularly specific and alarming forecast: by late 2026, AI systems could automate a full day of research and development work. Reaching this milestone would trigger "a further acceleration of capabilities" as the technology begins to improve itself, particularly in the mathematics and computer science underlying its own development.

Given the urgency, Dalrymple advocates a pragmatic shift in strategy. Since the science of perfect reliability may not arrive in time, the immediate focus must be on control and mitigation. "The next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides," he advised. This means urgent technical work to understand and govern the behavior of advanced AI systems before they outstrip human oversight in critical domains.

The expert's warning serves as a clarion call for governments, researchers, and industry leaders to prioritize AI safety and control frameworks with the same intensity currently driving innovation. The message is clear: the time to act is now, before the window of opportunity closes.