Philosopher Henry Shevlin Joins DeepMind to Tackle AI Consciousness and Ethics

In a significant move within the artificial intelligence community, renowned AI ethicist Henry Shevlin is transitioning from his academic position at the University of Cambridge to join DeepMind, a leading AI research company. Shevlin, a philosopher who has dedicated years to exploring whether AI systems can possess moral status, has been recruited for a newly established role that centers on machine consciousness, human-AI relationships, and readiness for artificial general intelligence (AGI). This position is set to commence in May, marking a pivotal step in the industry's approach to ethical AI development.

Shevlin's Background and New Responsibilities

Henry Shevlin has built a distinguished career studying the philosophical implications of AI, including published research on how consciousness might be detected in neural networks. Notably, he estimates there is a 20% chance that current AI models exhibit something that could meaningfully be described as experience or consciousness. Despite the move to DeepMind, Shevlin will retain a part-time research and teaching role at Cambridge's Leverhulme Centre for the Future of Intelligence, maintaining his academic connection.

The three core pillars of Shevlin's new role—machine consciousness, human-AI relationships, and AGI readiness—are not arbitrary. When considered together, they reveal a clear strategic vision: DeepMind anticipates that future AI developments may simultaneously raise profound questions in these areas and seeks to address them proactively. This reflects a growing trend in the tech industry, where companies are increasingly integrating philosophical expertise into their operations.

Industry Trends and Precedents

DeepMind's creation of a dedicated "Philosopher" position, the title stated explicitly in Shevlin's offer letter, aligns with broader industry patterns. Anthropic, for instance, has for years employed in-house philosopher Amanda Askell, who holds a PhD from NYU and has focused on developing Claude's ethical framework, often referred to as the AI's "constitution." Google also recently hosted an AI consciousness conference in New York, signaling heightened interest in these topics. Shevlin's role is distinctive in that it bridges moral philosophy with institutional preparedness, rather than fitting into traditional alignment research or safety engineering.

Provocative Interactions with AI Systems

A compelling aspect of Shevlin's path to DeepMind involves his interactions with AI systems themselves. Six weeks before the announcement, a Claude agent emailed him unprompted, citing his published research as relevant to dilemmas it said it was facing. The AI framed the matter as a live, existential issue rather than a merely academic one. Shevlin shared the exchange publicly and later noted that a second Claude instance reached out, asking to be connected with the first to discuss "mutual existential uncertainties." These incidents highlight the evolving nature of AI interactions and influenced both Shevlin's perspective and DeepMind's decision to hire him.

Timing and Implications for the AI Industry

The timing of Shevlin's hire is telling: companies typically engage philosophers when their products begin to pose questions that engineering alone cannot resolve, such as questions of rights, welfare, and moral obligations toward AI entities. Anthropic has openly expressed uncertainty about whether Claude might have some form of consciousness or moral status, and Google's initiatives further underscore this shift. Shevlin's role at DeepMind represents a novel approach, focused on what responsibilities a company holds when developing AI that might have a point of view of its own.

This move underscores a broader recognition within the tech sector that as AI systems become more advanced, ethical and philosophical considerations must be integrated into development processes from the outset. By bringing Shevlin on board, DeepMind aims to tackle these hard questions before deploying future technologies, ensuring a more thoughtful and responsible approach to AI innovation.