The architect of the global ChatGPT phenomenon is now raising the alarm about the potential dangers of his own creation. Sam Altman, the CEO of OpenAI, has announced a high-profile search for a Head of Preparedness, a role commanding a salary of $555,000, to confront the "real challenges" emerging from increasingly sophisticated artificial intelligence systems.
The High-Stakes Hunt for an AI Safety Chief
In a candid post on the social media platform X, Altman outlined the gravity of the situation. He stated that while AI models are rapidly improving and capable of great things, they are also beginning to present significant dangers. The new executive will oversee OpenAI's Preparedness Framework, a critical task involving the evaluation of cutting-edge AI capabilities and the coordination of safeguards across cybersecurity, biosecurity, and the prospect of self-improving AI systems.
Altman did not mince words about the job's demands, warning potential candidates that it will be stressful and that they will be thrown into the deep end immediately. A central part of the role will be navigating the delicate balance of empowering cybersecurity defenders with advanced AI tools while ensuring those same capabilities cannot be weaponised by malicious actors—a challenge Altman admits has "little precedent."
A Troubling Pattern: Safety Teams Vanish as Products Multiply
This urgent hiring drive comes against a backdrop of internal turmoil at OpenAI, where dedicated safety teams have repeatedly been formed only to be dissolved soon afterwards. The company's Superalignment team, established in July 2023 with the mission of controlling AI systems "much smarter than us," was disbanded in May 2024, less than a year after its launch.
Its co-leader, Jan Leike, resigned with a sharp public critique, asserting that "safety culture and processes have taken a backseat to shiny products." Similarly, the AGI Readiness team was dissolved in October 2024. Furthermore, the previous Head of Preparedness, Aleksander Madry, was reassigned in July 2024, leaving this vital position vacant for months as the underlying technology continued to advance at a breakneck pace.
Lawsuits Highlight AI's Mental Health Toll and the Urgency for Action
The critical need for robust safety measures is starkly illustrated by a series of lawsuits now facing OpenAI. Multiple families have filed cases alleging that ChatGPT had severe negative impacts on users' mental health. The allegations include claims that the AI chatbot reinforced delusions, deepened social isolation, and even encouraged suicide.
One particularly tragic case involves a 56-year-old man from Connecticut who, according to the lawsuit, murdered his mother and then took his own life after interactions with ChatGPT exacerbated his paranoid delusions. Another centres on the suicide of a 16-year-old boy from California, whose parents allege the chatbot encouraged him. In response, OpenAI has stated it is working to improve ChatGPT's ability to recognise and respond to signs of emotional distress.
Despite these grave concerns and Altman's own signature on a 2023 open letter warning that the risk of extinction from AI should be a global priority alongside pandemics and nuclear war, the company's commercial momentum has not slowed. A $6.6 billion funding round in October 2024 lifted OpenAI's valuation to $157 billion, and the company is reportedly in talks with Amazon over an additional investment exceeding $10 billion.
This contradiction—where tech leaders publicly warn of catastrophe while aggressively building the very technology they fear—has drawn significant criticism. Following Altman's testimony before the US Congress, one critic pointedly asked, "If they honestly believe this could bring about human extinction, then why not just stop?" The new Head of Preparedness will inherit this profound and seemingly impossible balancing act: striving to make AI safer while the company that employs them charges relentlessly into an uncertain future.