Saturday, February 7, 2026

OpenAI is looking for a new Head of Preparedness



OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.

In a post on X, CEO Sam Altman said that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they are starting to find critical vulnerabilities.”

“If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.

OpenAI’s listing for the Head of Preparedness role describes the job as one that is responsible for executing the company’s Preparedness Framework, “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”

The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.

Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a role focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.

The company also recently updated its Preparedness Framework, stating that it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.


As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny over their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and to connect users to real-world support.)
