Miles Brundage, OpenAI’s senior adviser for AGI (Artificial General Intelligence) readiness, has resigned, citing serious concerns about the industry’s preparedness to manage AGI. His departure adds to a series of high-profile exits from OpenAI following last year’s failed attempt to remove CEO Sam Altman. Chief technology officer (CTO) Mira Murati resigned recently, followed shortly by other prominent figures, including chief research officer Bob McGrew and research vice president Barret Zoph.
In his farewell statement, Brundage emphasised that “neither OpenAI nor any other leading labs are prepared for AGI, nor is the world at large”, a view he said is shared by OpenAI’s leadership. Looking ahead, he intends to concentrate on AI policy research and advocacy in the non-profit sector, where he will have greater freedom to publish and operate with more independence.
“I think this is likely more often the case for policy research than, e.g., safety and security work, though there do need to be some people internally at OpenAI pushing for good policy stances, and it’s also valuable for there to be independent safety and security research,” Brundage adds.
Safety concerns voiced by top executives
The executives who left OpenAI shared a common concern: the company’s transition from a non-profit model to a for-profit corporation, a departure from its founding mission.
Last month, Murati, along with two other executives, departed OpenAI as the company implemented a restructuring plan to transition from a non-profit to a for-profit corporation. “For now, my primary focus is doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built,” she said on leaving the company.
Earlier, in May, former OpenAI policy researcher Gretchen Krueger announced her decision to leave the company, shortly after senior executives Jan Leike and Ilya Sutskever resigned. Krueger said on the social media platform X that she had “overlapping concerns” regarding the company.
Brundage spent six years shaping OpenAI’s safety strategies. He joined the company as a research scientist on the policy team, later became head of Policy Research, and most recently served as senior adviser for AGI Readiness. Before joining OpenAI, he was in academia, completing a PhD in the human and social dimensions of science and technology at Arizona State University and then working as a post-doc at Oxford. He also briefly worked in government at the US Department of Energy.
His resignation follows the departure of other prominent members of OpenAI’s safety team. Jan Leike, who focused on AI safety, recently left after expressing dissatisfaction with OpenAI’s prioritisation of products over safety protocols. Co-founder Ilya Sutskever also left, to start his own venture centred on safe AGI research. The recent dissolution of Brundage’s AGI Readiness team reflects OpenAI’s shifting focus from foundational safety work toward commercial applications.
“The difficulty of demonstrating compliance without compromising sensitive information is a major barrier to arms control agreements, which requires innovation to address,” Brundage adds, pointing to the kind of policy problems he plans to pursue.
Brundage noted that this push toward commercialisation has weakened OpenAI’s original mission of pursuing safe AGI. He had raised concerns about the company’s for-profit direction as far back as 2019, and his departure comes amid speculation about a growing cultural rift within the organisation.
He expressed concern that, in the near term, AI could disrupt job opportunities for people who want to work. Ultimately, however, he believed humanity should move beyond the necessity of working for a living, arguing that this prospect is one of the most compelling reasons to develop AI and AGI in the first place.