
OpenAI is intensifying its pursuit of artificial general intelligence (AGI), particularly as DeepSeek emerges as a formidable competitor in the AI landscape. The company has described advanced AI agents as the next significant milestone, but that ambition comes amid ongoing scrutiny of how it balances safety against competitive advancement. OpenAI’s approach to AI safety has recently drawn backlash, especially from former employees who question the company’s evolving narrative.
Criticism from Former Policy Lead Regarding OpenAI’s AI Safety Narrative
In an effort to engage the community, OpenAI has outlined its methodology for the gradual deployment of its AI technologies, citing the cautious rollout of GPT-2 as a precedent. This framing has not been universally embraced. Miles Brundage, a former policy researcher at OpenAI, has publicly criticized the company for what he sees as a distortion of its AI safety history.
The document released by OpenAI underscores its commitment to an iterative, deliberate approach to AI deployment and stresses learning from past practice to improve future safety measures. It cites GPT-2 as an example of the company’s earlier caution, framing that restraint as the product of a “discontinuous” view of AI progress that OpenAI has since replaced with a continuous one:
In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2. We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.
Brundage, who served as OpenAI’s head of policy research, counters that the GPT-2 rollout was itself methodical and incremental, with OpenAI sharing its findings at each stage, and that security experts at the time recognized the gradual release as responsible. In his view, that caution was essential, not overly tentative.
The bulk of this post is good + I applaud the folks who work on the substantive work it discusses.
But I’m pretty annoyed/concerned by the “AGI in many steps rather than one giant leap” section, which rewrites the history of GPT-2 in a concerning way. https://t.co/IQOdKczJeT
— Miles Brundage (@Miles_Brundage) March 5, 2025
Brundage further expressed unease over OpenAI’s assertion that AGI will develop through incremental steps rather than as a sudden breakthrough. He critiques the company for potentially misrepresenting the timelines of past releases and for redefining safety standards, which he believes could undermine critical discussions about AI risks as technology advances.
This incident marks yet another instance of OpenAI facing questions about its commitment to prioritizing long-term safety over rapid innovation and profit. Experts like Brundage continue to voice concerns regarding the importance of a measured approach to AI safety, especially as the pace of advancement increases.