Miles Brundage, former head of policy research at OpenAI, departs to pursue policy research in the nonprofit sector, amidst ongoing discussions about AI safety and regulation.
Miles Brundage, a prominent figure in artificial general intelligence (AGI) research, has decided to leave OpenAI to pursue policy research in the nonprofit sector. Brundage held a pivotal role at the company as head of policy research and AGI readiness during a tenure of more than six years.
OpenAI, a leading entity in artificial intelligence, has recently seen several high-profile departures of key safety researchers and executives. These changes have occurred amid growing concerns about the company’s approach to balancing AGI development with safety considerations. However, Brundage has made it clear that his departure was not driven by specific safety issues at OpenAI. In a conversation with the tech podcast Hard Fork, he stated, “I’m pretty confident that there’s no other lab that is totally on top of things.” He emphasised his desire to have a broader impact through policy research and advocacy, hence his move into nonprofit work.
Brundage is well-regarded for his contributions to safety innovations at OpenAI. One notable achievement under his leadership was the implementation of external red teaming, a process that enlisted outside experts to identify potential issues in OpenAI’s products, strengthening the safety and reliability of its AI systems.
Debate over the future of AGI is intensifying within the tech community, with varying opinions on when it will become a reality. Speaking on Hard Fork, Brundage projected that the industry is close to building systems capable of performing almost any task a person can carry out remotely on a computer, including operating a mouse and keyboard and even appearing as a human in a video chat.
The timeline for achieving AGI remains a hot topic of debate among industry leaders. John Schulman, an OpenAI cofounder and research scientist who also exited the company in August, echoed the sentiment that AGI is only a few years away. Similarly, Dario Amodei, CEO of OpenAI competitor Anthropic, anticipates an early form of AGI by 2026.
Brundage’s decision to move on from OpenAI is partly rooted in his aspiration to address broader, industry-wide issues and take part in discussions around regulation. He expressed a desire for greater independence, free of the perception of bias that comes with a corporate environment, stating, “I want to be independent and less biased. So I didn’t want to have my views rightly or wrongly dismissed as this is just a corporate hype guy.”
Brundage’s departure marks the continuation of significant changes within OpenAI, as it navigates the complexities of AGI research and the accompanying ethical and safety considerations. His move towards the nonprofit sector highlights his commitment to influencing the discourse on AI policy and governance, a conversation that continues to captivate the tech industry and policymakers worldwide.
Source: Noah Wire Services