Researchers Receive Funding to Identify Potential Risks in AI Systems Amidst Reports of Disbanded Internal Risk Management Units

Source: cryptonewstrend.com

OpenAI has unveiled a Safety Fellowship program, offering external researchers a weekly stipend of $3,850 to examine the potential risks of advanced AI. The announcement, made on April 6, coincided with an investigation by The New Yorker reporting that OpenAI had dismantled its internal safety teams and removed the word "safely" from the mission statement filed with the IRS. The fellowship, designed as a pilot program, aims to foster independent research in safety and alignment while nurturing the next generation of talent in the field.

The weekly stipend of $3,850 works out to more than $200,000 on an annualized basis, and fellows also receive roughly $15,000 worth of computational resources along with mentorship from OpenAI researchers. Fellows may work from Constellation's Berkeley workspace or remotely, with applications closing on May 3. Notably, the fellowship is not restricted to AI specialists: OpenAI is seeking candidates from cybersecurity, the social sciences, and human-computer interaction as well as computer science.
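For readers checking the math, a minimal sketch of the annualization, assuming a straight 52-week multiplier (our assumption; OpenAI has not stated a basis, and the fellowship itself is a fixed-term pilot):

    # Sanity check on the annualized stipend figure.
    # Assumption: simple 52-week annualization of the weekly rate.
    weekly_stipend_usd = 3_850
    annualized = weekly_stipend_usd * 52
    print(f"${annualized:,} per year")  # -> $200,200 per year, i.e. "over $200,000"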

The timing is particularly noteworthy: the announcement came on the heels of Ronan Farrow's investigative report in The New Yorker, which documented the dissolution of three consecutive internal safety teams at OpenAI over a span of 22 months. The Superalignment team was disbanded first, in May 2024, followed by the AGI Readiness team in October 2024 and the Mission Alignment team in February 2026. A remark attributed to an OpenAI representative, responding to a journalist's question about existential safety researchers, underscored the company's apparent shift in priorities: "What do you mean by existential safety? That's not, like, a thing."

The Safety Fellowship is not intended to replace the internal infrastructure that was dissolved. Fellows will receive API credits and computational resources but no direct access to OpenAI's systems, positioning the program as a form of arm's-length research funding. The research agenda spans seven areas, including safety evaluation, ethics, robustness, and scalable mitigations. By the program's conclusion in February 2027, each fellow is expected to produce a substantial output such as a research paper, benchmark, or dataset. OpenAI has emphasized that it prioritizes research ability, technical judgment, and execution capacity over specific academic credentials.

The implications extend beyond the AI industry, with significant bearing on the broader market. Confidence in frontier AI companies' safety commitments serves as a market signal that influences capital allocation across AI infrastructure, AI tokens, and the DePIN and AI agent protocols that sit at the intersection of cryptocurrency and artificial intelligence. As investors monitor OpenAI's spending trajectory and operational priorities, the success of the fellowship program will be closely watched. Whether external researchers working without internal access can meaningfully shape model development will begin to be answered when the first cohort's research is published in early 2027.