OpenAI Prioritizes ‘Shiny Products’ Over AI Safety, Ex-Researcher Says


A researcher who just resigned from ChatGPT developer OpenAI is accusing the company of not devoting enough resources to ensure that artificial intelligence can be safely controlled. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” ex-OpenAI researcher Jan Leike claimed in a tweet on Friday.

A year ago, OpenAI appointed Leike and his colleague, renowned AI researcher Ilya Sutskever, to co-lead a team focused on reining in future superintelligent AI systems to prevent long-term harm. The resulting “superalignment” team was supposed to have access to 20% of OpenAI’s computing resources to research and prepare for such threats. But earlier this week, both Leike and Sutskever abruptly resigned from the company. Although Sutskever said he believes the company is on track to develop a “safe and beneficial” artificial general intelligence, Leike took to Twitter/X on Friday to express some serious doubts.

“Over the past few months my team has been sailing against the wind,” Leike alleged in a long tweet thread. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”


He also revealed more about why he quit. “I joined because I thought OpenAI would be the best place in the world to do this research,” Leike said in a separate tweet. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

In another post, Leike noted that “building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” This was posted days after OpenAI debuted GPT-4o, its latest large language model.


Leike’s tweets are bound to raise serious concerns about OpenAI, which is trying to develop AI systems that can match and eventually exceed human capability. The company didn’t immediately respond to a request for comment, but OpenAI told Bloomberg that the superalignment team Leike and Sutskever were leading has been effectively disbanded, and that the company plans to integrate its remaining members across OpenAI’s research efforts. Wired reports that five researchers who focused on safety and policy at OpenAI were fired or have resigned in recent months. That said, the company has other groups focused on shorter-term AI safety threats, whereas the superalignment team spent its efforts on far-off, theoretical dangers.

