
OpenAI training its next major AI model, forms new safety committee

posted on May 29, 2024 by l33tdawg
Credit: Arstechnica

On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a reaction to a terrible two weeks in the press for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

Tags: Industry News