OpenAI forms safety committee as it starts training next AI model

OpenAI has formed a new safety committee as it begins training its next artificial intelligence (AI) model.

The committee will make recommendations to the company’s board on “critical safety and security decisions” for OpenAI.

CEO Sam Altman, Bret Taylor, Adam D’Angelo and Nicole Seligman will lead the committee, OpenAI said in a blog post.

The committee’s first task is to evaluate and further develop the company’s current safety practices over the next 90 days, after which it will share its recommendations with the board. OpenAI will then publicly share an update on the recommendations it adopts.

"We view safety as something we have to invest in and succeed at," OpenAI’s current safety procedures say.

The announcement comes a few weeks after OpenAI disbanded its Superalignment team, a research group tasked with mitigating long-term AI risks such as rogue behaviour, after the team’s co-leads left the firm. The team had existed for less than a year.

OpenAI published a “safety update” last week in the wake of the AI Seoul Summit. Among other things, it says the company will not release a new AI model if it crosses a “medium” risk threshold.

That assessment is based on internal “scorecards” that track how its models perform during training runs, but OpenAI does not share more specific information about how it evaluates them.

In its AI Seoul Summit statement, OpenAI said it is also working on additional protections to flag content harmful to children on its platforms, as well as a new tool to identify images generated by DALL-E 3, ChatGPT’s image generator.

On May 13, OpenAI announced GPT-4o (“omni”), its latest model, which can “reason across audio, vision and text in real time”.

The company says the new model is a step towards “more natural human-computer interaction” because it can respond to audio inputs in as little as 232 milliseconds, similar to a human’s response time in conversation.

OpenAI’s technical and policy experts Aleksander Madry and Lilian Weng, along with newly appointed chief scientist Jakub Pachocki, are also on the committee.

The safety committee will also consult former cybersecurity officials, including Rob Joyce, who advises OpenAI on security, and John Carlin.