OpenAI sets up new safety body in wake of staff departures

OpenAI is setting up a new governance body to oversee the safety and security of its AI models, as it embarks on the development of a successor to GPT-4.

The first task for the OpenAI Board’s new Safety and Security Committee will be to evaluate the processes and safeguards around how the company develops future models.

In the blog post announcing the committee’s creation, the company also said that it “has recently begun training its next frontier model” and that it anticipates the resulting systems will “bring us to the next level of capabilities on our path to AGI,” or artificial general intelligence: an AI system whose capabilities match or exceed those of humans across a wide range of tasks.

OpenAI’s creation of a new safety committee at board level follows a string of departures and bad publicity around the company’s attitude to safety, including the dissolution of its “superalignment” team focused on long-term risks. That team was co-led by former chief scientist Ilya Sutskever, who left the company two weeks ago; his departure was followed by that of Jan Leike, the team’s other co-lead.

The committee will be led by OpenAI CEO Sam Altman, OpenAI chairman Bret Taylor, and fellow board members Adam D’Angelo and Nicole Seligman. Other members will include OpenAI’s Head of Preparedness Aleksander Madry, Head of Safety Systems Lilian Weng, Head of Alignment Science John Schulman, Head of Security Matt Knight, and Sutskever’s successor as chief scientist, Jakub Pachocki.

Its first task will be to evaluate how the company handles risk in developing its AI models. Within 90 days, it will share its recommendations with the full board of directors. The company said it may later disclose any adopted recommendations “in a manner that is consistent with safety and security.”

With the committee, OpenAI signals that it recognizes the continued concerns the industry and the general public have about AI, and is taking steps internally to monitor itself even as it aims to stay ahead of competitors.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” it said in the blog post.

Pressure mounts on OpenAI

OpenAI’s unveiling of progress on its next version of GPT is a natural progression for the company as it aims to protect its market lead even as competition heats up. xAI, the company founded by Tesla CEO Elon Musk, recently announced a $6 billion funding round at a $24 billion valuation, as Musk aims to challenge the startup he once championed on AI and AGI. Meanwhile, Musk and OpenAI remain embroiled in a heated legal dispute.

OpenAI also faced controversy recently when it released a virtual assistant with a voice that some said sounded eerily similar to that of Hollywood actress Scarlett Johansson, even though she had declined the company’s repeated requests for permission to use her voice. Johansson famously voiced an AI system with whom a character played by Joaquin Phoenix falls in love in the 2013 film “Her.”

“As the usage of generative AI increases, associated risks and security concerns are emerging,” observed Pareekh Jain, CEO of EIIRTrend & Pareekh Consulting. “The Scarlett Johansson incident has heightened OpenAI’s awareness of these risks.”

Securing AI can bolster its adoption

AI security also remains a priority for AI stakeholders at large, with various initiatives being formed at both the government and corporate levels to try to set guidelines for the future development of the technology before it evolves beyond human control.

Just last week, 16 major creators and users of AI, including OpenAI, its top competitors Google, Amazon, Meta, and xAI, and frenemy Microsoft, signed up to the Frontier AI Safety Commitments, a new set of safety guidelines and development outcomes for the technology.

Demonstrating that AI is secure is essential to companies like OpenAI, whose business depends on the technology’s widespread adoption. That’s because one of the biggest challenges in enterprise and consumer perception of AGI relates to security, according to Jain. This perception “is often influenced by scenarios depicted in science fiction movies,” he said. “Therefore, it is essential to integrate security measures, risk management, and ethical considerations from the design stage, rather than as an afterthought.”

He’s not alone in that belief. Nicole Carignan, vice president of strategic cyber AI at cybersecurity firm Darktrace, said, “The risk AI poses is often in the way it is adopted,” adding that it’s important to encourage AI leaders to promote its responsible, safe, and secure use. “Broader commitments to AI safety will allow us to move even faster to realize the many opportunities and benefits of AI,” she said.

© Foundry