Former OpenAI board member tells all about Altman’s ousting

A former board member of OpenAI has shed new light on last year’s firing of company CEO Sam Altman.

In an episode of The TED AI Show podcast titled "What really went down at OpenAI and the future of regulation," Helen Toner spoke with host Bilawal Sidhu about Altman's departure and subsequent return, as well as the need for sound and thorough AI regulation, an issue of paramount importance for CIOs, who not only need to deal with AI governance in the enterprise, but must also be able to trust the technologies they are using.

In introducing Toner, the director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology (CSET) and an expert in AI policy and global AI strategy, Sidhu described the OpenAI saga as being “all about AI board governance and incentives being misaligned among some really smart people. It also shows us why trusting tech companies to govern themselves may not always go beautifully, which is why we need external rules and regulations. It is a balance.”

As for what really went down, Toner described the OpenAI board, which she joined in 2021, as "not a normal board." It was, she said, set up explicitly to ensure that the company's public-good mission came first, ahead of profits and investor interests.

“But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, and, in some cases, outright lying to the board. At this point, everyone always says, ‘Like what? Give me some examples,’” she said.

“I cannot share all the examples,” she continued, “but to give a sense of the kind of thing that I am talking about, it is things like, when ChatGPT came out in November 2022, the board was not informed in advance. We learned about it on Twitter. Sam did not inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.”

In addition, she said, “On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

The final example she said she could share, “because it’s been very widely reported, relates to this paper that I wrote, which, I think, has been way overplayed in the press.”

Sidhu told his audience that Toner co-wrote a research paper last fall intended for policymakers: “What you need to know is that Sam Altman was not happy about it. It seemed like Helen’s paper was critical of OpenAI and more positive about one of their competitors, Anthropic. It was also published right when the Federal Trade Commission was investigating OpenAI about the data used to build its generative AI products.”

Toner replied that the problem was that, “after the paper came out, Sam started lying to other board members in order to try and push me off the board. So, it was another example that just, like, really damaged our ability to trust him, and it actually only happened in late October last year, when we were already talking pretty seriously about whether we needed to fire him.”

Toner was one of four board members to vote for Altman’s dismissal in November 2023, before being ousted herself upon his return a week later.

The OpenAI saga, said Sidhu, “shows that trying to do good and regulating yourself is not enough.” He then asked Toner why it is necessary to have regulations.

Toner said regulations are imperative because AI can be used in so many scenarios, whether in an IT department or a government agency.

“If people are using it to decide who gets a loan, to decide who gets parole, to decide who gets to buy a house, you need that technology to work well. If that technology is going to be discriminatory, which AI often is, it turns out, you need to make sure that people have recourse, and they can go back and say, ‘Hey, why was this decision made?’” she said.

As for the possibility of AI being used in the military, she said, “That is a whole other kettle of fish. I do not know if we would say ‘regulation’ for that, but we certainly need to have guidance, rules, and processes in place.”

Another concern is “superalignment,” or ensuring that superintelligent AI systems remain aligned with human values and goals. OpenAI had a whole team dedicated to that, co-led by former Chief Scientist Ilya Sutskever, who left the company two weeks ago, and Jan Leike, who left OpenAI to join Anthropic in a similar role.

Reflecting on the risks that could arise from the development of superintelligent AI without adequate safeguards, Toner said, “Looking forward and thinking about more advanced AI systems, there is a pretty wide range of potential harms that we could well see if AI keeps getting increasingly sophisticated. A script kiddie in their parents’ basement having the hacking capabilities of a crack NSA cell is a problem.”

Sidhu concluded the podcast by reading a statement from OpenAI board chairman Bret Taylor, to whom he had sent a transcript of the episode. Taylor wrote, “We are disappointed that Ms. Toner continues to revisit these issues. An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November. The review concluded that the prior board’s decision was not based on concerns of product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners. Additionally, over 95% of employees, including senior leadership, asked for Sam’s reinstatement as CEO and the resignation of the prior board. Our focus remains on moving forward and pursuing OpenAI’s mission to ensure AGI benefits all of humanity.”

© Foundry