Ex-OpenAI chief scientist Sutskever launches rival AI company that puts safety first

Ilya Sutskever is launching Safe Superintelligence ©Canva

OpenAI co-founder Ilya Sutskever, who left the ChatGPT maker last month, has announced a new artificial intelligence (AI) company, which he’s calling Safe Superintelligence or SSI.

“I am starting a new company,” Sutskever wrote on X on Wednesday. “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”

He described SSI as a start-up that “approaches safety and capabilities in tandem,” so that the company advances its AI system while still prioritising safety.

Unlike AI companies such as OpenAI or Google, which face external pressures, SSI will have a “singular focus,” which means “no distraction by management overhead or product cycles”.

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” he said, adding that they can “scale in peace”.

Sutskever led the push to oust OpenAI’s co-founder and CEO Sam Altman in November, an ouster that lasted five days before Altman was reinstated. The move, carried out along with other board members, was driven by concerns about Altman’s handling of AI safety.

Sutskever led OpenAI’s superalignment team with Jan Leike, who also left in May to join rival AI firm Anthropic. The team was focused on guiding and controlling AI systems but has been dissolved since both men left OpenAI.

“I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” Sutskever said when he left OpenAI.

Sutskever is starting the company with Daniel Gross, who previously oversaw Apple’s AI and search efforts, and former OpenAI colleague Daniel Levy.

The company has offices in Palo Alto, California, and Tel Aviv, Israel.

© Euronews