Ex-OpenAI chief scientist Ilya Sutskever launches new AI startup

Ilya Sutskever, the influential former chief scientist of OpenAI, has unveiled his highly anticipated new venture, Safe Superintelligence Inc. (SSI), a company dedicated to developing safe and responsible AI systems.

The announcement comes after months of speculation following Sutskever’s departure from OpenAI, where he reportedly clashed with leadership including CEO Sam Altman over safety concerns.

SSI, as Sutskever outlined in the announcement, is a company with a singular focus: creating safe and powerful artificial intelligence. “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” he wrote. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

The statement suggests SSI intends to prioritize safety while still actively pushing the boundaries of AI development.

“There is immense potential and the right intentions in SSI’s focused approach,” said Subrat Parida, an AI expert and former CEO of Racetrack AI. “Different nations need to define boundaries and establish compliance through global policies. Currently, unethical AI practices are being used for illegal purposes, making ‘safety’ seem like a mere buzzword. I hope SSI can set meaningful standards.”

Safety over commercial success

SSI aims to differentiate itself from established AI giants such as OpenAI, Microsoft, and Apple by avoiding the “pressure of management overhead and product cycles.”

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” Sutskever said in the announcement. “This way, we can scale in peace.”

This independence, coupled with a business model designed to prioritize long-term safety, suggests SSI could take a more measured approach compared to some of the breakneck developments witnessed in the AI field.

“SSI’s dedicated focus on safety has the potential to be a transformative force, pushing established AI players to prioritize responsible development alongside achieving ground-breaking results,” said Prabhu Ram, head of the Industry Intelligence Group at CyberMedia Research. “This could lead to a future where advancements in AI are not only impressive but also achieved ethically and with well-defined guardrails in place.”

Sutskever is not alone in this mission. He is joined by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked at OpenAI, SSI noted.

The company currently has two offices, in Palo Alto and Tel Aviv, where it said it has “deep roots and the ability to recruit top technical talent.”

This development follows Sutskever’s departure from OpenAI in May, months after he led the push to oust CEO Sam Altman. His exit hinted at new endeavors, which have now come to fruition with the establishment of SSI. Sutskever’s departure was soon followed by the resignations of other OpenAI researchers, including Jan Leike and Gretchen Krueger, who cited safety concerns.

Both researchers announced their exits from OpenAI on the social media platform X.

Leike, who said “safety culture and processes have taken a backseat” at the ChatGPT maker, joined Anthropic last month, stating that his new focus would be on “scalable oversight, weak-to-strong generalization, and automated alignment research.”

The newly formed SSI is positioned as the world’s first “straight-shot” superintelligence lab, according to the announcement. The company plans to recruit top technical talent to tackle what Sutskever calls “the most important technical problem of our time.”

“Now is the time. Join us,” he urged in the announcement.

With the launch of SSI, the race for safe and powerful artificial intelligence enters a new phase. Sutskever’s experience and the team he has assembled position SSI as a major player in this critical field. Whether the company can achieve its ambitious goal remains to be seen, but its focus on safety marks a significant step forward in the responsible development of artificial general intelligence.

“We are still in the early innings of artificial intelligence. We have a long way to go in terms of responsible adoption, establishing safety norms, and building adequate guardrails. In this context, Ilya Sutskever’s Safe Superintelligence (SSI) has the potential to be a transformative force in the evolving AI landscape,” CyberMedia Research’s Ram said.
