Ilya Sutskever, a co-founder of OpenAI, has launched a new company, Safe Superintelligence Inc. (SSI), just a month after officially leaving OpenAI. Sutskever, who was previously OpenAI’s chief scientist, started SSI with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.
The tweet announcing the creation of SSI stated, “SSI is our mission, our name, and our entire product roadmap because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”
The announcement continued, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
At OpenAI, Sutskever played a crucial role in the company's efforts to improve AI safety in anticipation of 'superintelligent' AI systems. He worked on this alongside Jan Leike, co-leader of OpenAI’s Superalignment team. However, both Sutskever and Leike left OpenAI in May after a significant disagreement with leadership over its approach to AI safety. Leike now leads a team at Anthropic, a rival AI firm.
In a 2023 blog post co-authored with Leike, Sutskever predicted that AI with intelligence surpassing humans could emerge within a decade and warned that such AI might not be benevolent, emphasizing the need for research on controlling and restricting it.