Safe Superintelligence (SSI), the secretive AI startup and super unicorn launched by former OpenAI chief scientist Ilya Sutskever, has secured a massive $2 billion funding round, catapulting its valuation to an astonishing $32 billion. Remarkably, the company has yet to release any public-facing product or service, a sign of strong investor confidence in its vision for safe and scalable artificial intelligence.
The round was led by Greenoaks with a reported $500 million investment, and included participation from top-tier firms such as Andreessen Horowitz, Lightspeed Venture Partners, and DST Global. Tech giants Alphabet and Nvidia have also thrown their support behind SSI, while Google Cloud has emerged as a key infrastructure partner for the AI research lab.
This significant backing positions SSI as a major player in the race for next-generation AI innovation, with a core focus on developing superintelligent systems aligned with human safety.
Despite its high valuation, the company hasn’t revealed any technology or products yet. It has only announced hiring, including for its Tel Aviv development center. Safe Superintelligence, according to Sutskever, aims to create AI models that exceed human intelligence but remain aligned with human interests. This suggests potential philosophical disagreements with OpenAI CEO Sam Altman, especially concerning the risks and limitations of advanced AI development.
Ilya Sutskever and his co-founders, Daniel Gross and Daniel Levy, describe SSI as a "straight-shot SSI lab" with an unwavering focus: creating a safe superintelligence. This ambitious goal necessitates a singular approach. Unlike companies juggling multiple projects and revenue streams, SSI prioritizes safety above all else. Investors, business models, and even daily operations are meticulously designed to propel the company toward this objective.
Their approach hinges on viewing safety and capabilities as intertwined technical hurdles. Traditional AI development often prioritizes raw power, a strategy that raises concerns about unforeseen consequences. SSI, however, proposes a balanced approach: advancing AI capabilities as rapidly as possible, with the crucial caveat that safety remains paramount. This two-pronged strategy, the founders believe, paves the way for the peaceful and responsible scaling of AI.
The benefits of SSI's singular focus are multifaceted. Streamlined management and an unwavering commitment to the long-term goal eliminate the distractions of product cycles and short-term pressures. Additionally, the company's business model insulates safety, security, and progress from the temptations of immediate commercial gains, allowing it to prioritize responsible advancement without compromising its core principles.
Ilya Sutskever's credentials within the AI landscape are undeniable. Born in Russia and holding both Israeli and Canadian citizenship, Sutskever has made significant contributions to the field. His co-invention of AlexNet, the landmark convolutional neural network that helped launch the deep learning era, stands as a testament to his technical expertise.
