Safe Superintelligence, the new AI startup founded by OpenAI's former chief scientist Ilya Sutskever, is in talks to raise funding at a valuation of at least $20 billion, Reuters reported, citing multiple sources familiar with the matter.
Despite the high valuation, the company has yet to reveal any technology or products. So far it has announced only hiring, including for its Tel Aviv development center. Safe Superintelligence, according to Sutskever, aims to create AI models that exceed human intelligence while remaining aligned with human interests, a mission that hints at philosophical disagreements with OpenAI CEO Sam Altman, especially over the risks and limits of advanced AI development.
Ilya Sutskever, alongside co-founders Daniel Gross and Daniel Levy, describes SSI as a "straight-shot SSI lab" with a single, unwavering focus: creating a safe superintelligence. That goal demands a singular approach. Unlike companies juggling multiple projects and revenue streams, SSI prioritizes safety above all else; its investors, business model, and daily operations are all designed to serve that one objective.
Their approach treats safety and capabilities as intertwined technical problems. Traditional AI development often prioritizes raw capability, a strategy that raises concerns about unforeseen consequences. SSI proposes a balanced alternative: advance capabilities as rapidly as possible, with the crucial caveat that safety remains paramount. This two-pronged strategy, the founders believe, paves the way for responsible scaling of AI.
The benefits of SSI's singular focus are multifaceted. Streamlined management and a commitment to the long-term goal eliminate the distractions of product cycles and short-term pressures, while the business model insulates safety, security, and progress from the temptations of immediate commercial gain. This lets the company prioritize responsible advancement without compromising its core principles.
Ilya Sutskever’s credentials within the AI landscape are undeniable. Born in Russia and holding both Israeli and Canadian citizenship, Sutskever has made significant contributions to the field. His co-invention of the AlexNet convolutional neural network, a landmark achievement in deep learning, stands as a testament to his technical expertise.
Sutskever went on to co-found OpenAI in 2015, an organization established to develop safe artificial general intelligence (AGI), a hypothetical AI capable of surpassing human cognitive abilities across all domains. His departure from OpenAI in May 2024 was a significant development in the AI community. While the exact reasons remain undisclosed, speculation centers on a disagreement with OpenAI's leadership over the pace and focus of safety measures.