
OpenAI Co-Founder Sutskever’s ‘Safe Superintelligence’ Reaches $30 Billion AI Startup Valuation

The news comes just one week after Safe Superintelligence was reportedly going to hit a $20 billion valuation.

Ilya Sutskever (Twitter)

Safe Superintelligence (SSI), the secretive AI venture founded by OpenAI’s visionary former Chief Scientist Ilya Sutskever, is reportedly securing a funding round exceeding $1 billion, a deal that would catapult its valuation past a staggering $30 billion. According to Bloomberg, the elusive startup is positioning itself as a dominant force in the race to build advanced AI. With Sutskever at the helm, SSI’s astronomical valuation signals not just immense investor confidence, but the potential dawn of a new era in artificial intelligence.

Bloomberg reports that San Francisco-based venture capital powerhouse Greenoaks Capital Partners is spearheading the deal, with plans to inject $500 million into the venture.



Despite its high valuation, the company has yet to reveal any technology or products; so far it has announced only hiring, including for its Tel Aviv development center. Safe Superintelligence, according to Sutskever, aims to create AI models that exceed human intelligence but remain aligned with human interests, a mission that hints at philosophical disagreements with OpenAI CEO Sam Altman, especially concerning the risks and limitations of advanced AI development.

Ilya Sutskever, alongside co-founders Daniel Gross and Daniel Levy, describes SSI as a “straight-shot SSI lab” with an unwavering focus: creating a safe superintelligence. This ambitious goal necessitates a singular approach. Unlike companies juggling multiple projects and revenue streams, SSI prioritizes safety above all else; its investors, business model, and even daily operations are meticulously designed to propel the company toward this objective.

Their approach hinges on viewing safety and capabilities as intertwined technical hurdles. Traditional AI development often prioritizes raw power, a strategy that raises concerns about unforeseen consequences. SSI, however, proposes a balanced approach. They aim to advance AI capabilities as rapidly as possible, but with the crucial caveat that safety remains paramount. This two-pronged strategy, they believe, paves the way for peaceful and responsible scaling of AI.

The benefits of SSI’s singular focus are multifaceted. Streamlined management and an unwavering commitment to the long-term goal eliminate the distractions of product cycles and short-term pressures, while the business model insulates safety, security, and progress from the pull of immediate commercial gains. This allows the company to prioritize responsible advancement without compromising its core principles.

Ilya Sutskever’s credentials within the AI landscape are undeniable. Born in Russia and holding both Israeli and Canadian citizenship, Sutskever has made significant contributions to the field. His co-creation, with Alex Krizhevsky and Geoffrey Hinton, of the AlexNet convolutional neural network, the landmark 2012 deep learning achievement that won that year’s ImageNet competition, stands as a testament to his technical expertise.

His research journey continued with the co-founding of OpenAI in 2015, an organization established to develop safe artificial general intelligence (AGI), a hypothetical AI capable of surpassing human cognitive abilities across all domains. Sutskever’s departure from OpenAI in May 2024 was a significant development within the AI community; while the exact reasons remain undisclosed, speculation centers on a disagreement with OpenAI’s leadership over the pace and focus of safety measures.
