We all remember an event that marked the technology industry – not for a good reason – in November 2023: the artificial intelligence (AI) community was rocked by an unexpected controversy involving OpenAI, one of the world's most renowned AI research organizations, and its former CEO, Sam Altman. Although the exact details of the conflict have not yet been fully revealed, various reports and leaks point to a number of key factors:
It was suspected that Altman, in addition to being CEO of OpenAI, had investments in private AI companies that could compete with OpenAI itself. This raised questions about his impartiality and his ability to lead the organization ethically. Altman was also alleged to have used manipulation tactics to influence the decisions of the board of directors and to have exhibited aggressive behavior toward some employees.
But above all, there were disagreements between Altman and other OpenAI board members about the strategic direction of the organization. Some argued that Altman prioritized the development of high-risk AI technologies, while others advocated a more cautious approach focused on safety and ethics.
The response of two ex-OpenAI researchers
But not everything came to nothing: Ilya Sutskever, co-founder of OpenAI, launched a new company out of this situation, Safe Superintelligence (SSI), with the ambitious objective of developing safe superintelligent artificial intelligence. The birth of SSI marks a milestone in the AI community, rising from the ashes of the internal debate that led to Sutskever's departure from OpenAI.
Sutskever and other researchers such as Jan Leike had expressed concerns about the prioritization of "shiny products" over safety at OpenAI, which ultimately led to their departure.
SSI differentiates itself from other AI companies by its primary focus on safety. Its mission is to develop superintelligent AI systems that are beneficial and safe for humanity, avoiding possible catastrophic consequences.
The challenge of secure superintelligence: Creating a safe superintelligent AI is a complex task that involves overcoming several obstacles:
- Alignment problem: Ensuring that AI systems act in accordance with human values.
- Safety scalability: Developing safety methods that can adapt to increasingly complex AI systems.
- Balance between speed and safety: Advancing AI development without compromising safety.
Despite the doubts that some analysts have expressed about the concept of “safe superintelligence,” the launch of SSI is a positive sign of the commitment of some researchers to the responsible development of AI.
Attracting talent and financing
SSI seeks to attract researchers with a strong interest in the safe development of AI, and its success will largely depend on the funding it receives and the partners it chooses.
SSI's entry into the AI landscape intensifies the competition to develop secure and reliable AI systems. It remains to be seen which companies will lead this race and what impact their advances will have on the future of humanity.
SSI presents itself as a key player in the search for a safe and beneficial AI future for all.