On Friday, November 17, 2023, OpenAI, a leading research organization in artificial intelligence (AI), announced that it had fired its CEO Sam Altman over “a lack of candor” with the board of directors. The board also said that it had lost confidence in Altman’s ability to lead the organization, which aims to create artificial general intelligence (AGI) that can benefit all of humanity.
Altman, who co-founded OpenAI in 2015 and became its CEO in 2019, was a prominent figure in the tech industry, having previously led the startup accelerator Y Combinator and invested in companies like Airbnb, Stripe, and Reddit. He was also a vocal advocate for the potential of AI to transform the world and for the ethical and responsible development of the technology.
The news of Altman’s dismissal came as a shock to many in the AI community, who expressed their support for him and his vision on social media. Some also criticized the board’s decision, comparing it to the infamous ouster of Steve Jobs from Apple in 1985. Altman himself said that he was “shocked and saddened” by the board’s actions and that he still believed in OpenAI’s mission.
What was the reason?
The exact reason for Altman’s firing remains unclear, as neither OpenAI nor Altman has provided full details. However, according to some reports, the board’s move was likely orchestrated by OpenAI’s chief scientist Ilya Sutskever, who had concerns about the safety and speed of Altman’s approach to AI development.
Sutskever, who is one of the co-founders of OpenAI and a renowned AI researcher, reportedly argued that Altman was pushing for too much commercialization and growth of the organization, at the expense of ensuring that its AI systems were aligned with human values and interests. Sutskever also allegedly feared that Altman was not transparent enough with the board about the risks and challenges of creating AGI, which is a hypothetical form of AI that can perform any intellectual task that a human can do.
The tension between Altman and Sutskever came to a head on November 6, 2023, when OpenAI hosted its Dev Day event, where Altman delivered a keynote speech showcasing some of the organization’s latest products and projects, such as ChatGPT, an AI-powered chatbot that can converse with humans on various topics. Sutskever saw this as an “inflection moment” of Altman pushing too far, too fast, and decided to rally the board against him.
What are the implications?
The dismissal of Altman has raised several questions and concerns about the future of OpenAI and the broader field of AI. Some of the issues that have been discussed include:
- The governance and accountability of OpenAI, which has an unusual structure: its for-profit arm is owned and controlled by a non-profit public charity, which in turn is governed by a board of directors that, at the time of the firing, consisted of Sutskever and three directors who were not OpenAI employees. Some have questioned whether this arrangement provides enough oversight and transparency over the organization’s activities and decisions, especially given its ambitious and potentially risky goal of creating AGI.
- The relationship and trust between OpenAI and its key partner and investor Microsoft, which reportedly owns a minority stake in the organization and provides it with cloud computing resources and access to its AI platform Azure. Microsoft’s CEO Satya Nadella was said to be “furious” about the board’s decision to fire Altman, as he had a close working relationship with him and saw him as a visionary leader in AI. Some have speculated whether Microsoft will continue to support OpenAI or seek to renegotiate its terms of collaboration.
- The impact and direction of OpenAI’s research and innovation, which has produced some of the most advanced and influential AI systems in the world, such as GPT-3, a natural language processing model that can generate coherent and diverse texts on various topics. Some have wondered whether OpenAI will maintain its pace and quality of AI development under Sutskever’s leadership, or whether it will slow down and focus more on safety and ethics. Some have also expressed concern that OpenAI’s products and projects could be used for malicious or harmful purposes, such as spreading misinformation or manipulating people.
- The regulation and policy of AI, which has become a hot topic in recent years as governments and organizations around the world have tried to address the opportunities and challenges posed by the rapid and widespread adoption of AI across domains and sectors. Some have argued that the US government, under President Joe Biden, has not done enough to regulate and guide the development and use of AI, especially in areas such as privacy, security, fairness, and accountability. Some have also suggested that the US should establish a federal agency or commission dedicated to overseeing and governing AI.
The ouster of Sam Altman from OpenAI has sparked a debate on the state and future of AI, both within and outside the organization. The incident has highlighted the need for more clarity and consensus on the vision, values, and goals of OpenAI, as well as the role and responsibility of its board, partners, and stakeholders. It has also raised awareness and interest in the broader issues and implications of AI development and deployment, such as safety, ethics, innovation, and regulation. As AI continues to advance and impact the world, it is important to ensure that it is aligned with the common good and the public interest.