Hundreds of OpenAI employees are threatening to walk out and demanding the resignation of the company's board, a move that follows a tumultuous weekend that began with the surprise termination of CEO Sam Altman and ended with Microsoft hiring him.
They would follow OpenAI co-founder Sam Altman, who is set to lead a new AI unit at Microsoft after his shock ouster from OpenAI, the startup whose ChatGPT chatbot set off the rapid growth of AI technology.
In the letter, some senior OpenAI employees threatened to leave the company unless the board was changed.
“Your actions have demonstrated your lack of ability to control OpenAI,” read the letter, which was first published by Wired.
The list of signatories included Ilya Sutskever, the company's chief scientist and a member of the four-person board that voted to oust Altman.
It also included top executive Mira Murati, who was named interim CEO to replace Altman when he was fired on Friday, but was herself replaced over the weekend.
“Microsoft has assured us that there are positions for all OpenAI employees in this new subsidiary if we decide to join,” the letter said.
Reports say that 500 of OpenAI's 770 employees signed the letter.
OpenAI has appointed Emmett Shear, former CEO of the Amazon-owned streaming platform Twitch, as its new chief executive, despite pressure from Microsoft and other major investors to reinstate Altman.
The startup's board fired Altman on Friday, with US media citing concerns that he underestimated the dangers of his technology and was steering the company away from its mission, a claim his successor denied.
Microsoft CEO Satya Nadella wrote on X that Altman “will join Microsoft to lead a new advanced artificial intelligence research group,” along with OpenAI co-founder Greg Brockman and other colleagues.
Altman rose to fame last year with the launch of ChatGPT, which sparked a race to advance AI research and development, as well as billions in investment in the sector.
His dismissal prompted several other high-profile departures from the company, as well as calls from investors for his return.
“We're going to build something new and it's going to be incredible. The mission continues,” Brockman said, naming former research director Jakub Pachocki, head of AI risk assessment Aleksander Madry and longtime researcher Szymon Sidor among those joining him.
But OpenAI defended its decision in a memo to employees on Sunday night, saying “Sam's conduct and lack of transparency … undermines the board's ability to effectively oversee the company,” according to The New York Times.
Shear confirmed his appointment as interim CEO of OpenAI in a post on X on Monday, and denied reports that Altman was fired over concerns about the safety of AI technology.
“Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to be the Acting CEO of @OpenAI. After consulting with my family and thinking about it for a few hours, I accepted,” he wrote.
“Before I took the job, I looked into the basis for the changes. The board didn't remove Sam because of any specific safety disagreements, their reasoning was completely different from that.”
“It is clear that the process and communications surrounding Sam's removal were very poor, which seriously damaged our trust,” Shear added.
Global tech titan Microsoft has invested more than $10 billion in OpenAI and has integrated the AI pioneer's technology into its own products.
Microsoft CEO Nadella added in his post that he “looks forward to meeting and working with Emmett Shear and the new leadership team at OAI.”
“We remain committed to our partnership with OpenAI and believe in our product roadmap,” he said.
OpenAI is in fierce competition with rivals including Google and Meta, as well as startups such as Anthropic and Stability AI, all racing to build their own AI models.
Generative AI platforms like ChatGPT are trained on vast amounts of data, enabling them to answer questions, even complex ones, in human-like language.
They are also used for image generation and manipulation.
But the technology has prompted warnings about the dangers of its misuse, from blackmailing people with deepfake images to image manipulation and malicious misinformation.