Here Is Why Sam Altman Was Suddenly Fired From OpenAI

Helen Toner, a former OpenAI board member, said on “The TED AI Show” podcast that Sam Altman was suddenly fired from OpenAI because of repeated instances of withholding information, misrepresentation, and, in some cases, outright lying to the board.

Toner, who took part in the November dismissal of CEO Sam Altman, broke her silence on Tuesday, discussing in a newly released podcast the events inside the company that preceded Altman’s firing.

She offered an example: the board was not told in advance about OpenAI’s November 2022 release of ChatGPT and learned of it on Twitter. Toner added that Altman had kept his ownership of the OpenAI startup fund a secret from the board.

Altman was reinstated as CEO less than a week after his dismissal, but Toner’s remarks offer the first detailed explanation of why the board chose to remove him.

“The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary, was coming first — over profits, investor interests, and other things,” Toner said on “The TED AI Show” podcast.

“But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, and in some cases outright lying to the board,” she said.

According to Toner, Altman repeatedly provided the board with “inaccurate information about the small number of formal safety processes that the company did have in place.”

“For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or misinterpreted, or whatever,” Toner said. “But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us, and that’s just a completely unworkable place to be in as a board — especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money.”

Toner said the board had tried to address these problems. In October, a month before Altman’s ouster, the board spoke with two executives who shared experiences with Altman that they had previously been uneasy discussing, including screenshots and documentation of problematic interactions and falsehoods.

“The two of them suddenly started telling us … how they couldn’t trust him, about the toxic atmosphere he was creating,” Toner said. “They used the phrase ‘psychological abuse,’ telling us they didn’t think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change.”

The phrase artificial general intelligence, or AGI, is used to describe a broad category of artificial intelligence that can perform a variety of cognitive activities better than humans.

An OpenAI representative was not immediately available for comment.

Earlier this month, a year after announcing the group, OpenAI revealed it was disbanding its team dedicated to the long-term risks of AI. The news came days after team leaders Jan Leike and OpenAI co-founder Ilya Sutskever announced their departures from the Microsoft-backed company. Leike said on Friday that OpenAI’s “safety culture and processes have taken a backseat to shiny products,” and he has since revealed he is joining AI rival Anthropic.

The high-profile departures and Toner’s remarks follow the leadership turmoil at the company over the past year.

When the OpenAI board dismissed Altman in November, it said it had carried out “a deliberative review process” and that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

“The board no longer has confidence in his ability to continue leading OpenAI,” it said.

According to reports from The Wall Street Journal and other outlets, Sutskever focused his attention on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with releasing new technology.

Following Altman’s dismissal, nearly every OpenAI employee signed an open letter threatening to resign, and investors, including Microsoft, voiced their disapproval. Within a week, Altman was reinstated, and board members Toner and Tasha McCauley, who had voted to remove him, were ousted. Sutskever gave up his board seat and remained at the company until announcing his departure on May 14. Adam D’Angelo, who also voted to remove Altman, is still on the board.

In March, OpenAI announced the completion of an investigation by law firm WilmerHale into the circumstances surrounding Altman’s dismissal, as well as Altman’s appointment to the board.

OpenAI summarized the WilmerHale report but did not publish it.

“The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg,” OpenAI board chair Bret Taylor said at the time, referring to president and co-founder Greg Brockman. The review also “concluded the board acted in good faith … [and] did not anticipate some of the instability that led afterward,” Taylor added.

As GreatGameIndia reported last year, there was widespread speculation about the reason for the firing. According to Reuters sources, Sam Altman was fired over the secret OpenAI Project Q-Star, an AI model designed to “surpass humans in the most economically valuable tasks.”
