Former OpenAI board member explains why CEO Sam Altman was fired

Date: 2024-09-22

Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown University's CSET, speaking at Vox Media's 2023 Code Conference, held on September 27, 2023, in Dana Point, California.

Former OpenAI board member Helen Toner, who helped oust CEO Sam Altman in November, broke her silence this week when she spoke on a podcast about events inside the company leading up to Altman's firing. One example she shared: when OpenAI released ChatGPT in November 2022, the board was not informed in advance and found out about it on Twitter. Toner also said Altman did not tell the board that he owned the OpenAI Startup Fund.

Altman was reinstated as CEO less than a week after he was fired, but Toner's comments offer the first insight into the board's decision.

"The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's public good mission was primary, was coming first — over profits, investor interests, and other things," Toner said on "The TED AI Show" podcast released on Tuesday. "But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board," she said.

Toner said Altman gave the board "inaccurate information about the small number of formal safety processes that the company did have in place" on multiple occasions.

"For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn't a big deal, or misinterpreted, or whatever," Toner said. "But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us, and that's just a completely unworkable place to be in as a board — especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money."

Toner explained that the board had worked to address these issues.
She said that in October, a month before the ousting, the board had conversations with two executives who relayed experiences with Altman they weren't comfortable sharing before, including screenshots and documentation of problematic interactions and mistruths.

"The two of them suddenly started telling us... how they couldn't trust him, about the toxic atmosphere he was creating," Toner said. "They used the phrase 'psychological abuse,' telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change."

Artificial general intelligence, or AGI, is a broad term that refers to a type of artificial intelligence that outperforms human abilities on various cognitive tasks.

An OpenAI spokesperson was not immediately available to comment.

Earlier this month, OpenAI disbanded its team focused on the long-term risks of AI, a year after the company announced the group. The news came days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike, who has since announced he is joining AI competitor Anthropic, wrote on Friday that OpenAI's "safety culture and processes have taken a backseat to shiny products."

Toner's comments and the high-profile departures follow last year's leadership crisis. In November, OpenAI's board ousted Altman, saying it had conducted "a deliberative review process" and that Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." "The board no longer has confidence in his ability to continue leading OpenAI," it said.

The Wall Street Journal and other media outlets reported that while Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, others, including Altman, were instead more eager to push ahead with delivering new technology.
Altman's removal prompted resignations and threats of resignations, including an open letter signed by virtually all of OpenAI's employees, and uproar from investors, including Microsoft. Within a week, Altman was back, and board members Toner and Tasha McCauley, who had voted to oust Altman, were out. Sutskever relinquished his seat on the board and remained on staff until he announced his departure on May 14. Adam D'Angelo, who had also voted to oust Altman, remains on the board.

In March, OpenAI announced its new board, which includes Altman, and the conclusion of an internal investigation by law firm WilmerHale into the events leading up to Altman's ouster. OpenAI did not publish the WilmerHale report but summarized its findings.

"The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg," OpenAI board chair Bret Taylor said at the time, referring to president and co-founder Greg Brockman. The review also "concluded the board acted in good faith... [and] did not anticipate some of the instability that led afterwards," Taylor added.
