The Fear and Tension That Led to Sam Altman’s Ouster at OpenAI
    The departure of the high-profile boss of the San Francisco company drew attention to a philosophical rift among the people building new A.I. systems.

    Over the last year, Sam Altman led OpenAI to the adult table of the technology industry. Thanks to its hugely popular ChatGPT chatbot, the San Francisco start-up was at the center of an artificial intelligence boom, and Mr. Altman, OpenAI’s chief executive, had become one of the most recognizable people in tech.

    But that success raised tensions inside the company. Ilya Sutskever, a respected A.I. researcher who co-founded OpenAI with Mr. Altman and nine other people, was increasingly worried that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board of directors, also objected to what he saw as his diminished role inside the company, according to two of the people.

That conflict between fast growth and A.I. safety came into focus on Friday afternoon, when Mr. Altman was pushed out of his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders said the split was as significant as Steve Jobs’s ouster from Apple in 1985.

    The ouster of Mr. Altman, 38, drew attention to a longtime rift in the A.I. community between people who believe A.I. is the biggest business opportunity in a generation and others who worry that moving too fast could be dangerous. And the ouster showed how a philosophical movement devoted to the fear of A.I. had become an unavoidable part of tech culture.

    Since ChatGPT was released almost a year ago, artificial intelligence has captured the public’s imagination, with hopes that it could be used for important work like drug research or to help teach children. But some A.I. scientists and political leaders worry about its risks, such as jobs getting automated out of existence or autonomous warfare that grows beyond human control.

Fears that A.I. researchers were building something dangerous have been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build the technology.

OpenAI’s board has not offered a specific reason for pushing out Mr. Altman, other than to say in a blog post that it did not believe he was communicating honestly with it. OpenAI employees were told on Saturday morning that his removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practice,” according to a message viewed by The New York Times.

    Greg Brockman, another co-founder and the company’s president, quit in protest on Friday night. So did OpenAI’s director of research. By Saturday morning, the company was in chaos, according to a half dozen current and former employees, and its roughly 700 employees were struggling to understand why the board made its move.

    “I’m sure you all are feeling confusion, sadness, and perhaps some fear,” Brad Lightcap, OpenAI’s chief operating officer, said in a memo to OpenAI employees. “We are fully focused on handling this, pushing toward resolution and clarity, and getting back to work.”

    Mr. Altman was asked to join a board meeting via video at noon in San Francisco on Friday. There, Mr. Sutskever, 37, read from a script that closely resembled the blog post the company published minutes later, according to a person familiar with the matter. The post said that Mr. Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

But in the hours that followed, OpenAI employees and others focused not only on what Mr. Altman may have done, but also on the way the San Francisco start-up is structured and the extreme views on the dangers of A.I. that have been embedded in the company’s work since it was created in 2015.

    Mr. Sutskever and Mr. Altman could not be reached for comment on Saturday.

In recent weeks, Jakub Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. Previously positioned below Mr. Sutskever, he was elevated to a role alongside him, according to two people familiar with the matter.

    Mr. Pachocki quit the company late on Friday, the people said, soon after Mr. Brockman. Earlier in the day, OpenAI said Mr. Brockman had been removed as chairman of the board and would report to the new interim chief executive, Mira Murati. Other allies of Mr. Altman — including two senior researchers, Szymon Sidor and Aleksander Madry — have also left the company.

    Mr. Brockman said in a post on X, formerly Twitter, that even though he was the chairman of the board, he was not part of the board meeting where Mr. Altman was ousted. That left Mr. Sutskever and three other board members: Adam D’Angelo, chief executive of the question-and-answer site Quora; Tasha McCauley, an adjunct senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

    They could not be reached for comment on Saturday.

Ms. McCauley and Ms. Toner have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot do that, but this community believes the danger will arise as the technology grows increasingly powerful.

    In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to form a new A.I. company called Anthropic.

Mr. Sutskever was increasingly aligned with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and immigrated to Canada as a teenager. As a graduate student at the University of Toronto, he helped create a breakthrough in an A.I. technology called neural networks.

    In 2015, Mr. Sutskever left a job at Google and helped found OpenAI alongside Mr. Altman, Mr. Brockman and Tesla’s chief executive, Elon Musk. They built the lab as a nonprofit, saying that unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build what is called artificial general intelligence, or A.G.I., a machine that can do anything the brain can do.

Mr. Altman transformed OpenAI into a for-profit company in 2019 and negotiated a $1 billion investment from Microsoft. Such enormous sums of money are essential to building technologies like GPT-4, which was released earlier this year. Since its initial investment, Microsoft has put another $12 billion into the company.

    The company was still governed by the nonprofit board. Investors like Microsoft do receive profits from OpenAI, but their profits are capped. Any money over the cap is funneled back into the nonprofit.

As he saw the power of GPT-4, Mr. Sutskever helped create a new Superalignment team inside the company that would explore ways of ensuring that future versions of the technology would not do harm.

    Mr. Altman was open to those concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, Mr. Altman flew to the Middle East for a meeting with investors, according to two people familiar with the matter. He sought as much as $1 billion in funding from SoftBank, the Japanese technology investor led by Masayoshi Son, for a potential OpenAI venture that would build a hardware device for running A.I. technologies like ChatGPT.

    OpenAI is also in talks for “tender offer” funding that would allow employees to cash out shares in the company. That deal would value OpenAI at more than $80 billion, nearly triple its worth about six months ago.

    But the company’s success appears to have only heightened concerns that something could go wrong with A.I.

    “It doesn’t seem at all implausible that we will have computers — data centers — that are much smarter than people,” Mr. Sutskever said on a podcast on Nov. 2. “What would such A.I.s do? I don’t know.”
