  • OpenAI’s directors have been anything but open. What the hell happened?

    aum

    • 5 comments
    • 441 views
    • 4 minutes



In the top company in the world’s most explosive industry, the boss was fired and rehired – and no one has said why


    The OpenAI farce has moved at such speed in the past week that it is easy to forget that nobody has yet said in clear terms why Sam Altman – the returning chief executive and all-round genius, according to his vocal fanclub – was fired in the first place. Since we are constantly told, not least by Altman himself, that the worst outcome from the adoption of artificial general intelligence could be “lights out for all of us”, somebody needs to find a voice here.


    If the old board judged, for example, that Altman was unfit for the job because he was taking OpenAI down a reckless path, lights-wise, there would plainly be an obligation to speak up. Or, if the fear is unfounded, the architects of the failed boardroom coup could do everybody a favour and say so. Saying nothing useful, especially when your previous stance has been that transparency and safety go hand in hand, is indefensible.


    The original non-explanation from OpenAI was that Altman had to go because he had not been “consistently candid” with other directors. Not fully candid about what? A benign (sort of) interpretation is that the row was about the amount of time Altman was devoting to other business interests, including a reported computer chip venture. If that is correct, outsiders might indeed be relaxed: it is normal for other board members to worry about whether the boss is sufficiently focused on the day job.


Yet the whole purpose of OpenAI’s weird governance setup was to ensure safe development of the technology. For all its faults, the structure was intended to put the board of the controlling not-for-profit entity in charge. Safety came first; the interests of the profit-seeking subsidiary were secondary. Here’s Altman’s own description, from February this year: “We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety.”

    The not-for-profit board, then, could close the whole show if it thought that was the responsible course. In principle, sacking the chief executive would merely count as a minor exercise of such absolute authority.


    The chances of such arrangements working in practice were laughably slim, of course, especially when there was a whiff of an $86bn valuation in the air. You can’t take a few billion dollars from Microsoft, in exchange for a 49% stake in the profit-seeking operation, and expect it not to seek to protect its investment in a crisis. And if most of the staff – some of the world’s most in-demand workers – rise in rebellion and threaten to hop off to Microsoft en masse, you’ve lost.


    Yet the precise reason for sacking Altman still matters. There were only four members of the board apart from him. One was the chief scientist, Ilya Sutskever, who subsequently performed a U-turn that he didn’t explain. Another is Adam D’Angelo, chief executive of the question-and-answer site Quora, who, bizarrely, intends to transition seamlessly from the board that sacked Altman to the one that hires him back. Really?


    That leaves the two departed women: Tasha McCauley, a tech entrepreneur, and Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology. What do they think? Virtually the only comment from either has been Toner’s whimsical post on X after the rehiring of Altman: “And now, we all get some sleep.”


    Do we, though? AI could pose a risk to humanity on the scale of a nuclear war, Rishi Sunak warned the other week, echoing the general assessment. If the leading firm can’t even explain the explosion in its own boardroom, why are outsiders meant to be chilled? In the latest twist, Reuters reported on Thursday that researchers at OpenAI were so concerned about the dangers posed by the latest AI model that they wrote to the board. Those directors have some explaining to do – urgently.


    Source


    User Feedback

    Recommended Comments

    HOW DO YOU EXPLAIN PURE FEAR?????  The Terminator Cat is almost out of the bag!  Next year Trump may be the next President again.  After him, there might be one law, one government body, ZERO Opposition.


    Write WHAT YOU will/want, the clock is ticking and its speed is increasing!!!  Positive side, no more crazy religious people who like to kill other people.  AI will not know Religion!!!

Edited by Nuclear Fallout


People killed other people long before "religious people" appeared. Religion is just one of myriad motivations for killing.


    How about three poisons -- greed, hatred, and delusion -- at the core?


Edited by aum


Trying to stay apolitical and areligious... who says an AI system won't 'get' religion with everything it absorbs, in a manner which influences its 'logic'? If it did, it could be the best, or worst, thing that could happen. Regardless...


    Personally I have thought for some time that AI is the most dangerous threat to humanity since the atomic bomb. Not because AI itself might start nuking us, but because of the potential of humans using AI to control the narrative. We all know about propaganda farms and the steering of social media by political parties etc. that has been going on for many years (in fact since the beginning of media). AI provides (is already providing???) the opportunity to do it on a vastly more expansive and perhaps persuasive scale to the masses that inhabit social media. Too many people on this planet don't engage in critical thinking and those who would control us for their own objectives will use any tool at their disposal.



    Apolitical cannot exist without political; and areligious without religious.  😉


    When both sides of the coin are out (of the mind) the mind is truly free.  🙂


    With warm regards,





