Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

Not long after Elon Musk announced plans to acquire Twitter last March, he mused about open-sourcing “the algorithm” that determines how tweets are surfaced in user feeds so that it could be inspected for bias.

    His fans—as well as those who believe the social media platform harbors a left-wing bias—were delighted.


But today, as part of an aggressive plan to trim costs that involves firing thousands of Twitter employees, Musk’s management team cut a team of artificial intelligence researchers working to make Twitter’s algorithms more transparent and fair.


    Rumman Chowdhury, director of the ML Ethics, Transparency, and Accountability (META—no, not that one) team at Twitter, tweeted that she had been let go as part of mass layoffs implemented by new management—although it hardly seemed that she was relishing the idea of working under Musk.


Chowdhury told WIRED earlier this week that the group’s work was put on hold as a result of Musk’s impending acquisition. “We were told, in no uncertain terms, not to rock the boat,” she said. Chowdhury also said that her team had been doing important new research on political bias that might have helped Twitter and other social networks prevent particular viewpoints from being unfairly penalized.


Joan Deitchman, a senior manager at Twitter’s META unit, confirmed that the entire team had been fired. Kristian Lum, formerly a machine learning researcher on the team, said the “entire META team minus one” had been let go. Nobody from the team, or Twitter, could be reached for comment this morning.


    As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.


    Twitter’s META unit was more progressive than most in publishing details of problems with the company’s AI systems, and in allowing outside researchers to probe its algorithms for new issues.


    Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter took the unusual decision to let its META unit publish details of the bias it uncovered. The group also launched one of the first ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing how right-leaning news sources were, in fact, promoted more than left-leaning ones.


    Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter. 

    “The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.


    Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”


    Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.


    Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”


As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. There are many different algorithms that affect the way information is surfaced, and it’s challenging to understand them without the real-time data they are fed: tweets, views, and likes.


    The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.


Source: WIRED

