Twitter’s Moderation System Is in Tatters


    Karlston

    Disinformation researchers have spent years asking Twitter to remove toxic and fake posts. After Elon Musk’s staff cuts, there’s hardly anyone to talk to.

    Even before Twitter cut some 4,400 contract workers on November 12, the platform was showing signs of strain. After Elon Musk bought the company and laid off 7,500 full-time employees, disinformation researchers and activists say, the team that took down toxic and fake content vanished. Now, after years of developing relationships with those teams, researchers say no one is responding to their reports of disinformation on the site, even as data suggests Twitter is becoming more toxic.

    The issue is particularly acute in Brazil, where a runoff presidential election between right-wing incumbent Jair Bolsonaro and Luiz Inácio Lula da Silva took place just days after Musk’s takeover. Observers and activists had warned for months that Bolsonaro’s supporters might not accept the results of the election should he lose, and could resort to violence. When Bolsonaro supporters began questioning the election results online, researchers found that Twitter had apparently fired all the people who should be monitoring the platform.

    “At this moment, we have nobody to reach out to,” says Nina Santos, a researcher at the Brazilian National Institute of Science & Technology in Digital Democracy. “All the people that we were talking with are no longer there.” Santos says that until Musk’s takeover, Twitter had been “quite responsive,” compared to Meta and Google, in taking down rule-breaking content that could undermine trust in the election or spread disinformation. The entirety of Twitter’s Brazil team was among the 7,500 people laid off earlier this month.

    Although Lula was declared the winner of the election, Santos says she still sees tweets questioning the result or calling for mobilization against the government. All of these, she says, are dangerous. Twitter’s current policy states that the company will “label or remove false or misleading information intended to undermine public confidence in an election or other civic process.” Christopher Bouzy, founder and CEO of Bot Sentinel, a project to fight disinformation and harassment on Twitter, was also monitoring the Brazilian elections, as well as the US midterms. Like Santos, he noticed that tweets claiming the Brazilian election was stolen remained up on Twitter.

    Disinformation also flooded Twitter during the US midterms, particularly around the race in Maricopa County, Arizona, the state’s largest county and a consistent target of right-wing election deniers. Bouzy, who was monitoring thousands of right-wing accounts, says he had “no idea who to contact” at the company to get tweets containing disinformation taken down. “Twitter is a shit show,” he says.

    On November 15, more than 70 civil society organizations across the globe wrote to Musk demanding he take action to stop hate speech from becoming more prevalent on Twitter. In the weeks leading up to the US midterms, the nonprofit advocacy group Free Press released a report highlighting how all social platforms were allowing election disinformation to persist. Nora Benavidez, the group’s senior counsel and director of digital justice and civil rights, says that although many platforms, including Twitter, are not always responsive to researchers and activist organizations, the mass layoffs have made getting a response even more difficult.

    “Me and other people who have tried to reach out have gotten dead ends,” Benavidez says. “And when we’ve reached out to those who are supposedly still at Twitter, we just don’t get a response.”

    Even when researchers can get through to Twitter, responses are slow—sometimes taking more than a day. Jesse Littlewood, vice president of campaigns at the nonprofit Common Cause, says he’s noticed that when his organization reports tweets that clearly violate Twitter’s policies, those posts are now less likely to get taken down.

    The volume of content that users and watchdogs may want to report to Twitter is likely to increase. Many of the staff and contractors laid off in recent weeks worked on teams like trust and safety, policy, and civic integrity, all of which worked to keep disinformation and hate speech off the platform.

    Melissa Ingle was a senior data scientist on Twitter’s civic integrity team until she was fired along with 4,400 other contractors on November 12. She wrote and monitored algorithms used to detect and remove political misinformation on Twitter—most recently, that meant the elections in the US and Brazil. Of the 30 people on her team, only 10 remain, and many of the human content moderators, who review tweets and flag those that violate Twitter’s policies, have also been laid off. “Machine learning needs constant input, constant care,” she says. “We have to constantly update what we are looking for because political discourse changes all the time.”
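
    Ingle’s point about “constant care” reflects a general property of machine-learning classifiers: as the language of political discourse shifts, a model trained on older examples misses new phrasing unless it is regularly refit on freshly labeled data. The sketch below illustrates that idea in the abstract only; the pipeline, example tweets, and labels are hypothetical and have no connection to Twitter’s actual systems.

    ```python
    # Illustrative sketch: a simple text classifier degrades as discourse
    # drifts, so it must be retrained on freshly labeled data.
    # All data and labels here are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Training snapshot: phrasing moderators labeled last month (1 = violating).
    old_tweets = ["the election was rigged", "ballots were shredded",
                  "polls close at 8 pm", "remember to vote early"]
    old_labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(old_tweets, old_labels)

    # New slang and coded phrasing the model has never seen will often slip
    # through; without humans labeling it, the filter grows more porous.
    new_tweets = ["they cooked the numbers again", "polls close at 8 pm"]
    print(model.predict(new_tweets))

    # Retraining step: fold newly moderated examples back into the training
    # set so the model tracks how the discourse has changed.
    model.fit(old_tweets + ["they cooked the numbers again"], old_labels + [1])
    ```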

    Though Ingle’s job did not involve interacting with outside activists or researchers, she says members of Twitter’s policy team did. At times, information from external groups helped inform the terms or content Ingle and her team would train algorithms to identify. She now worries that with so many staffers and contractors laid off, there won’t be enough people to ensure the software remains accurate.

    “With the algorithm not being updated anymore and the human moderators gone, there’s just not enough people to manage the ship,” Ingle says. “My concern is that these filters are going to get more and more porous, and more and more things are going to come through as the algorithms get less accurate over time. And there’s no human being to catch things going through the cracks.”

    Within a day of Musk taking ownership of Twitter, Ingle says, internal data showed that the number of abusive tweets reported by users increased 50 percent. That initial spike tapered off, she says, but reports of abusive content remained roughly 40 percent higher than the volume typical before the takeover.

    Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University, also expects to see Twitter’s defenses against banned content wither. “Twitter has always struggled with this, but a number of talented teams had made real progress on these problems in recent months. Those teams have now been wiped out.”

    Such concerns are echoed by a former content moderator who worked as a contractor for Twitter until 2020. The contractor, speaking anonymously to avoid repercussions from his current employer, says all the former colleagues doing similar work whom he kept in touch with have been fired. He expects the platform to become a far less pleasant place to be. “It’ll be horrible,” he says. “I have actively searched the worst parts of Twitter—the most racist, most horrible, most degenerate parts of the platform. That’s what’s going to be amplified.”