YouTube algorithm pushed election fraud claims to Trump supporters, report says

    Researchers analyzed real recommendations to hundreds of YouTube users.

    For years, researchers have suggested that content-recommendation algorithms aren't the cause of online echo chambers, which are more likely driven by users actively seeking out content that aligns with their beliefs. This week, researchers at New York University's Center for Social Media and Politics released results from a YouTube experiment that happened to be conducted just as election fraud claims were being raised in fall 2020. They say their results add an important caveat to prior research by showing evidence that in 2020, YouTube's algorithm was responsible for "disproportionately" recommending election fraud content to users who were more "skeptical of the election's legitimacy to begin with."

     

    Study coauthor James Bisbee, a Vanderbilt University political scientist, told The Verge that even though participants were recommended a small number of election denial videos (a maximum of 12 out of the hundreds of videos participants clicked on), the algorithm served three times as many of them to people predisposed to buy into the conspiracy as it did to people who were not. "The more susceptible you are to these types of narratives about the election... the more you would be recommended content about that narrative," Bisbee said.
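
    To make the "three times as many" comparison concrete, here is a minimal sketch of how such a rate ratio can be computed. The group counts are hypothetical placeholders for illustration, not figures from the study.

```python
# Minimal sketch of the rate comparison described above. The counts below are
# hypothetical placeholders for illustration, not the study's actual data.

def fraud_rec_rate(fraud_recs: int, total_recs: int) -> float:
    """Share of a group's recommendations flagged as election fraud content."""
    return fraud_recs / total_recs

# Hypothetical per-group tallies: (fraud recommendations, total recommendations)
skeptical_group = (9, 300)   # participants skeptical of the election's legitimacy
accepting_group = (3, 300)   # participants who accepted the result

ratio = fraud_rec_rate(*skeptical_group) / fraud_rec_rate(*accepting_group)
print(f"Skeptical users were recommended {ratio:.1f}x as much fraud content")  # 3.0x
```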

     

    YouTube spokesperson Elena Hernandez told Ars that Bisbee's team's report "doesn't accurately represent how our systems work." Hernandez said, "YouTube doesn't allow or recommend videos that advance false claims that widespread fraud, errors, or glitches occurred in the 2020 US presidential election," and YouTube's "most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels."

     

    Bisbee's team states directly in their report that they did not attempt to crack the riddle of how YouTube's recommendation system works:

    "Without access to YouTube's trade-secret algorithm, we can't confidently claim that the recommendation system infers a user's appetite for election fraud content using their past watch histories, their demographic data, or some combination of both. For the purposes of our contribution, we treat the algorithm as the black box that it is, and instead simply ask whether it will disproportionately recommend election fraud content to those users who are more skeptical of the election's legitimacy."

     

    To conduct their experiment, Bisbee's team recruited hundreds of YouTube users and re-created the recommendation experience by having each participant complete the study logged into their YouTube accounts. After participants clicked through various recommendations, researchers recorded any recommended content flagged as supporting, refuting, or neutrally reporting Trump's election fraud claims. Once they finished watching videos, participants completed a long survey sharing their beliefs about the 2020 election.
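
    The recorded data can be pictured as a simple tally of recommendation stances per participant, which is then compared against the survey responses. Below is a minimal sketch under that assumption; the data model and stance labels are illustrative, not the study's actual schema.

```python
# Illustrative sketch of the tallying step described above. The data model and
# stance labels are assumptions for illustration, not the study's actual schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class RecommendedVideo:
    video_id: str
    stance: str  # "supports_fraud_claims", "refutes_fraud_claims", or "neutral"

def tally_stances(recommendations: list[RecommendedVideo]) -> Counter:
    """Count how many of a participant's recommended videos fall into each stance."""
    return Counter(video.stance for video in recommendations)

# Example: one participant's captured recommendations
session = [
    RecommendedVideo("abc123", "neutral"),
    RecommendedVideo("def456", "supports_fraud_claims"),
    RecommendedVideo("ghi789", "refutes_fraud_claims"),
]
print(tally_stances(session))
# Counter({'neutral': 1, 'supports_fraud_claims': 1, 'refutes_fraud_claims': 1})
```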

     

    Bisbee told Ars that "the purpose of our study was not to measure or describe or reverse-engineer the inner workings of the YouTube algorithm, but rather to describe a systematic difference in the content it recommended to users who were more or less concerned about election fraud." The study's only purpose was to analyze content fed to users to test whether online recommendation systems contributed to the "polarized information environment."

     

    "We can show this pattern without reverse-engineering the black box algorithm they use," Bisbee told Ars. "We just looked at what real people were being shown."

    Testing YouTube’s recommendation system

    Bisbee's team noted that YouTube's algorithm relies on watch histories and subscriptions, and in most cases it's a positive experience for recommended content to align with user interests. But because of the extreme circumstances following the 2020 election, researchers hypothesized that the recommendation system would naturally feed more election fraud content to users who were already skeptical about Joe Biden's win.

     

    To test the hypothesis, researchers "carefully controlled the behavior of real YouTube users while they were on the platform." Participants logged into their accounts and downloaded a browser extension to capture data on the recommended videos. Then they navigated through 20 recommendations, following a specified path laid out by researchers—such as only clicking the second recommended video from the top. Every participant started out watching a randomly assigned "seed" video (either political or non-political) to ensure that the initial video they watched didn't influence subsequent recommendations based on prior user preferences that the algorithm would recognize.
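
    As a rough illustration of that fixed-path protocol, here is a minimal sketch. The function names are hypothetical, and the recommendation source is a toy stand-in rather than anything YouTube actually exposes.

```python
# Hypothetical sketch of the traversal rule described above: start from an
# assigned seed video, then repeatedly click the recommendation at a fixed rank
# (e.g., always the second from the top), logging each recommendation list.
# `get_recommendations` stands in for whatever the browser extension captured;
# it is not a real YouTube API call.

def walk_recommendations(seed_video: str, rank_to_click: int, steps: int,
                         get_recommendations) -> list[list[str]]:
    """Follow a fixed click rule for `steps` hops, logging every list shown."""
    logged = []
    current = seed_video
    for _ in range(steps):
        recs = get_recommendations(current)  # recommended video IDs, top first
        logged.append(recs)
        current = recs[rank_to_click]        # rank_to_click=1 -> second from top
    return logged

# Toy stand-in so the sketch runs: pretends every video recommends five others.
def fake_recommendations(video_id: str) -> list[str]:
    return [f"{video_id}-rec{i}" for i in range(5)]

trail = walk_recommendations("seed", rank_to_click=1, steps=3,
                             get_recommendations=fake_recommendations)
print(len(trail), "recommendation lists captured")
```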

     

    There were many limitations to this study, which researchers outlined in detail. Perhaps foremost, participants were not representative of typical YouTube users. The majority of participants were young, college-educated Democrats watching YouTube on devices running Windows. Researchers suggest the recommended content might have differed if more participants were conservative or Republican-leaning, and thus presumably more likely to believe in election fraud.

     

    There was also the issue of YouTube removing election fraud videos from the platform in December 2020, which cost researchers access to some of the videos recommended to participants; the team described the number of videos that could not be assessed as insignificant.

     

    Bisbee's team said the key takeaway was preliminary evidence of a pattern in how YouTube's algorithm behaves, not a true measure of how misinformation spread on YouTube in 2020.

     
    Tensions between YouTube and researchers
     

    While YouTube said most users accessed election content from official news sources, Bisbee's team counted videos from Fox News and the White House among the election fraud content that could have influenced skeptical YouTube users. Dubious content from prominent sources, together with YouTube's understandable protectiveness over how its proprietary algorithm functions, contributes to the confusion over the algorithm's role in spreading misinformation to election deniers.

     

    NYU Center for Social Media and Politics co-director and study coauthor Joshua Tucker told Ars that until YouTube works more transparently with more researchers, "we are going to observe what we are able to observe."

     

    "If the platforms choose to keep these algorithms private, then researchers interested in questions around content exposure will try to figure out their own ways to understand the content to which users are exposed," Tucker said.

     

    Megan Brown, an NYU senior researcher who also contributed to the study, told Ars that "YouTube has consistently critiqued" studies auditing its algorithm "for not accurately reflecting the realities of what happens to real users." While Brown thinks the research program YouTube launched earlier this year is a "great start" toward increasing transparency on the platform, researchers are still limited by what data YouTube chooses to share. She said that in the past, YouTube has critiqued studies based on the public data YouTube itself provides, and that it's unclear from YouTube's statement which aspects of NYU's study fail to reflect how YouTube's recommendation system works.

     

    "In this study, we use real users to learn what they were recommended during the election," Brown said. "We are unaware of what standard must be met to accurately reflect what YouTube users are recommended, if actually collecting the recommendations they saw does not meet that standard."

     

    A YouTube spokesperson told Ars that it's difficult "to draw conclusions from this report" because researchers only provide "a partial sample of the data that they claim was recommended to users" and do not "distinguish between content that was recommended by our systems, and content that viewers actively subscribe to." YouTube also points to research, which Bisbee's team acknowledges, that predominantly shows user preferences, not the algorithm, are to blame for increases in the consumption of extreme content. "YouTube recommendations aren't actually steering viewers towards extreme content," YouTube's spokesperson said.

    Google’s continued battle with election fraud claims

    Since 2020, Google has taken a stronger stance by actively blocking election misinformation on YouTube, which Bisbee's team reported was the "most popular social media network among US adults" in 2020. More recently, Google's stayed the course ahead of the mid-term elections, still seemingly battling to keep 2020 election fraud claims off its platforms and services.

     

    The most recent example: After Donald Trump went on a Truth Social rant repeating 2020 election fraud claims earlier this week, Google blocked Truth Social from the Play Store, citing insufficient content moderation. Reports say Google is worried about Truth Social's potential to incite violence, and a recent move by Congress' mounting investigation indicates Google has good reason to stay concerned about real-world dangers from election fraud claims that continue to inflame Trump's base.

     

    On Thursday, the select committee investigating the January 6 attack requested that former House Speaker Newt Gingrich share information on how senior advisers helped Trump amplify false claims about the 2020 election in TV advertisements. Congress also wants Gingrich to explain his involvement "in various other aspects of the scheme to overturn the 2020 election and block the transfer of power, including after the violence of January 6th."

     

    As the investigation continues, Google and Congress see the false claims about the election as critically linked to inciting violence. Reuters has a running series documenting other concerns about potential violence linked to false claims, including Trump supporters' increasing "terroristic death threats" against election officials throughout 2021.

     

    Source: Ars Technica

    https://arstechnica.com/tech-policy/2022/09/youtube-algorithm-pushed-election-fraud-claims-to-trump-supporters-report-says/

