Facebook is bad at moderating in English. In Arabic, it’s a disaster


    The platform must work with communities on the ground to design policies on moderation and be fully transparent about them.

     

For many, the Facebook Papers come as no surprise. As a Palestinian digital rights advocate, I find that the recent revelations describe and validate the archetypal experience of Palestinians and millions of others generating daily content outside the U.S.


For years, activists and civil society organizations have warned of Facebook’s neglect of non-English-speaking regions and of its deeply discriminatory content moderation structure, which has served to silence globally marginalized voices, not empower them. Yet Facebook, at every ebb and flow, has chosen profit over people.


The thousands of pages of leaked documents now provide incontestable evidence, finally laying to rest one of the biggest claims repeatedly made by Facebook and its leadership since the heyday of the Arab Spring. Safety and freedom of expression are not afforded to all users equally; they are dictated by the company’s market interests and bottom line.


    As reported by the Wall Street Journal, Facebook has built and maintained a system that favors the most powerful — including politicians, celebrities, and athletes — and exempts them from some or all of its rules under the so-called “cross-check” or “XCheck” program. While VIPs enjoy a preferential status, and with it a lack of accountability, many ordinary users are often erroneously censored and harshly suppressed with little to no explanation of what they did wrong. In a true Orwellian sense, Facebook users are all equal, but some users are more equal than others.

     

In Arabic — the third most common language on the platform — Facebook’s double standards in content moderation produce the worst of all worlds: political speech is muzzled through over-enforcement of policies on terrorism and incitement to violence, while hate speech and disinformation targeting at-risk users, such as women and the LGBTQ+ community, run rampant on the platform.


    Arab activists and journalists, many of whom use Facebook to document human rights abuses and war crimes, are routinely censored and booted off the platform — most commonly under the pretext of terrorism.


    This is especially pronounced in times of political crisis and violence. Let’s not forget Facebook’s mass censorship of Palestinian voices during the height of Israeli violence and brutality in the months of May and June 2021. Most notably, Facebook deleted content reporting on Israeli forces violently storming into Al-Aqsa Mosque, the third holiest site in Islam, throwing stun grenades and tear gas at worshippers, because company staff mistook “Al-Aqsa” for a terrorist organization.


Such arbitrary mistakes are disturbingly common. Across the region, Facebook’s algorithms incorrectly deleted Arabic content 77% of the time. In one instance, Facebook’s Oversight Board overturned the erroneous removal of a post from a verified Al Jazeera page about the violence in Israel and Palestine, which an Egyptian user had re-shared. Not only was the content wrongfully removed for allegedly violating the platform’s Dangerous Individuals and Organisations (DIO) Community Standard, but the user was also disproportionately punished with a read-only account restriction for three days, a block on his ability to livestream content, and a 30-day prohibition on using advertising products on the platform.


Digital rights advocates have demanded transparency on these designations. The Oversight Board also recommended that Facebook publish the list, but Facebook refused, citing safety concerns. The Intercept recently revealed the full list, and again, to no one’s surprise, “the DIO policy and blacklist … place far looser prohibitions on commentary about predominately white anti-government militias than on groups and individuals listed as terrorists, who are predominately Middle Eastern, South Asian, and Muslim.”


Yet when it comes to Arabic-language disinformation or hate speech targeting historically oppressed groups, women, and gender minorities, Facebook’s overzealous content moderation turns into inaction, again demonstrating that Facebook directs its attention — and resources — to contexts that matter to the company financially or politically.


For the LGBTQ+ community in the region, these criteria for action don’t hit the mark. Its members are doxxed, outed, and targeted with smear campaigns on Facebook and Instagram. Yet Facebook is slow to act, leading in some cases to real-world harm. An investigation by Reuters showed, for instance, that despite Facebook’s ban on content promoting gay conversion therapy, much of the Arabic-language conversion content thrives unmoderated on the platform, where practitioners freely promote their services and post to millions of followers through verified accounts.


Similarly, Facebook knew that human-trafficking networks were using the platform as a modern slavery market to buy and sell domestic workers in the Gulf, yet it did little beyond shutting down a few pages. According to the Wall Street Journal, Facebook only took broader action after Apple threatened to remove Facebook’s products from the App Store, following a BBC investigation into these online slave markets.

     

As one Egyptian LGBTQ+ activist puts it, “This is very dangerous content to us — but to Facebook, it doesn’t seem to be a priority.” Indeed, Facebook spent more than 3.2 million hours searching for, labeling, or removing “false or misleading” content in 2020, but only 13% of those hours went to content outside the U.S. And in 2020, Facebook employed only 766 content moderators to manually review content posted by its 220 million users in the Arabic-speaking world.

     

Relying on automation makes matters worse. While engineers have cast doubt on AI’s ability to catch and remove hate speech in English, its performance in other languages is utterly disastrous. According to an internal memo from 2020, Facebook does not have enough content or data “to train and maintain the Arabic classifier currently.” In Afghanistan, where hate speech is reportedly one of the top “abuse categories,” Facebook took action against just 0.23% of the estimated hate speech posts in the country.


Can any of this be fixed? There are no silver-bullet solutions to content moderation at scale. But for a start, Facebook should acknowledge the problem. Creating a new “Meta” world does not absolve the company of its responsibilities if its content moderation system continues to wreak havoc on the rest of the world.

     

Facebook must dedicate sufficient financial and human resources to the 90% of its users who live outside the U.S. A first step would be to hire staff who understand and experience the world beyond Menlo Park, speak the local languages of the content they moderate, and grasp local and regional nuances.


The platform needs to work with communities on the ground to co-design moderation policies and, most importantly, for once, be fully transparent about them. Facebook’s profit margins cannot come at the cost of our rights and our lives.

     

    Source

