The New Digital Dark Age


    Karlston


    Online trust will reach an all-time low thanks to unchecked disinformation, AI-generated content, and social platforms pulling up their data drawbridges.

    For researchers, social media has always represented greater access to data, more democratic involvement in knowledge production, and greater transparency about social behavior. Getting a sense of what was happening, especially during political crises, major media events, or natural disasters, was as easy as looking around a platform like Twitter or Facebook. In 2024, however, that will no longer be possible.


    In 2024, we will face a grim digital dark age, as social media platforms transition away from the logic of Web 2.0 and toward one dictated by AI-generated content. Companies have rushed to incorporate large language models (LLMs) into online services, complete with hallucinations (inaccurate, unjustified responses) and mistakes, which have further fractured our trust in online information.


    Another aspect of this new digital dark age comes from not being able to see what others are doing. Twitter once pulsed with the publicly readable sentiment of its users. Social researchers loved Twitter data, relying on it because it provided a ready, reasonable approximation of how a significant slice of internet users behaved. However, Elon Musk has priced researchers out of Twitter data: the company recently announced that it was ending free access to the platform’s API. This has made it difficult, if not impossible, to obtain the data needed for research on topics such as public health, natural disaster response, political campaigning, and economic activity. It was a harsh reminder that the modern internet has never been free or democratic, but instead walled and controlled.
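
    To make concrete what losing free API access means in practice, here is a minimal sketch of the kind of query researchers once ran at no cost: a call to the X (Twitter) API v2 recent-search endpoint using Python and the requests library. The X_BEARER_TOKEN variable name and the example search query are illustrative assumptions, not details from the article, and running a query like this today requires a paid access tier.

```python
# Minimal sketch (illustrative, not from the article) of the kind of
# programmatic data collection researchers relied on: querying the
# X (Twitter) API v2 recent-search endpoint. Assumes a bearer token is
# available in the X_BEARER_TOKEN environment variable; free access to
# this endpoint has been withdrawn, so a paid tier is now required.
import os

import requests

BEARER_TOKEN = os.environ["X_BEARER_TOKEN"]  # credential name is an assumption


def search_recent_posts(query: str, max_results: int = 10) -> list[dict]:
    """Fetch recent public posts matching a search query."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": query, "max_results": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    # The endpoint returns {"data": [...]} when there are matches.
    return resp.json().get("data", [])


if __name__ == "__main__":
    # Example: gauge public chatter around a natural-disaster response.
    for post in search_recent_posts("flood relief lang:en -is:retweet"):
        print(post["id"], post["text"][:80])
```

    Collections like this, once free, are exactly what the article says is now priced beyond the reach of most research projects.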


    Closer cooperation with platform companies is not the answer. X, for instance, has filed a lawsuit against independent researchers who pointed out the rise in hate speech on the platform. It has also recently been revealed that researchers who used Facebook and Instagram data to study the platforms’ role in the 2020 US elections had been granted “independence by permission” by Meta. In other words, the company chooses which projects to share its data with and, while the research may be independent, Meta also controls what types of questions are asked and who asks them.


    With elections coming in the US, India, Mexico, Indonesia, the UK, and the EU in 2024, the stakes are high. Until now, online “observatories” have independently monitored social media platforms for evidence of manipulation, inauthentic behavior, and harmful content. However, changes in data access by social media platforms, as well as the explosion of generative AI misinformation, mean that the tools researchers and journalists developed for monitoring online activity in prior national elections won’t work. One of my own collaborations, AI4TRUST, is developing new tools for combating misinformation, but our endeavor is stalled because of these changes.


    We need to clean up our online platforms. The Center for Countering Digital Hate, a research, advocacy, and policy organization working to stop the spread of online hate and disinformation, has called for the adoption of its STAR Framework (Safety by Design, Transparency, Accountability, and Responsibility). This would ensure that digital products and services are safe before they are launched; increase transparency around algorithms, rule enforcement, and advertising; and hold companies both accountable to democratic and independent bodies and responsible for omissions and actions that lead to harm. The EU’s Digital Services Act is a regulatory step in the right direction, including provisions to ensure that independent researchers can monitor social network platforms, but those provisions will take years to become actionable. The UK’s Online Safety Bill, still making its way slowly through the policy process, could also help, but again, its provisions will take time to implement. Until then, the transition from social media to AI-mediated information means that, in 2024, a new digital dark age will likely begin.


    Source

