This Disinformation Is Just for You

Generative AI won't just flood the internet with more lies—it may also create convincing disinformation that’s targeted at groups or even individuals.

It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.

When Russia tried to influence the 2016 US presidential election via the now disbanded Internet Research Agency, the operation was run by humans who often had little cultural fluency, or even fluency in English, and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to fine-tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end and even target individuals with personalized disinformation based on data they’ve collected. Generative AI will also make it much easier to produce disinformation and will thus increase the amount of disinformation that’s freely flowing on the internet, experts say.

“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”

Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”

Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign.

Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.

“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, the technical research manager at the Stanford Internet Observatory.

Hany Farid, a professor of computer science at the University of California, Berkeley, says this kind of customized disinformation is going to be “everywhere.” Though bad actors will probably target people by groups when waging a large-scale disinformation campaign, they could also use generative AI to target individuals.

“You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming,” Farid says.

Purveyors of disinformation will try all sorts of tactics until they find what works best, Farid says, and much of what’s happening with these disinformation campaigns likely won’t be fully understood until after they’ve been in operation for some time. Plus, they only need to be somewhat effective to achieve their aims.

“If I want to launch a disinformation campaign, I can fail 99 percent of the time. You fail all the time, but it doesn’t matter,” Farid says. “Every once in a while, the QAnon gets through. Most of your campaigns can fail, but the ones that don’t can wreak havoc.”

Farid says we saw during the 2016 election cycle how the recommendation algorithms on platforms like Facebook radicalized people and helped spread disinformation and conspiracy theories. In the lead-up to the 2024 US election, Facebook’s algorithm—itself a form of AI—will likely be recommending some AI-generated posts instead of only pushing content created entirely by human actors. We’ve reached the point where AI will be used to create disinformation that another AI then recommends to you.

“We’ve been pretty well tricked by very low-quality content. We are entering a period where we’re going to get higher-quality disinformation and propaganda,” Starbird says. “It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”

What can be done about this problem? Unfortunately, only so much. DiResta says people need to be made aware of these potential threats and be more careful about what content they engage with. She says you should check, for example, whether a source is a website or social media profile that was created very recently. Farid says AI companies also need to be pressured to implement safeguards so there’s less disinformation being created overall.

The Biden administration recently struck a deal with some of the largest AI companies—ChatGPT maker OpenAI, Google, Amazon, Microsoft, and Meta—that encourages them to create specific guardrails for their AI tools, including external testing of AI tools and watermarking of content created by AI. These AI companies have also created a group focused on developing safety standards for AI tools, and Congress is debating how to regulate AI.

Despite such efforts, AI is accelerating faster than it’s being reined in, and Silicon Valley often fails to keep promises to only release safe, tested products. And even if some companies behave responsibly, that doesn’t mean all of the players in this space will act accordingly.

“This is the classic story of the last 20 years: Unleash technology, invade everybody’s privacy, wreak havoc, become trillion-dollar-valuation companies, and then say, ‘Well, yeah, some bad stuff happened,’” Farid says. “We’re sort of repeating the same mistakes, but now it’s supercharged because we’re releasing this stuff on the back of mobile devices, social media, and a mess that already exists.”
