    A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation.

    • A man used artificial intelligence (AI) to create a chatbot that mimicked his late fiancée.

    • The groundbreaking AI technology was designed by the Elon Musk-backed research group OpenAI.

    • OpenAI has long warned that the technology could be used for mass misinformation campaigns.

    After Joshua Barbeau's fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.

    In a story for the San Francisco Chronicle, Barbeau detailed how Project December, software that uses artificial intelligence to create hyper-realistic chatbots, recreated the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.
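
    Project December's internals aren't public, but the seeding step Barbeau describes maps onto the prompt-priming pattern GPT-3's completions API supported at the time: pack the background information and sample messages into the prompt, then let the model continue the conversation. The sketch below is a rough illustration under that assumption; the persona text, engine name, and sampling parameters are invented for the example, not taken from Project December.

        # Rough sketch of persona priming with the GPT-3-era OpenAI library
        # (openai-python < 1.0). Illustrative only: the persona, engine name,
        # and sampling parameters are assumptions, not Project December's code.
        import openai

        openai.api_key = "sk-..."  # placeholder key

        # Background information plus sample messages seed the persona.
        PERSONA = (
            "The following is a conversation with A., who is warm, curious, "
            "and loves old movies.\n\n"
            "A.: Hey you. How was your day?\n"
        )

        def reply(history, user_message):
            prompt = PERSONA + history + "Me: " + user_message + "\nA.:"
            resp = openai.Completion.create(
                engine="davinci",   # largest publicly named GPT-3 engine then
                prompt=prompt,
                max_tokens=80,
                temperature=0.9,    # higher values read as less robotic
                stop=["\nMe:"],     # cut off before the model writes our turn
            )
            return resp.choices[0].text.strip()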


    It may sound like a miracle (or a Black Mirror episode), but the AI's creators warn that the same technology could be used to fuel mass misinformation campaigns.

    Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. By consuming massive datasets of human-created text (Reddit threads were particularly helpful), GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.
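
    To make "consuming text to imitate writing" concrete: models like GPT-3 are trained to predict the next word given the words before it, just at enormous scale. The toy bigram model below is a classroom-style miniature of that predict-the-next-word idea, not anything OpenAI actually ships.

        # Toy illustration of next-word prediction, the objective behind
        # GPT-3's training. GPT-3 uses a huge neural network over billions
        # of documents; this bigram counter is only a conceptual miniature.
        import random
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat and the cat ate".split()

        # Count which word tends to follow each word in the training text.
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def generate(word, length=6):
            out = [word]
            for _ in range(length):
                nexts = following[out[-1]]
                if not nexts:
                    break  # no observed continuation
                words, counts = zip(*nexts.items())
                out.append(random.choices(words, weights=counts)[0])
            return " ".join(out)

        print(generate("the"))  # e.g. "the cat sat on the mat and"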


    It's some of the most sophisticated, and most dangerous, language-based AI to date.

    When OpenAI released GPT-2, the predecessor to GPT-3, the group warned that it could be used in "malicious ways." The organization anticipated that bad actors could use the technology to "generate misleading news articles," "impersonate others online," or automate the production of "abusive or faked content on social media."

    GPT-2 could be used to "unlock new as-yet-unanticipated capabilities for these actors," the group wrote.

    OpenAI staggered the release of GPT-2, and still restricts access to the more powerful GPT-3, in order to "give people time" to grasp the "societal implications" of such technology.

    Misinformation is already rampant on social media, even with GPT-3 not widely available. A new study found that YouTube's algorithm still pushes misinformation, and the nonprofit Center for Countering Digital Hate recently identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the "Disinformation Dozen," they have millions of followers.

    Oren Etzioni, CEO of the nonprofit research group the Allen Institute for AI, previously told Insider that as AI continues to develop, it will only become harder to tell what's real.

    "The question 'Is this text or image or video or email authentic?' is going to become increasingly difficult to answer just based on the content alone," he said.

     

    Source

