Did Bing Really Say That? Microsoft Warns of Doctored AI Chats Spreading Online

    aum


    One screenshot circulating on social media claims to show the AI-powered Bing trying to place the user on an FBI watchlist. However, Microsoft says the image is a fake.


    As experts worry about AI-powered chatbots generating factual errors and propaganda, in a bit of irony, misinformation is starting to smear the reputation of Microsoft’s AI-powered Bing.


    Social media, particularly Twitter and Reddit, has seen a surge of screenshots showing the outlandish responses the ChatGPT-powered Bing can occasionally generate for users. But according to Microsoft, at least some of these screenshots appear to be fake.


    Frank Shaw, the company’s chief communications officer, took to Twitter on Wednesday to point out the problem. “If you are wondering if some of these ‘Bing Chat’ conversations flying around on here (Twitter) are real, let's just say we are seeing edited/photoshopped screenshots which probably never happened,” he wrote.


Shaw then cited a Reddit post, which claimed to show the AI-powered Bing seeking to place the user on an “FBI watch list” after misinterpreting a query. To do so, Bing allegedly began searching for “child pornography” during the chat session.


While it’s true Bing can produce inaccuracies and some emotionally bizarre responses during long chat sessions, Shaw says some of the screenshots circulating online go beyond what’s plausible for the AI-powered chatbot, which also strictly prohibits searches for child pornography.


    “We have certainly seen some that are fake, which was what drove my tweet,” Shaw told PCMag. “We’re trying to make sure there is some awareness of the potential early. You can run the queries cited in the screen shots and pretty quickly get a sense of if they are likely.”



    Attempting to ask the new Bing the same query resulted in a non-response. (Bing.)


The warning from Microsoft highlights how misinformation can easily infect any topic these days, thanks to the viral nature of social media. The doctored screenshots also threaten to sow more distrust around the new Bing at a time when critics, including Tesla CEO Elon Musk, have been blasting the AI chatbot over its alleged biases and potential to spread propaganda.


In response to the faked chats, Shaw is encouraging users to first try to replicate a similar response from Bing before sharing screenshots on social media that claim to show the AI chatbot’s offbeat behavior.


That said, it can be hard to test which Bing interactions are real and which are fake, given the volume of screenshots circulating online. The Reddit forum devoted to Bing is currently flooded with screenshots, many of them showing the chatbot’s odd or buggy behavior. But to prevent Bing from generating bizarre responses, Microsoft has temporarily restricted the number of chats users can have with the program.


    Source

