Google's new AI Overviews in Search are already generating major factual errors [Update]


    Karlston


Update - We received this statement from a Google spokesperson:

    The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.

Original story - Earlier this month, Google held its annual I/O developer conference. As expected, AI was at the top of the company's agenda during the event as it revealed new and upcoming generative AI features for a number of its products and services.

One of those features announced at I/O is AI Overviews for Search, which started rolling out to US Search users soon after the event. The idea is that when you type in a search query, you get an AI-generated answer to your question based on information from a number of sources on the web, with credit to those sources.

However, as CNBC reports, since AI Overviews started appearing in Google Search in the US, many people have posted examples of obvious factual errors the feature has generated.

One person, "@heavenrend" on X, showed that when they typed "cheese not sticking to pizza" into Google Search, the AI Overview summary suggested adding "about 1/8 cup of nontoxic glue to the sauce."

Another X user, "@napalmtrees," asked if it is normal to leave a dog in a hot car. The AI Overview summary from Google Search replied simply, "Yes, it’s always safe to leave a dog in a hot car." The apparent source of this flat-out wrong answer was the lyrics of a fictional Beatles song.

The criticism of AI Overviews echoes the reception of Google's first AI chatbot, Bard, which launched in March 2023 and was criticized for shipping without enough safety and ethical guardrails. In February 2024, Google turned off the generation of images of people in Bard's successor, Gemini, after the feature produced artwork that sometimes depicted people with darker skin tones in inappropriate ways. At the time, Google said it would improve the feature before restoring it to Gemini, but so far, that has yet to happen.

    Source

