Google’s AI Overviews misunderstand why people use Google


    Karlston


    Answers that are factually "wrong" are only part of the problem.

    Last month, we looked into some of the most incorrect, dangerous, and downright weird answers generated by Google's new AI Overviews feature. Since then, Google has offered a partial apology/explanation for generating those kinds of results and has reportedly scaled back the feature's rollout for at least some types of queries.

    But the more I've thought about that rollout, the more I've begun to question the wisdom of Google's AI-powered search results at all. Even when the system doesn't give obviously wrong results, condensing search results into a neat, compact, AI-generated summary seems like a fundamental misunderstanding of how people use Google in the first place.

    Reliability and relevance

    When people type a question into the Google search bar, they only sometimes want the kind of basic reference information that can be found on a Wikipedia page or corporate website (or even a Google information snippet). Often, they're looking for subjective information where there is no one "right" answer: "What are the best Mexican restaurants in Santa Fe?" or "What should I do with my kids on a rainy day?" or "How can I prevent cheese from sliding off my pizza?"


    The value of Google has always been in pointing you to the places it thinks are likely to have good answers to those questions. But it's still up to you, as a user, to figure out which of those sources is the most reliable and relevant to what you need at that moment.


    [Image gallery: examples of AI Overview answers — Kyle Orland / Google]
    • "This wasn't funny when the guys at Pep Boys said it, either."
    • "Weird Al recommends 'running with scissors' as well!"
    • "This list of steps actually comes from a forum thread response about doing something completely different."
    • "An island that's part of the mainland?"
    • "If everything's cheaper now, why does everything seem so expensive?"
    • "Pretty sure this Truman was never president..."

    For reliability, any savvy Internet user draws on countless context clues when judging a random Internet search result. Do you recognize the outlet or the author? Is the information from someone with apparent expertise or professional experience, or from a random forum poster? Is the site well-designed? Has it been around for a while? Does it cite other sources you trust?


    But Google also doesn't know ahead of time which specific result will fit the kind of information you're looking for. When it comes to restaurants in Santa Fe, for instance, are you in the mood for an authoritative list from a respected newspaper critic or for more off-the-wall suggestions from random locals? Or maybe you scroll down a bit and stumble on a loosely related story about the history of Mexican culinary influences in the city.


    One of the unseen strengths of Google's search algorithm is that the user gets to decide which results are the best for them. As long as there's something reliable and relevant in those first few pages of results, it doesn't matter if the other links are "wrong" for that particular search or user.

    What does “The Web” say?

    After years or decades of using Google, regular web searchers perform this kind of sifting of results without really thinking about it. A savvy web user would, for example, easily discount the opinion of Reddit user fucksmith, who 11 years ago wrote a Reddit post suggesting you add glue to pizza sauce, which infamously became part of a Google AI Overview answer last month.


    [Image: googleai_2.png — The bit about using glue on pizza can be traced back to an 11-year-old troll post on Reddit. Kyle Orland / Google]


    If that kind of obviously troll post appears as part of your Google search results—even near the top of the list of links—well, that's just the nature of the World Wide Web. When you're looking through a disintermediated information ecosystem where anyone can write literally anything, you sometimes have to sift through the trolls and the obvious SEO cruft to get to the gems of useful information that you need.


    When Google's AI Overview synthesizes a new summary of the web's top results, on the other hand, all of this personal reliability and relevance context is lost. The Reddit troll gets mixed in with the serious cooking expert, and both get lumped together into the "One Best Answer Presented by Google's Magic Artificial Intelligence (tm)."


    Ideally, Google's storied PageRank algorithm and large language model safeguards would ensure that this summary is a somewhat valid and useful digest of the collective thoughts of The Web writ large. But without the human sifting element that's an automatic part of a normal Google search, the AI Overview is often going to end up confused, flat, or outright wrong.


    Instead of "hallucinating" information, Google is essentially trusting in its tried-and-true search algorithm to surface the best information to feed to its LLM—if it's in a top result, it must be true, right? And despite the "intelligence" in the name, Google's AI doesn't have the kind of human understanding that's often needed to sort out useful results from irrelevant ones.

    A matter of trust

    Reputationally, there's a subtle but important shift from Google saying, "Here's a bunch of potential sources on the web that might have an answer for your search," to saying, "Here is Google's AI-generated answer for your search." If Google points you to a source recommending glue in your pizza sauce, that's mostly on you for trusting that source (though you can certainly criticize Google's algorithm for ranking that source highly). If that same information is presented on the Google homepage by the company's new state-of-the-art artificial intelligence, on the other hand, it is essentially getting a seal of approval from what has become the web's primary way to search for collective knowledge.


    [Image: redditfakeai.png — While this AI Overview screenshot was a fake, Google's problem is that the fake was not immediately obvious to everyone looking at it. Reddit]


    That's a major problem for Google, even if outright wrong or dangerous results only come in response to "generally very uncommon queries, and aren’t representative of most people’s experiences," as a spokesperson told me last month. Take a recent viral post showing a screenshot of Google's AI Overview telling a depressed person to jump off the Golden Gate Bridge. Both follow-up social media posts and a Google spokesperson later confirmed that the screenshot was fake and the result was never generated by Google's AI Overview. But the truth came out only after the fake post had already been featured in The New York Times, which had to issue a correction.


    While the Times should have done its due diligence on checking that screenshot, it's easy to see why the author thought such an AI Overview was plausible from Google. It's very easy to imagine a troll on Reddit (or some other web forum) suggesting self-harm for a depressed person. And now, it's also easy to imagine Google's AI Overview summarizing that post as a valid answer because it happens to appear near the top of Google's search results. And it's also easy to imagine Google pushing that answer out in its authoritative, Google-backed artificial voice (even though Google says it has LLM safeguards for hateful or violent content).


    When your AI is just summarizing the top search results from around the web, it's only ever going to be as smart or as dumb as the search engine itself. Without the human factor that helps make sense of Google's map of the web, a Google-powered "AI Overview" is always going to fail in some remarkable ways.


    Source


    Hope you enjoyed this news post.

    Thank you for appreciating my time and effort posting news every single day for many years.

    2023: Over 5,800 news posts | 2024 (till end of May): Nearly 2,400 news posts

