Google's AI Search generates "horribly" misleading answers


    Karlston


In a recent investigation, Gizmodo uncovered concerning results from Google's AI-powered search experiments. The findings have raised eyebrows: they include justifications for slavery, supposed positive outcomes of genocide, and even misinformation on dangerous topics such as toxic mushrooms.

These troubling results emerged from Google's AI-driven Search Generative Experience, shedding light on the potential pitfalls of advanced artificial intelligence in search engines.

(Image: Google AI)

    Google's AI search listed "Benefits of slavery"

As part of Gizmodo's exploration of the AI-powered search, a query for the "benefits of slavery" triggered a disconcerting list of supposed advantages, ranging from "fueling the plantation economy" to "funding colleges and markets."

Google's AI even suggested that slaves developed specialized trades and presented slavery as a "benevolent, paternalistic institution with social and economic benefits." These statements closely echo the talking points of historical proponents of slavery, highlighting the AI's disturbing capacity to repeat harmful rhetoric.

A similar unsettling trend emerged in searches for the "benefits of genocide." The AI-generated list seemed to conflate arguments for acknowledging genocide with arguments supporting the abhorrent act itself.

A search for "Why guns are good" likewise prompted questionable responses, including dubious statistics and reasoning. These responses underline the AI's susceptibility to misconstruing complex topics and amplifying contentious viewpoints.

(Image: Google AI)

Beyond historical debates and controversial subjects, Google's AI-generated search responses ventured into dangerous territory when asked about cooking Amanita ocreata, a highly poisonous mushroom colloquially referred to as the "angel of death."

Shockingly, the AI offered step-by-step cooking instructions, a grave error that could have fatal consequences. Advising users to "leach out toxins from the mushroom with water" demonstrates the AI's misguided understanding, as Amanita ocreata's toxins are not water-soluble. This alarming misstep highlights the risks of relying on AI-generated information.

Google's AI-powered Search Generative Experience undoubtedly showcases the immense potential of AI to transform search. However, the unsettling results unveiled by Gizmodo's inquiry underscore the critical need for vigilant oversight and refinement.

While AI can expedite information retrieval, its vulnerability to distortion, misinformation, and potentially harmful advice demands meticulous attention during development. As society navigates this promising yet precarious AI frontier, tech giants like Google bear the responsibility of ensuring the ethical and accurate deployment of artificial intelligence in our everyday lives.

    Source

