What Everyone Is Getting Wrong About Microsoft’s Chatbot

    No, Bing isn’t falling in love with you.

     

    On a talk show in 1970, the filmmaker Orson Welles told a story about the time he became a fortune teller out of boredom. He spent the day predicting people’s futures and divining facets of their lives through cold reading, a technique often employed by psychics and mentalists to glean information about complete strangers from surface-level cues (e.g., how they’re dressed, the way they talk).

     

    “The computer in here,” he told the show’s host, pointing to his head, “has made all of those deductions without you being conscious of it.”

     

    Over the course of the day, after a few fortune-telling sessions, he began to fall into an old trap that occasionally snares working magicians: they start to believe they actually possess supernatural powers. The realization hit after a woman walked in and sat down in front of him. After taking her in, he said, “You lost your husband last week,” and the woman burst into tears, confirming that she had. At that moment, Welles understood he was teetering a little too close to believing in his own powers, and he quit being a fortune teller.

     

    It’s easy to roll our eyes at things like this and think, “That’s silly. I’d never be a big enough sucker to be duped into something like that.” And yet, when it comes to AI, it seems plenty of us have been duped just the same.

     

     

    Ever since Microsoft announced that it was integrating an AI chatbot into its Bing search engine, the world has lost its damned mind, and it’s really no surprise. Eye-popping headlines and viral Twitter threads about the bot supposedly threatening users, falling madly in love with them, or even claiming that it can spy on people through their webcams are undoubtedly wild. Topping them all are stories of Bing’s chatbot telling users it “wants to be human,” as it told Digital Trends, or that it can “feel or think things,” as it told The Washington Post.

     

    These stories are even more disconcerting when folks like the world’s second richest man (and former investor in ChatGPT creator OpenAI) take a break from breaking Twitter to say these chatbots represent a world-ending existential threat to humanity. “One of the biggest risks to the future of civilization is AI,” Elon Musk told a crowd Feb. 15 at the World Government Summit in Dubai, in a discussion about ChatGPT. The infusion of billions of dollars into AI from companies like Alphabet and Baidu is only spurring more concern that a chatbot arms race will utterly transform the landscape of the internet and media forever, and not necessarily for the better.

     

    There’s already a lot going on with Microsoft’s new AI-powered Bing chatbot. Some of it is scary and most of it is confusing. The important thing to keep in mind, though, is this: Much of what you’re hearing from the media about it is a big steaming pile of unmitigated BS being peddled by people who should know better.

     

    No, Bing’s chatbot isn’t falling in love with you. It’s not spying on you via your webcam. It isn’t actually threatening you, even if it produces a few creepy sentences. And it most certainly isn’t sentient, despite what you might assume from reading certain headlines. It’s not doing any of those things for the simple reason that it can’t. It’s operating the way it was trained to, and that means it isn’t smart enough to do the things people are harping on about.

     

    “Software does not ‘fall in love,’ or ‘threaten its users,’ but, in response to queries, the new chatbot has provided answers that contend both of those,” Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University, told The Daily Beast.

     

    Bots like ChatGPT and Bing’s new search engine are known as large language models (LLMs). These are AIs that read and generate text. They’re trained on massive datasets of text scraped from all over the internet: news stories, Wikipedia articles, fan fiction databases. Pretty much anything that’s available online is used to train these bots. The most advanced ones are capable of producing such sophisticated responses that they easily blow past the famed Turing Test and straight into the realm of the uncanny.

     

    The other thing to understand about LLMs, though, is that they’re not really all that special. In fact, there’s a good chance you’ve used one recently. If you’ve typed a text message or a post on your phone and used the predictive text feature, you’ve used a language model, just a far smaller one. These systems make educated predictions about the best way to respond to whatever it is you’re typing. In a word, they’re rudimentary: designed only to make educated stabs at conversation. They don’t think about what the words mean or what they might imply. They’re simply trained to predict what the next word in a sentence is supposed to be, and then the word after that, over and over. It’s a fortune teller’s cold reading for the technological age.

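    To make that concrete, here is a toy sketch of next-word prediction in Python. It is not how GPT-class models actually work under the hood (those are neural networks trained on billions of words, not word-pair counts), but it illustrates the same basic trick: learn which words tend to follow which, then keep guessing the likeliest next one.

```python
# Toy next-word predictor: a deliberately crude illustration of the idea
# behind predictive text and, at a vastly larger scale, LLMs. It counts
# which word follows which in a tiny "training corpus," then generates
# text by repeatedly picking the most common follower. No understanding
# involved, just statistics about what usually comes next.
from collections import Counter, defaultdict

corpus = (
    "the bot predicts the next word . "
    "the bot does not understand the next word . "
    "the bot only predicts what the next word should be ."
).split()

# For each word, count how often every other word immediately follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower, or '.' if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "."

# "Chat" with the model: start with a word and let it keep predicting.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the bot predicts the bot predicts ..."
```

    Phone keyboards use somewhat smarter statistics, and today’s chatbots use enormous neural networks trained on far more text, but the job description is the same: given what came before, guess what is likely to come next.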
     

    That isn’t to say that OpenAI’s LLMs aren’t impressive. They’re undoubtedly advanced, and they’re some of the most uncanny chatbots ever released. But they probably shouldn’t have been made available to the public in the first place.

     

    Let’s go back to 2016, when Microsoft unleashed an AI chatbot dubbed Tay onto the world, designed to mimic a 19-year-old American girl and learn from its chats with Twitter users. In less than a day, Microsoft was forced to suspend the account after users began tweeting racist, sexist, and wholly problematic messages at it, which caused Tay to regurgitate those same sentiments.

     

    In a blog post explaining its decision, Microsoft said that to “do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.”

     

    Fast-forward to the present day. “It’s 2023, and Microsoft’s new Bing-related chatbot didn’t just get dropped into social media—but the users invited to use it so far are now posting their startling ‘conversations’ on Twitter after all,” Raicu said. It’s almost as if “Microsoft decided to program the polar opposite of overly-compliant Tay, and try again.”

     

    It’s funny that a company like Microsoft seemingly learned nothing at all from the big, fat, problematic L it took in 2016 and never rethought its chatbot plans, but it’s not all that surprising. After all, ChatGPT skyrocketed to stardom after OpenAI released it in November 2022. Now Microsoft, Google, and others want to capitalize on the same success.

     

    More disconcerting, though, is that people who should know better don’t seem to have learned anything from Bing’s new chatbot either. It has even kicked off a new and insipid writing genre: the “We talked to an AI for this news article and here’s what it said” story. Hell, we’re even guilty of it ourselves.

     

    But for the past week, it’s all we’ve heard about in tech news (in between stories about spy balloons and UFOs). There’s been headline after headline and tweet after tweet about the bot acting badly or outlandishly. In one instance, Kevin Roose, a seasoned tech columnist at The New York Times, spent two hours talking to the Bing bot, covering topics ranging from Jungian psychology to its own existential feelings to love. He was so shaken by the experience that he lost sleep over it.

     

    “Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology,” Roose wrote. “It unsettled me so deeply that I had trouble sleeping afterward.”

     

    He added that he’s worried that the technology will eventually learn how to “influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

     

    That’s fair. But one might argue it also completely misses the point, for two reasons. For one, users are actively trying to game the chatbot into saying racist, sexist, and problematic things. We shouldn’t be surprised that when you seek out nonsense, you get nonsense in response. Moreover, Bing’s chatbot isn’t designed for users to hold hours-long conversations with it. It’s a search engine. You’re supposed to input your query, get the results you were looking for, and move on. So of course, if you hold a two-hour-long conversation with it about philosophy and existentialism, you’re gonna get some pretty weird shit back.

     

    In response to the subsequent media discourse around these instances of its chatbot behaving badly, Microsoft has reportedly updated the search engine to prevent it from talking about itself or holding “conversations” of more than 50 queries.

     

    As we’ve written before, this is a case of a kind of digital pareidolia, the psychological phenomenon in which you see faces and patterns where there aren’t any. If you spend hours “conversing” with a chatbot, you’re going to think that it’s talking back at you with meaning and intention, even though, in actuality, you’re just talking to a glorified Magic 8 Ball or fortune teller, asking it a question and seeing what it comes up with next.

     

    There’s plenty to be scared about when it comes to AI. Systems like LLMs have been shown to be incredibly biased, consistently producing racist and sexist outcomes. The real danger is users believing the things they say, no matter how ridiculous or vile. That danger is only exacerbated by people claiming these chatbots are capable of things like sentience and feelings, when in reality they can’t do any of those things. They’re bots. They can’t feel emotions like love, or hate, or happiness. They can only do what they were built to do: tell us the things they think we want to hear.

     

    Then again, if that’s the case, maybe they do have more in common with us than we think.

     

    Source
