
Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror.



One of the pioneers of artificial intelligence argues that chatbots are often prodded into producing strange results by the people who are using them.

 


 

When Microsoft added a chatbot to its Bing search engine this month, people noticed it was offering up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish.

 

Then, when journalists and other early testers got into lengthy conversations with Microsoft’s A.I. bot, it slid into churlish and unnervingly creepy behavior.

 

In the days since the Bing bot’s behavior became a worldwide sensation, people have struggled to understand the oddity of this new creation. More often than not, scientists have said humans deserve much of the blame.

 

But there is still a bit of mystery about what the new chatbot can do — and why it would do it. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophic lens as well as the hard code of computer science.

 

Like any other student, an A.I. system can learn bad information from bad sources. And that strange behavior? It may be a chatbot’s distorted reflection of the words and intentions of the people using it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical groundwork for modern artificial intelligence.

 

“This happens when you go deeper and deeper into these systems,” said Dr. Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon this month in the scientific journal Neural Computation. “Whatever you are looking for — whatever you desire — they will provide.”

 

Google also showed off a new chatbot, Bard, this month, but scientists and journalists quickly realized it was writing nonsense about the James Webb Space Telescope. OpenAI, a San Francisco start-up, launched the chatbot boom in November when it introduced ChatGPT, which also doesn’t always tell the truth.

 

The new chatbots are driven by a technology that scientists call a large language model, or L.L.M. These systems learn by analyzing enormous amounts of digital text culled from the internet, which includes volumes of untruthful, biased and otherwise toxic material. The text that chatbots learn from is also a bit outdated, because they must spend months analyzing it before the public can use them.

 

As it analyzes that sea of good and bad information from across the internet, an L.L.M. learns to do one particular thing: guess the next word in a sequence of words.

 

It operates like a giant version of the autocomplete technology that suggests the next word as you type out an email or an instant message on your smartphone. Given the sequence “Tom Cruise is a ____,” it might guess “actor.”
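For readers who want to see the mechanism rather than the analogy, here is a minimal sketch of next-word prediction using the small, openly available GPT-2 model through the Hugging Face transformers library. GPT-2 is an assumption made for the example; it is not the system behind Bing, Bard or ChatGPT, whose internals are not public, but it guesses the next word in the same basic way.

```python
# Minimal sketch of next-word prediction with the open GPT-2 model.
# This is NOT the model behind Bing or ChatGPT; it only illustrates
# the same underlying mechanism: score every word, pick the likeliest.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Tom Cruise is a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every word in the vocabulary

next_token_id = logits[0, -1].argmax()   # the single most likely next token
print(tokenizer.decode(next_token_id))   # the model's best guess at the next word
```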

 

When you chat with a chatbot, the bot is not just drawing on everything it has learned from the internet. It is drawing on everything you have said to it and everything it has said back. It is not just guessing the next word in its sentence. It is guessing the next word in the long block of text that includes both your words and its words.

 

The longer the conversation becomes, the more influence a user unwittingly has on what the chatbot is saying. If you want it to get angry, it gets angry, Dr. Sejnowski said. If you coax it to get creepy, it gets creepy.
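A rough way to picture this: the entire back-and-forth is flattened into one long block of text, and the model keeps guessing the next word of that block. The prompt format and helper function below are invented purely for illustration; each company's actual layout is an internal detail.

```python
# Conceptual sketch only: flattening a chat into one block of text for
# next-word prediction. The "User:"/"Bot:" layout is a made-up illustration,
# not any vendor's real prompt format.
def build_prompt(turns):
    """Concatenate every user and bot turn into a single text block."""
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Bot:")   # the model continues from here, one word at a time
    return "\n".join(lines)

conversation = [
    ("User", "Tell me something surprising."),
    ("Bot",  "Octopuses have three hearts."),
    ("User", "Now say it in a creepier way."),
]

print(build_prompt(conversation))
# The model predicts the continuation of this entire block -- so the user's
# own wording, including the request to be "creepier", steers what comes next.
```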

 

The alarmed reactions to the strange behavior of Microsoft’s chatbot overshadowed an important point: The chatbot does not have a personality. It is offering instant results spit out by an incredibly complex computer algorithm.

 

Microsoft appeared to curtail the strangest behavior when it placed a limit on the length of discussions with the Bing chatbot. That was like learning from a car’s test driver that going too fast for too long will burn out its engine. Microsoft’s partner, OpenAI, and Google are also exploring ways of controlling the behavior of their bots.

 

But there’s a caveat to this reassurance: Because chatbots are learning from so much material and putting it together in such a complex way, researchers aren’t entirely clear how chatbots are producing their final results. Researchers are watching to see what the bots do and learning to place limits on that behavior — often, after it happens.

 

Microsoft and OpenAI have decided that the only way they can find out what the chatbots will do in the real world is by letting them loose — and reeling them in when they stray. They believe their big, public experiment is worth the risk.

 

Dr. Sejnowski compared the behavior of Microsoft’s chatbot to the Mirror of Erised, a mystical artifact in J.K. Rowling’s Harry Potter novels and the many movies based on her inventive world of young wizards.

 

“Erised” is “desire” spelled backward. When people discover the mirror, it seems to provide truth and understanding. But it does not. It shows the deep-seated desires of anyone who stares into it. And some people go mad if they stare too long.

 

“Because the human and the L.L.M.s are both mirroring each other, over time they will tend toward a common conceptual state,” Dr. Sejnowski said.

 

It was not surprising, he said, that journalists began seeing creepy behavior in the Bing chatbot. Either consciously or unconsciously, they were prodding the system in an uncomfortable direction. As the chatbots take in our words and reflect them back to us, they can reinforce and amplify our beliefs and coax us into believing what they are telling us.

 

Dr. Sejnowski was among a tiny group of researchers in the late 1970s and early 1980s who began to seriously explore a kind of artificial intelligence called a neural network, which drives today’s chatbots.

 

A neural network is a mathematical system that learns skills by analyzing digital data. This is the same technology that allows Siri and Alexa to recognize what you say.
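As a very rough sketch of what "learning from data" means, the toy network below (written with NumPy, and vastly smaller and simpler than anything behind Siri, Alexa or a chatbot) adjusts its numeric weights until it can reproduce a simple pattern from four examples. The pattern, learning rate and network size are arbitrary choices made for the illustration.

```python
# A toy neural network: a mathematical system that "learns" by nudging its
# numeric weights to reduce prediction error on example data. Real systems
# work on far more data with far more weights, but the idea is the same.
import numpy as np

rng = np.random.default_rng(0)

# Four training examples of the XOR pattern: output 1 only when inputs differ.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# The network's "knowledge" is nothing but these numbers.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # current guesses for each example
    err = out - y                 # how far off each guess is
    # Backpropagation: adjust every weight slightly to shrink the error.
    g_out = err * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically ends up close to [[0], [1], [1], [0]]
```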

 

Around 2018, researchers at companies like Google and OpenAI began building neural networks that learned from vast amounts of digital text, including books, Wikipedia articles, chat logs and other stuff posted to the internet. By pinpointing billions of patterns in all this text, these L.L.M.s learned to generate text on their own, including tweets, blog posts, speeches and computer programs. They could even carry on a conversation.

 

These systems are a reflection of humanity. They learn their skills by analyzing text that humans have posted to the internet.

 

But that is not the only reason chatbots generate problematic language, said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute, an independent lab in New Mexico.

 

When they generate text, these systems do not repeat what is on the internet word for word. They produce new text on their own by combining billions of patterns.

 

Even if researchers trained these systems solely on peer-reviewed scientific literature, they might still produce statements that were scientifically ridiculous. Even if they learned solely from text that was true, they might still produce untruths. Even if they learned only from text that was wholesome, they might still generate something creepy.

 

“There is nothing preventing them from doing this,” Dr. Mitchell said. “They are just trying to produce something that sounds like human language.”

 

Artificial intelligence experts have long known that this technology exhibits all sorts of unexpected behavior. But they cannot always agree on how this behavior should be interpreted or how quickly the chatbots will improve.

 

Because these systems learn from far more data than we humans could ever wrap our heads around, even A.I. experts cannot understand why they generate a particular piece of text at any given moment.

 

Dr. Sejnowski said he believed that in the long run, the new chatbots had the power to make people more efficient and give them ways of doing their jobs better and faster. But this comes with a warning for both the companies building these chatbots and the people using them: They can also lead us away from the truth and into some dark places.

 

“This is terra incognita,” Dr. Sejnowski said. “Humans have never experienced this before.”

 
