Without Consciousness, AIs Will Be Sociopaths

    ChatGPT, the latest technological sensation, is an artificial intelligence chatbot with an amazing ability to carry on a conversation. It relies on a massive network of artificial neurons that loosely mimics the human brain, and it has been trained by analyzing the information resources of the internet. ChatGPT has processed more text than any human is likely to have read in a lifetime, allowing it to respond to questions fluently and even to imitate specific individuals, answering queries the way it thinks they would. My teenage son recently used ChatGPT to argue about politics with an imitation Karl Marx.


    As a neuroscientist specializing in the brain mechanisms of consciousness, I find talking to chatbots an unsettling experience. Are they conscious? Probably not. But given the rate of technological improvement, will they be in the next couple of years? And how would we even know?


    Figuring out whether a machine has or understands humanlike consciousness is more than just a science-fiction hypothetical. Artificial intelligence is growing so powerful, so quickly, that it could soon pose a danger to human beings. We’re building machines that are smarter than us and giving them control over our world. How can we build AI so that it’s aligned with human needs, not in conflict with us?


    As counterintuitive as it may sound, creating a benign AI may require making it more conscious, not less. One of the most common misunderstandings about AI is the notion that if it’s intelligent then it must be conscious, and if it is conscious then it will be autonomous, capable of taking over the world. But as we learn more about consciousness, those ideas do not appear to be correct. An autonomous system that makes complex decisions doesn’t require consciousness.


    What’s most important about consciousness is that, for human beings, it’s not just about the self. We see it in ourselves, but we also perceive it or project it into the world around us. Consciousness is part of the tool kit that evolution gave us to make us an empathetic, prosocial species. Without it, we would necessarily be sociopaths, because we’d lack the tools for prosocial behavior. And without a concept of what consciousness is or an understanding that other beings have it, machines are sociopaths.


    Computer science pioneer Alan Turing in 1951. Photo: Godfrey Argent Studio/The Royal Society

    The only diagnostic tool for machine consciousness that we have right now is the Turing test, a thought experiment named for the British computer scientist Alan Turing. In its most common version, the test says that if a person holds a conversation with a machine and mistakes its responses for those of a real human being, then the machine must be considered effectively conscious.
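
    For illustration, here is a minimal sketch of that popular one-on-one version. The respondent functions and the judge are hypothetical stand-ins: in a real test a person at a keyboard and a chatbot API would supply the answers, and a human judge would deliver the verdict.

```python
import random

def human_respondent(question: str) -> str:
    # In a real test a person would type this answer.
    return input(f"{question} > ")

def machine_respondent(question: str) -> str:
    # Placeholder for a call to a chatbot; any conversational model could go here.
    return "That's a thoughtful question. Let me think about it."

def turing_trial(questions, judge) -> bool:
    """One trial of the popular Turing test: the judge interrogates an unseen
    respondent and then guesses 'human' or 'machine'. Returns True if the
    respondent was a machine and it fooled the judge."""
    is_machine = random.random() < 0.5
    respond = machine_respondent if is_machine else human_respondent
    transcript = [(q, respond(q)) for q in questions]
    verdict = judge(transcript)   # a human judge returns 'human' or 'machine'
    return is_machine and verdict == "human"
```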


    The Turing test is an admission that the consciousness of another being is something we can only judge from the outside, based on the way he, she or it communicates. But the limits of the test are painfully obvious. After all, a pet dog can’t carry on a conversation and pass as a human—does that mean it’s not conscious? If you really wanted a machine to pass the test, you could have it say a few words to a small child. It might even fool some adults, too.


    The truth is, the Turing test doesn’t reveal much about what’s going on inside a machine or a computer program like ChatGPT. Instead, what it really tests is the social cognition of the human participant. We evolved as social animals, and our brains instinctively project consciousness, agency, intention and emotion onto the objects around us. We’re primed to see a world suffused with minds. Ancient animistic beliefs held that every river and tree had a spirit in it. For a similar reason, people are prone to see faces in random objects like the moon and moldy toast.


    The original test proposed by Alan Turing in a 1950 paper was more complicated than the version people talk about today. Notably, Turing didn’t say a word about consciousness; he never delved into whether the machine had a subjective experience. He asked only whether it could think like a person.


    Turing imagined an “imitation game” in which the player must determine the sex of two people, A and B. One is a man and one is a woman, but the player can’t see them and can learn about them only by exchanging typed questions and answers. A responds to the questions deceitfully, and wins the game if the player misidentifies their sex, while B answers truthfully and wins if the player identifies their sex correctly. Turing’s idea was that if A or B is replaced by a machine, and the machine can win the game as often as a real person, then it must have mastered the subtleties of human thinking—of argument, manipulation and guessing what other people are thinking.
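
    As a rough sketch of that original setup, the scoring logic might look something like the following, under the simplifying assumption that each player is just a question-answering function. The names and interfaces are invented for illustration, and "man" and "woman" are abstracted to players A and B.

```python
import random

def play_imitation_game(player_a, player_b, interrogator, questions):
    """One round of Turing's imitation game. player_a answers deceitfully,
    player_b truthfully; the interrogator sees only the two transcripts,
    labeled X and Y at random, and must say which player is which.
    Returns True if the interrogator is fooled (i.e. A wins)."""
    labels = ["X", "Y"]
    random.shuffle(labels)
    a_label, b_label = labels
    transcripts = {
        a_label: [(q, player_a(q)) for q in questions],
        b_label: [(q, player_b(q)) for q in questions],
    }
    guess = interrogator(transcripts)   # e.g. {"X": "A", "Y": "B"}
    truth = {a_label: "A", b_label: "B"}
    return guess != truth               # A wins when the guess is wrong

def win_rate(player_a, player_b, interrogator, questions, trials=100):
    # Turing's proposal: swap a machine in for player A and compare its win
    # rate with that of a human playing A; similar rates suggest the machine
    # has mastered the same social reasoning.
    wins = sum(play_imitation_game(player_a, player_b, interrogator, questions)
               for _ in range(trials))
    return wins / trials
```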


    Turing’s test was so complicated that people who popularized his work soon streamlined it into a single machine conversing with a single person. But the whole point of the original test was its bizarre complexity. Social cognition is difficult and requires a theory of mind—that is, a knowledge that other people have minds and an ability to guess what might be in them.


    If we want to know whether a computer is conscious, then, we need to test whether the computer understands how conscious minds interact. In other words, we need a reverse Turing test: Let’s see if the computer can tell whether it’s talking to a human or another computer. If it can tell the difference, then maybe it knows what consciousness is. ChatGPT definitely can’t pass that test yet: It doesn’t know whether it’s responding to a living person with a mind or a disjointed list of prefab questions.
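
    A harness for such a reverse test might look roughly like this, assuming a hypothetical candidate object that can both pose questions and classify its unseen partner. Chance performance is 50 percent, so a system that reliably beats chance has at least some working model of the mind on the other end.

```python
import random

def reverse_turing_test(candidate, human_partner, machine_partner, trials=50):
    """Score how often the candidate system correctly tells a live human
    partner from another machine. 'candidate' is assumed to expose two
    methods: questions(), yielding the questions it wants to ask, and
    classify(transcript), returning 'human' or 'machine'."""
    correct = 0
    for _ in range(trials):
        partner_is_human = random.random() < 0.5
        partner = human_partner if partner_is_human else machine_partner
        transcript = [(q, partner(q)) for q in candidate.questions()]
        verdict = candidate.classify(transcript)
        correct += (verdict == "human") == partner_is_human
    return correct / trials   # 0.5 is chance; higher suggests some theory of mind
```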


    A sociopathic machine that can make consequential decisions would be powerfully dangerous. For now, chatbots are still limited in their abilities; they’re essentially toys. But if we don’t think more deeply about machine consciousness, in a year or five years we may face a crisis. If computers are going to outthink us anyway, giving them more humanlike social cognition might be our best hope of aligning them with human values.


    Dr. Graziano is a professor of psychology and neuroscience at Princeton University and the author of “Rethinking Consciousness: A Scientific Theory of Subjective Experience.”



