What Isaac Asimov’s Robbie Teaches About AI and How Minds 'Work'
    When humans didn't know what moved the ocean and the sun, they granted those objects mental states. Something similar can happen with artificial intelligence.

    In Isaac Asimov’s classic science fiction story “Robbie,” the Weston family owns a robot who serves as a nursemaid and companion for their precocious preteen daughter, Gloria. Gloria and the robot Robbie are friends; their relationship is affectionate and mutually caring. Gloria regards Robbie as her loyal and dutiful caretaker. However, Mrs. Weston becomes concerned about this “unnatural” relationship between the robot and her child and worries about the possibility of Robbie causing harm to Gloria (despite its being explicitly programmed not to do so); it is clear she is jealous. After several failed attempts to wean Gloria off Robbie, her father, exasperated and worn down by the mother’s protestations, suggests a tour of a robot factory—there, Gloria will be able to see that Robbie is “just” a manufactured robot, not a person, and fall out of love with it. Gloria must come to learn how Robbie works, how he was made; then she will understand that Robbie is not who she thinks he is. This plan does not work. Gloria does not learn how Robbie “really works,” and in a plot twist, Gloria and Robbie become even better friends. Mrs. Weston, the spoilsport, is foiled yet again. Gloria remains “deluded” about who Robbie “really is.”


    What is the moral of this tale? Most importantly, that those who interact and socialize with artificial agents, without knowing (or caring) how they “really work” internally, will develop distinctive relationships with them and ascribe to them those mental qualities appropriate for their relationships. Gloria plays with Robbie and loves him as a companion; he cares for her in return. There is an interpretive dance that Gloria engages in with Robbie, and Robbie’s internal operations and constitution are of no relevance to it. When the opportunity to learn such details arises, further evidence of Robbie’s functionality (after it saves Gloria from an accident) distracts and prevents Gloria from learning any more.


    Philosophically speaking, “Robbie” teaches us that in ascribing a mind to another being, we are not making a statement about the kind of thing it is, but rather, revealing how well, or how poorly, we understand how it works. For instance, Gloria thinks Robbie is intelligent, but her parents think they can reduce its seemingly intelligent behavior to lower-level machine operations. To see this more broadly, note the converse case where we ascribe mental qualities to ourselves that we are unwilling to ascribe to programs or robots. These qualities, like intelligence, intuition, insight, creativity, and understanding, have this in common: We do not know what they are. Despite the extravagant claims often bandied about by practitioners of neuroscience and empirical psychology, and by sundry cognitive scientists, these self-directed compliments remain undefinable. Any attempt to characterize one employs the other (“true intelligence requires insight and creativity” or “true understanding requires insight and intuition”) and engages in, nay requires, extensive hand-waving.


    But even if we are not quite sure what these qualities are or what they bottom out in, whatever the mental quality, the proverbial “educated layman” is sure that humans have it and machines like robots do not—even if machines act like we do, producing those same products that humans do, and occasionally replicating human feats that are said to require intelligence, ingenuity, or whatever else. Why? Because, like Gloria’s parents, we know (thanks to being informed by the system’s creators in popular media) that “all they are doing is [table lookup / prompt completion / exhaustive search of solution spaces].” Meanwhile, the mental attributes we apply to ourselves are so vaguely defined, and our ignorance of our mental operations so profound (currently), that we cannot say “human intuition (insight or creativity) is just [fill in the blanks with banal physical activity].”


    Current debates about artificial intelligence, then, proceed the way they do because whenever we are confronted with an “artificial intelligence,” one whose operations we (think we) understand, it is easy to quickly respond: “All this artificial agent does is X.” This reductive description demystifies its operations, and we are therefore sure it is not intelligent (or creative or insightful). In other words, those beings or things whose internal, lower-level operations we understand and can point to and illuminate are merely operating according to known patterns of banal physical operations. Those seemingly intelligent entities whose internal operations we do not understand are capable of insight and understanding and creativity. (Resemblance to humans helps too; we more easily deny intelligence to animals that do not look like us.)


    But what if, like Gloria, we did not have such knowledge of what some system or being or object or extraterrestrial is doing when it produces its apparently “intelligent” answers? What qualities would we ascribe to it to make sense of what it is doing? This level of incomprehensibility is perhaps rapidly approaching. Witness the perplexed reactions of some ChatGPT developers to its supposedly “emergent” behavior, where no one seems to know just how ChatGPT produced the answers it did. We could, of course, insist that “all it’s doing is (some kind of) prompt completion.” But then we could just as well say of humans, “It’s just neurons firing.” Neither ChatGPT nor humans would make sense to us that way.


    The evidence suggests that if we were to encounter a sufficiently complicated and interesting entity that appears intelligent, but we do not know how it works and cannot utter our usual dismissive line, “All x does is y,” we would start using the language of “folk psychology” to govern our interactions with it, to understand why it does what it does, and importantly, to try to predict its behavior. By historical analogy, when we did not know what moved the ocean and the sun, we granted them mental states. (“The angry sea believes the cliffs are its mortal foes.” Or “The sun wants to set quickly.”) Once we knew how they worked, thanks to our growing knowledge of the physical sciences, we demoted them to purely physical objects. (A move with disastrous environmental consequences!) Similarly, once we lose our grasp on the internals of artificial intelligence systems, or grow up with them, not knowing how they work, we might ascribe minds to them too. This is a matter of pragmatic decision, not discovery. For that might be the best way to understand what they do and why.


    This should prompt us to look a little closer. For, come to think of it, how do I know that other humans have minds like mine? Roughly: They look like me, they act like me, and so, I reason that they must have minds like mine, which work the way I think mine does. (This is an entirely reasonable inference to the best explanation for their visible, external behavior.) We never, though, open the brains of other human beings to check for minds, because we would not know what to look for. More to the point, we know what we would see: a brain, and we do not know how brains work. Our intentionality, our understanding, is mysterious too when viewed at this lower level of description. And so, because we cannot find physical correlates of our intelligence (and even if we could, using them to deal with intelligent humans would be far too cumbersome), we instead observe how human beings behave and act, and how they conform to psychological generalizations. If someone wants to get into medical school, and they believe that studying hard will help them do so, then we can predict that they may be found in a library, studying away diligently. That is what “normal, intelligent” human beings do. This is the interpretive dance we engage in with humans; the language of psychology emerges from these interactions. It is how we make sense of our fellow humans.


    This means that our fellow humans, too, are entities whose complex and poorly understood innards do not allow us to explain, predict, and understand their interactions with us in terms of their physical composition and properties (the way we can with objects like stones or glass bottles) or in terms of their design properties (the way we can with aircraft or mechanical pencils). Because we must use higher-level psychological explanations, the best way to make sense of human beings’ behavior is to anthropomorphize them! That is, the best way to make sense of these other beings distinct from me (other “humans”) is to treat them as if they were just like me in kind. The crucial point here is that I did not have to regard other human beings as being like me. I could have perhaps regarded them as curious aliens who happen to resemble me and act like me but were not really like me in some “important, crucial” sense, because I did not have conclusive proof that they had internal lives and minds like mine. Instead, we choose to anthropomorphize humans, because doing so makes interactions with them more tractable, a situation preferable to enduring a solipsistic existence, convinced that ours is the only mind that exists.


    This philosophical analysis matters because there is an important balancing act we must engage in when thinking about legal regulation of artificial intelligence research: We want the technical advantages and social benefits of artificial intelligence (such as the amazing predictions of protein structures produced by AlphaFold), so we want their designers to continue developing such systems. But these companies need liability cover—like the cover the Supreme Court provided the railways in their fledgling days—otherwise, the designers of artificial intelligence systems would stay out of such a financially risky arena. But we also want society to be protected from the negative effects of such smart programs, especially when they take actions that are not anticipated—which, of course, is also part of what makes them desirable.


    So, in legal and economic terms, we need to allocate risk and liability appropriately. One way to do so builds upon this revised understanding of artificial intelligence. When we have a conceptual sense that the artificial agents we interact with are agents in the psychological sense—that is, when we understand their actions as being caused by their beliefs and desires—we can consider these systems the legal representatives (the legal agents) of those who develop and deploy them, much as hospitals employ doctors who act on their behalf, who can sign contracts and take actions, and for whose acts the hospital is liable. (The legal system does not, strictly, have to wait for such a conceptual understanding to be in place before deeming artificial agents to be legal agents, but broader social acceptance of such regulations will be easier if that understanding is widespread.)

    These systems would then be the legal agents of their legal principals—for example, the Bing chatbot would be the legal agent of its principal, Microsoft. The principal is then liable for their actions and consequences—as we, the broader public, would want—but only within the scope of their duties, which is what their developers and deployers would want. For instance, a public transport company is liable for what its drivers do on the job but not off it. Transit companies can hire drivers, then, knowing that they are justifiably liable for their employees’ actions on the job but protected when those employees “go rogue” off it. Similarly, if a customer purchased a custom version of Bing to provide expert guidance on pricing, Microsoft would be liable for the pricing advice it gives; but if the customer were to use it for another task, say advising on finding suitable romantic partners, Microsoft would no longer be liable for any bad advice Bing might provide, for such advice would be outside the scope of its proposed duties.

    For another example, consider Google’s Gmail agent, which scans emails for content it can use to serve advertisements to Gmail users. Google’s risible response to the charge of privacy violations is that because humans do not scan users’ emails, there is no privacy violation. This is not a defense Google could employ if its Gmail agent were considered its legal agent, because by law, the knowledge gained by a legal agent is directly attributed to its principal. Google’s “automation screen” thus fails because of the legal-agent status of the programs it deploys. Here, our interests are protected by the legal status granted to the artificial agent. This does not diminish our rights; rather, it protects them.


    Consider what we would do if extraterrestrials alighted on this planet of ours and said, “Take us to your leader!” How would we understand and describe them? What if their innards were so mysterious that our best science gave us no handle on how they functioned? We would have to function like diligent field anthropologists, looking for behavioral evidence that we could correlate with their pronouncements, and start considering the possibility that they have minds like ours. Our lawyers would have to assess the status of these beings in our social orderings and, on seeing that they filled important executive roles and that people had formed personal relationships with them, perhaps think seriously about evaluating their applications for citizenship and legal status. An analogous situation exists today with regard to the artificial agents and programs in our midst, with a crucial difference: We have made and designed them. This familiarity is tinged with contempt, but the nature of our interpretive dance with them can and will change, depending on how mysterious we find them. The more impenetrable they become in terms of their internal operations, the more sophisticated their functioning, the more we will have to rely on external descriptions using psychological terms like “agent.” This would not be a concession to anything but common sense. And our natural intelligence.


    Source


    (May require free registration to view)

