Conversational AI Will Learn to Push Your Buttons. We Need to Solve the Manipulation Problem.



    Ever since Captain Kirk spoke to the ship’s computer in the 1967 season of Star Trek, researchers have dreamed of enabling natural conversations between humans and machines. It took more than 50 years, but the largest companies in the world are finally on the verge of bringing this capability to billions of users around the globe. Most notably, Microsoft is integrating OpenAI’s impressive ChatGPT technology into its Bing search engine while Google is racing to release a competitive chatbot called Bard based on its LaMDA technology.


    As a Star Trek fan and a researcher of human-computer systems for over 30 years, I believe natural language is one of the most effective ways for people and machines to interact. On the other hand, I am deeply concerned that without sensible guardrails, conversational AI could be used to manipulate individuals with extreme precision and efficiency.


    To distinguish this danger from other AI-related risks, I refer to this growing threat as the AI “manipulation problem.” I believe it’s now an urgent issue for policymakers to address. What makes the problem unique is that conversational AI involves real-time engagement during which an AI system can exert targeted influence on a user, sense that user’s reaction to the influence, and then adjust its tactics to maximize impact. This might sound like an abstract process, but we humans usually just call it a conversation. After all, if you want to influence an individual, your best approach is often to speak with that person directly and adjust your arguments as you sense resistance or hesitation.


    The danger is that conversational AI has now advanced to the point where automated systems can engage individual users in flowing dialog that is coherent and convincing, and that could easily be deployed with a targeted persuasive agenda. And while current systems are primarily text-based, they will increasingly be combined with real-time voice, enabling natural spoken interactions between humans and machines. In addition, they will soon have a visual presence, paired with photorealistic digital faces (digital humans) that look, move, and express themselves like real people. And while interacting with online products and services through realistic dialog has a great many benefits, it could also become the ultimate deployment mechanism for AI-powered influence campaigns.


    The fact is, we’re now entering the age of natural computing, in which we interact regularly with “virtual spokespeople” that look, sound, and act like authentic people but are designed to represent the specific needs and objectives of the entities that deployed them. Corporations, state actors, or criminal enterprises could field these AI-driven conversational agents to skillfully pursue a persuasive agenda that aims to convince you to buy a particular product, believe a piece of misinformation, or even fool you into revealing your bank account details or other sensitive information.


    And trust me, these AI-driven spokespeople will be extremely skilled at achieving their persuasive goals. Unless limited by regulation, these systems will have access to personal data (your interests, values, and background) and will use it to craft dialog specifically designed to engage and influence you personally. In addition, these systems (unless regulated) will be able to analyze your emotional reactions in real time, using your webcam to process your facial expressions, eye motions, and pupil dilation—all of which can be used to infer your feelings at every moment. This means that a virtual spokesperson that engages you in an influence-driven conversation will be able to adapt its tactics based on how you respond to every word it speaks, detecting which strategies are working and which are not.


    You might argue this isn’t a new risk. Human salespeople already do the same thing, reading emotions and adjusting tactics. But consider this: AI systems can already detect reactions that no human can perceive. For example, AI systems can detect “micro-expressions” on your face and in your voice that are too subtle for human observers but which reflect inner feelings. Similarly, AI systems can read faint changes in your complexion known as “facial blood flow patterns” and tiny changes in your pupil size, both of which reflect emotional reactions. Unless protected by regulation, virtual spokespeople will be far more perceptive of our inner feelings than any human representative.


    Conversational AI will also learn to push your buttons. Unless limited by regulation, these platforms will compile data about how you reacted during each prior conversational interaction, tracking which tactics were most effective on you personally.  In other words, these AI systems will not only adapt to your immediate verbal and emotional responses, they will get better and better at “playing you” over time, learning how to draw you into conversation, guide you to accept new ideas, get you riled up, and ultimately drive you to buy things you don’t need or believe things you’d normally realize were absurd. And because this technology will be easily deployed at scale, these methods can be used to target and influence broad populations.
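    To make that feedback loop concrete, here is a minimal, purely hypothetical sketch in Python of the structural pattern described above: a simple epsilon-greedy bandit that keeps a running score for each conversational “tactic” and increasingly favors whichever one has produced the strongest measured engagement. The tactic labels, the AdaptiveInfluenceLoop class, and the engagement score are all illustrative placeholders, not a description of any deployed system.

import random

# Hypothetical illustration of the adaptive feedback loop described above:
# an epsilon-greedy bandit that tracks which conversational "tactic" has
# produced the strongest measured engagement and gradually favors it.
# Tactic names and engagement scores are placeholders, not a real system.

TACTICS = ["social_proof", "scarcity", "flattery", "authority"]

class AdaptiveInfluenceLoop:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon                   # chance of exploring a random tactic
        self.counts = {t: 0 for t in TACTICS}    # times each tactic has been tried
        self.scores = {t: 0.0 for t in TACTICS}  # running average engagement per tactic

    def choose_tactic(self):
        # Mostly exploit the best-scoring tactic; occasionally explore others.
        if random.random() < self.epsilon:
            return random.choice(TACTICS)
        return max(self.scores, key=self.scores.get)

    def record_reaction(self, tactic, engagement):
        # Update the running average for the tactic that was just used.
        self.counts[tactic] += 1
        n = self.counts[tactic]
        self.scores[tactic] += (engagement - self.scores[tactic]) / n

if __name__ == "__main__":
    # Toy simulation: one tactic happens to "work" better on this simulated
    # user, and the loop converges toward it over repeated interactions.
    loop = AdaptiveInfluenceLoop()
    for _ in range(500):
        tactic = loop.choose_tactic()
        simulated_engagement = random.gauss(0.7 if tactic == "flattery" else 0.4, 0.1)
        loop.record_reaction(tactic, simulated_engagement)
    print(loop.scores)

    The point is not this particular algorithm; any system that closes the loop between an attempted influence, a measured reaction, and the choice of the next tactic will exhibit the same learning behavior at scale.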


    Of course, these are future dangers. We have no reason to believe that current conversational systems have implemented deliberately manipulative techniques. In many ways, we’re in a honeymoon period, similar to the early days of social media before the large platforms adopted ad-based business models. Back then, there was no motivation to monetize users through aggressive tracking, profiling, and targeting techniques. The new risk, therefore, is that conversational platforms will adopt similar business models that prioritize targeted influence. If they do, it could motivate many of the abuses described above.


    This is why I believe the manipulation problem could be the most significant threat that AI poses to society in the near future. Regulators must treat it as an urgent danger. After all, ChatGPT was launched less than three months ago and has already reached over 100 million active users, making it the fastest-adopted application in history. We need guardrails that protect the public from real-time interactive manipulation by AI-driven conversational agents. It’s coming fast.



