Microsoft Alarmed at Doctors Using ChatGPT to Tell Patients Bad News

    aum


Dr. Robot will see you now.

Doctors are using OpenAI's ultra-popular AI chatbot ChatGPT in a surprising new way, the New York Times reports: to communicate with patients more empathetically.

In other words, they're using a chatbot to appear more human — a fascinating contradiction that sheds light on the difficulties of balancing the importance of objective medical guidance with human compassion.

Some medical professionals, according to the report, are using ChatGPT to deliver bad medical news, a practice that even Microsoft, OpenAI's close corporate partner, isn't so sure about.

"As a patient, I'd personally feel a little weird about it," Peter Lee, Microsoft corporate vice president for research and incubations, told the NYT.

One advantage of using ChatGPT, some experts claim, is that it can explain medical concepts in plain language, free of confusing jargon.

"Doctors are famous for using language that is hard to understand or too advanced," Christopher Moriates, who worked on a project that used ChatGPT to inform patients of available treatments for alcohol use disorder, told the NYT. "It is interesting to see that even words we think are easily understandable really aren't."

Others took things even further. Gregory Moore, who until recently was a senior executive leading health and life sciences at Microsoft, told the newspaper that he was blown away after asking ChatGPT for advice on how to help a friend with advanced cancer.

But not everybody agrees that using a chatbot to replace the most human part of the profession is a good idea. For one, patients place a lot of value on compassion when evaluating the healthcare they receive.

Having that empathy replaced with a tool that infamously can't distinguish between right and wrong could make the situation a lot worse — especially if it were to be used to make medical decisions.

"I know physicians are using this," Stanford Health Care's Dev Dash, who is evaluating tools like ChatGPT in a healthcare setting, told the NYT. "I've heard of residents using it to guide clinical decision-making. I don't think it's appropriate."

Whether patients are aware that a chatbot is behind their doctor's words remains to be seen. But does that really matter when it's not a life-and-death situation?

It's the kind of thorny ethical conundrum we'll likely be grappling with for quite some time.

    Source

