A lawyer used ChatGPT for legal research, but later found the chatbot created fake cases


    In a recent court case, a lawyer relied on ChatGPT for legal research, resulting in the submission of false information. The incident sheds light on the potential risks associated with AI in the legal field, including the propagation of misinformation.


    The case revolved around a man suing an airline over an alleged personal injury. The plaintiff's legal team submitted a brief citing several previous court cases to support their argument, seeking to establish a legal precedent for their claim. However, the airline's lawyers discovered that some of the referenced cases did not exist and promptly alerted the presiding judge.


    Judge Kevin Castel, presiding over the case, expressed astonishment at the situation, labeling it an "unprecedented circumstance." In an order, the judge demanded an explanation from the plaintiff's legal team.


Steven Schwartz, a colleague of the lead attorney, admitted to using ChatGPT to search for similar legal precedents. In a written statement, Schwartz said he "had never previously used AI for legal research and was unaware that its content could be false," and expressed deep regret for relying on the tool.




    Screenshots attached to the filing showed a conversation between Schwartz and ChatGPT. In the prompt, Schwartz asked if a specific case, Varghese v. China Southern Airlines Co Ltd, was genuine.


    ChatGPT affirmed its authenticity, indicating that the case could be found in legal reference databases such as LexisNexis and Westlaw.


    However, subsequent investigations revealed that the case did not exist, leading to further doubts about the other cases provided by ChatGPT.


    In light of this incident, both lawyers involved in the case, Peter LoDuca and Steven Schwartz from the law firm Levidow, Levidow & Oberman, have been summoned to an upcoming disciplinary hearing on June 8 to explain their actions.


    This event has prompted discussions within the legal community regarding the appropriate use of AI tools in legal research and the need for comprehensive guidelines to prevent similar occurrences.


    Source: NYT


