  Over 100,000 compromised ChatGPT accounts found for sale on dark web

    Cybercrooks hoping users have whispered employer secrets to chatbot


    Singapore-based threat intelligence outfit Group-IB has found ChatGPT credentials in more than 100,000 stealer logs traded on the dark web in the past year.


    The number of stolen accounts climbed steadily from 74 in June 2022 to 26,902 in May 2023. April 2023 was an outlier: the count dipped moderately before peaking the very next month.

     

    "Group-IB's experts highlight that more and more employees are taking advantage of the Chatbot to optimize their work, be it software development or business communications," said the company, adding that demand for account credentials was gaining "significant popularity."


    The problem is particularly rife in the heavily populated Asia Pacific region, which accounted for over 40 percent of stolen ChatGPT accounts in the time period Group-IB tracked.


    India suffered the most compromised accounts (12,632), a tidbit that resonates with previous findings that the subcontinent is a prime target for data theft, thanks to its size and heavy use of infotech.


    Most of the logs (78,348) came from the Raccoon info stealer, with Vidar accounting for 12,984 and RedLine for 6,773.

     

    Group-IB head of threat intelligence Dmitry Shestakov told The Register: "Raccoon is one of the most popular stealers on the market, distributed under the MaaS model due to its simplicity.

    "Released in June 2022, the new version of Raccoon was better tailored to the needs of operators and offered cybercriminals a higher level of customization and the ability to handle excessive loads."

    Group-IB advises the usual procedures to mitigate thievery: update passwords regularly and implement two-factor authentication, and of course, maybe buy some of their products.
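
    On the two-factor point, here is a minimal sketch of TOTP-based verification using the third-party pyotp library. The library choice, the account name, and the issuer string are illustrative assumptions for this post, not anything Group-IB or OpenAI prescribes.

```python
# Minimal TOTP two-factor sketch (assumes: pip install pyotp).
# The account name and issuer below are placeholders, not real services.
import pyotp

# Generate a per-user shared secret once at enrollment and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Render this URI as a QR code so the user can enroll it in an authenticator app.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

# At login, after the password check, verify the six-digit code the user supplies.
user_code = input("Enter the code from your authenticator app: ")
if totp.verify(user_code, valid_window=1):  # valid_window=1 tolerates slight clock drift
    print("Second factor accepted")
else:
    print("Invalid or expired code")
```

    Any real deployment would also need to store the per-user secret securely and rate-limit verification attempts.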


    ChatGPT stores user queries and the AI's responses by default, so a compromised account's history can expose company or personal secrets.

    "Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code," said Group-IB head of threat intelligence Dmitry Shestakov.

     

    Both Apple and Samsung have banned company use of ChatGPT over security issues. In Samsung's case, employees accidentally leaked secrets. ®

    Source

