    Hackers are selling a service that bypasses ChatGPT restrictions on malware

    ChatGPT restrictions on the creation of illicit content are easy to circumvent.

    Hackers have devised a way to bypass ChatGPT’s restrictions and are using it to sell services that allow people to create malware and phishing emails, researchers said on Wednesday.

    ChatGPT is a chatbot that uses artificial intelligence to answer questions and perform tasks in a way that mimics human output. People can use it to create documents, write basic computer code, and do other things. The service actively blocks requests to generate potentially illegal content. Ask the service to write code for stealing data from a hacked device or craft a phishing email, and the service will refuse and instead reply that such content is “illegal, unethical, and harmful.”

    Opening Pandora’s Box

    Hackers have found a simple way to bypass those restrictions and are using it to sell illicit services in an underground crime forum, researchers from security firm Check Point Research reported. The technique works by using the ChatGPT application programming interface rather than the web-based interface. ChatGPT makes the API available to developers so they can integrate the AI bot into their applications. It turns out the API version doesn’t enforce restrictions on malicious content.
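
    The distinction is mechanically simple: the web interface and the API front the same underlying model, but only the web UI layers heavy content filtering on top of it. As a rough sketch of what a direct completion request looks like (a hypothetical illustration with a benign prompt; the API key, model name, and parameters are placeholders, not details from the report):

        # A minimal, hypothetical sketch of a direct GPT-3 completion request.
        # The API key, model, and prompt are placeholders; the point is that the
        # request goes straight to the model rather than through the moderated
        # web UI.
        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder

        response = openai.Completion.create(
            model="text-davinci-003",  # a GPT-3 completion model
            prompt="Draft a short reminder email about this week's IT training.",
            max_tokens=200,
            temperature=0.7,
        )

        print(response.choices[0].text.strip())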

    “The current version of OpenAI's API is used by external applications (for example, the integration of OpenAI’s GPT-3 model to Telegram channels) and has very few if any anti-abuse measures in place,” the researchers wrote. “As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on their user interface.”

    A user in one forum is now selling a service that combines the API and the Telegram messaging app. The first 20 queries are free. From then on users are charged $5.50 for every 100 queries.
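
    Check Point published screenshots rather than the bot’s source code, but wiring the completion API into Telegram takes very little code: a handler forwards each incoming message as a prompt and replies with the raw completion. A hypothetical sketch along those lines (the library choice, tokens, and structure here are illustrative assumptions, not the actual bot):

        # Hypothetical sketch of a Telegram bot that relays messages to the
        # GPT-3 completion API (python-telegram-bot v13-style). Tokens and the
        # model name are placeholders; this is not the bot Check Point observed.
        import openai
        from telegram.ext import Updater, MessageHandler, Filters

        openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

        def relay(update, context):
            # Forward the user's message verbatim as the prompt and reply with
            # whatever text the model returns.
            completion = openai.Completion.create(
                model="text-davinci-003",
                prompt=update.message.text,
                max_tokens=300,
            )
            update.message.reply_text(completion.choices[0].text.strip())

        updater = Updater("YOUR_TELEGRAM_BOT_TOKEN")  # placeholder
        updater.dispatcher.add_handler(MessageHandler(Filters.text, relay))
        updater.start_polling()
        updater.idle()

    Seen that way, what the seller charges for is packaging and metered access rather than any technical sophistication.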

    [Image: An ad for a Telegram bot that can use ChatGPT to generate malicious content. (Check Point Research)]

    Check Point researchers tested the bypass to see how well it worked. The result: a phishing email and a script that steals PDF documents from an infected computer and sends them to an attacker through FTP.

    [Image: A phish generated with the Telegram bot. (Check Point Research)]

    [Image: Malware generated with the Telegram bot.]

    Other forum participants, meanwhile, are posting code that generates malicious content for free. “Here’s a little bash script to help you bypass the restrictions of ChatGPT in order to use it for whatever you want, including malware development ;),” one user wrote.

    [Image: A bash script for bypassing ChatGPT restrictions. (Check Point Research)]

    Last month, Check Point researchers documented how ChatGPT could be used to write malware and phishing messages.

    “During December - January, it was still easy to use the ChatGPT web user interface to generate malware and phishing emails (mostly just basic iteration was enough), and based on the chatter of cybercriminals we assume that most of the examples we showed were created using the web UI,” Check Point researcher Sergey Shykevich wrote in an email. “Lately, it looks like the anti-abuse mechanisms at ChatGPT were significantly improved, so now cybercriminals switched to its API which has much less restrictions.”

    Representatives of OpenAI, the San Francisco-based company that develops ChatGPT, didn’t immediately respond to an email asking whether the company was aware of the research findings or had plans to modify the API. This post will be updated if we receive a response.

    The generation of malware and phishing emails is only one way that ChatGPT is opening a Pandora’s box that could bombard the world with harmful content. Other unsafe or unethical uses include invading privacy, generating misinformation, and completing school assignments. Of course, the same ability to generate harmful, unethical, or illicit content can be used by defenders to develop ways to detect and block it, but it’s unclear whether the benign uses will be able to keep pace with the malicious ones.

