How is the Dark Web Reacting to the AI Revolution?


    A quick search for “ChatGPT” on the dark web and Telegram shows 27,912 mentions in the past six months.

     

    Much has been written about the potential for threat actors to use language models. With open source large language models (LLMs) such as LLaMA and Orca, and now the cybercrime model WormGPT, the trends around the commodification of cybercrime and the increasing capabilities of models are set to collide.

     

    Threat actors are already engaging in rigorous discussions of how language models can be used for everything from identifying 0-day exploits to crafting spear-phishing emails.

     

    Open source models represent a particularly compelling opportunity for threat actors since they haven’t undergone reinforcement learning from human feedback (RLHF) focused on preventing risky or illegal answers.

     

    This allows threat actors to actively use them to identify 0-days, write spear-phishing emails, and perform other types of cybercrime without the need for jailbreaks.

     

    Threat exposure management firm Flare has identified more than 200,000 OpenAI credentials currently being sold on the dark web in the form of stealer logs.

     

    While this is undoubtedly concerning, the statistic only begins to scratch the surface of threat actors' interests in ChatGPT, GPT-4, and AI language models more broadly.

     

    Login page for a phishing-at-scale platform (Source: Flare)

    Trends Collide: The Cybercrime Ecosystem and Open Source AI Language Models 

    In the past five years, there has been a dramatic growth in the commodification of cybercrime. A vast underground network now exists across Tor and illicit Telegram channels in which cybercriminals buy and sell personal information, network access, data leaks, credentials, infected devices, attack infrastructure, ransomware and more.

     

    Commercially minded cybercriminals will likely make increasing use of the rapidly proliferating open source AI language models. The first such application, WormGPT, has already been created and is being sold for a monthly access fee.

    Customized Spear-Phishing at Scale

    Phishing-as-a-Service (PhaaS) already exists and provides ready-made infrastructure to launch phishing campaigns for a monthly fee.

     

    There are already extensive discussions among threat actors about using WormGPT to facilitate broader, personalized phishing attacks.

     

    The use of generative AI will likely enable cybercriminals to launch attacks against thousands of users with customized messages built from data gleaned from social media accounts, OSINT sources, and online databases, dramatically increasing the threat that email phishing poses to employees.

     

    A threat actor explains WormGPT (Source: Flare)

     

    "Tomorrow, API-WormGPT will be provided by Galaxy dev channel, the request status is unlimited and will be calculated periodically, and to use API-WORMGPT, you need to get an API-KEY. The latest news will be announced," a threat actor advertises WormGPT on Telegram.

     

    "If you don't know what WORMGPT is: This WORMGPT is an unlimited version of CHATGPT, designed by hackers and made for illegal work, such as phishing and malware, etc. without any ethical sources."

    Automated Exploit & Exposure Identification

    Projects such as BabyAGI seek to use language models to loop on thoughts and carry out actions online, and potentially in the real world. As things stand today, many companies don’t have full visibility into their attack surface.
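
    To make "looping on thoughts and carrying out actions" concrete, the sketch below shows the generic plan-act loop that projects in this space build on. It is an illustrative skeleton, not BabyAGI's actual code: the `llm` callable is a hypothetical stand-in for whatever model backend is used, and real projects add task queues, memory stores, and tool integrations.

```python
from typing import Callable

# Minimal sketch of the plan-act loop that agent projects such as BabyAGI build on.
# `llm` is any callable that takes a prompt string and returns the model's reply.
def agent_loop(llm: Callable[[str], str], objective: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []  # record of actions taken and their results
    for _ in range(max_steps):
        # 1. "Think": ask the model for the next action, given the objective and prior results.
        action = llm(f"Objective: {objective}\nCompleted so far: {history}\nWhat is the single next action?")
        # 2. "Act": ask the model to carry it out (or dispatch to a tool) and report the outcome.
        result = llm(f"Carry out this action and report the outcome: {action}")
        history.append(f"{action} -> {result}")
        # 3. Stop once the model declares the objective met.
        if "objective complete" in result.lower():
            break
    return history
```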

     

    Many companies effectively rely on threat actors not quickly identifying unpatched services, credentials and API keys exposed in public GitHub repositories, and other forms of high-risk data exposure.

     

    Semi-autonomous language models could quickly and abruptly shift the threat landscape by automating exposure detection at scale for threat actors.

     

    Right now, threat actors rely on a mix of tools used by cybersecurity professionals and manual effort to identify exposure that can grant initial access to a system.

     

    We are likely only years, or even months, away from systems that can not only detect obvious exposure, such as credentials in a repository, but also identify new 0-day exploits in applications, dramatically decreasing the time security teams have to respond to exploits and data exposure.
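
    As a minimal illustration of what "detecting obvious exposure such as credentials in a repository" looks like in practice, here is a small sketch in the spirit of the secret-scanning tools defenders already run against their own code. The patterns are deliberately simple examples (the AWS access key ID format is well known); real scanners ship far larger rule sets, and the paths and names here are illustrative only.

```python
import re
from pathlib import Path

# Illustrative detection patterns only; production secret scanners use many more rules.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH)? ?PRIVATE KEY-----"),
    "Hard-coded API key or secret": re.compile(
        r"(?i)\b(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Walk a directory tree and report (path, line number, pattern name) for each hit."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for path, lineno, name in scan_tree("."):
        print(f"{path}:{lineno}: possible {name}")
```

    The point of the sketch is the asymmetry the article describes: a loop this simple already finds low-hanging exposure, and language models stand to automate the judgment calls that currently require manual effort.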

    Vishing and Deepfakes

    Advances in generative AI also look set to create an extremely challenging environment for defenders when it comes to vishing attacks. AI-driven services can already realistically clone the sound of an individual's voice from less than 60 seconds of audio, and deepfake technology continues to improve.

     

    Right now, deepfakes remain in the uncanny valley, making them somewhat obvious. However, the technology is rapidly progressing, and researchers continue to create and deploy additional open source projects.

     

    WormGPT responds to a prompt asking for an example of malware written in Python (Source: Flare)

    Hacking & Malware Generative AI Models

    Open source LLMs focused on red teaming activities, such as pen-test GPT, already exist.

     

    The functionality and specialization of a model largely depend on a multi-step process involving the data the model is trained on, reinforcement learning from human feedback, and other variables.

     

    "There are some promising open source models like orca which has promise for being able to find 0days if it was tuned on code," explains a threat actor discussing Microsoft's Orca LLM.

    What Does this Mean for Security Teams?

    Your margin for error as a defender is about to drop substantially. Reducing SOC noise to focus on high-value events, and improving mean time to detect (MTTD) and mean time to respond (MTTR) for high-risk exposure, whether on the dark or clear web, should be priorities.
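
    As a reminder of what those two metrics measure, here is a small illustrative calculation: MTTD averages the gap between when an exposure appears and when it is detected, and MTTR averages the gap between detection and remediation. The incident records below are hypothetical.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the exposure appeared, when it was detected,
# and when it was remediated.
incidents = [
    {"appeared": datetime(2023, 7, 1, 9, 0),  "detected": datetime(2023, 7, 1, 15, 0), "resolved": datetime(2023, 7, 2, 10, 0)},
    {"appeared": datetime(2023, 7, 3, 8, 30), "detected": datetime(2023, 7, 3, 9, 45),  "resolved": datetime(2023, 7, 3, 18, 0)},
]

# MTTD: average time from an exposure appearing to its detection.
mttd_hours = mean((i["detected"] - i["appeared"]).total_seconds() / 3600 for i in incidents)
# MTTR: average time from detection to remediation.
mttr_hours = mean((i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents)

print(f"MTTD: {mttd_hours:.1f} hours, MTTR: {mttr_hours:.1f} hours")
```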

     

    AI adoption for security at companies will likely move considerably more slowly than it will for attackers, creating an asymmetry that adversaries will attempt to exploit.

     

    Security teams must build an effective attack surface management program, ensure that employees receive substantial training on deepfakes and spear-phishing, and, beyond that, evaluate how AI can be used to rapidly detect and remediate gaps in the security perimeter.

     

    Security is only as strong as the weakest link, and AI is about to make that weak link much easier to find.

     

    About Eric Clay

     

    Eric is the Security Researcher at Flare, a threat exposure monitoring platform. He has experience in security data analytics, security research, and applications of AI in cybersecurity.

     

    Sponsored and written by Flare

     


