How ChatGPT—and Bots Like It—Can Spread Malware


    Karlston

• 1.1k views
• 6 minutes

    Generative AI is a tool, which means it can be used by cybercriminals, too. Here’s how to protect yourself.


    The AI landscape has started to move very, very fast: consumer-facing tools such as Midjourney and ChatGPT are now able to produce incredible image and text results in seconds based on natural language prompts, and we're seeing them get deployed everywhere from web search to children's books.


    However, these AI applications are being turned to more nefarious uses, including spreading malware. Take the traditional scam email, for example: It's usually littered with obvious mistakes in its grammar and spelling—mistakes that the latest group of AI models don't make, as noted in a recent advisory report from Europol.


    Think about it: A lot of phishing attacks and other security threats rely on social engineering, duping users into revealing passwords, financial information, or other sensitive data. The persuasive, authentic-sounding text required for these scams can now be pumped out quite easily, with no human effort required, and endlessly tweaked and refined for specific audiences.


In the case of ChatGPT, it's important to note first that developer OpenAI has built safeguards into it. Ask it to "write malware" or a "phishing email" and it will tell you that it's "programmed to follow strict ethical guidelines that prohibit me from engaging in any malicious activities, including writing or assisting with the creation of malware."

[Image] ChatGPT won't code malware for you, but it's polite about it. (OpenAI via David Nield)

    However, these protections aren't too difficult to get around: ChatGPT can certainly code, and it can certainly compose emails. Even if it doesn't know it's writing malware, it can be prompted into producing something like it. There are already signs that cybercriminals are working to get around the safety measures that have been put in place.


    We're not particularly picking on ChatGPT here, but pointing out what's possible once large language models (LLMs) like it are used for more sinister purposes. Indeed, it's not too difficult to imagine criminal organizations developing their own LLMs and similar tools in order to make their scams sound more convincing. And it's not just text either: Audio and video are more difficult to fake, but it's happening as well.


When it comes to your boss asking for a report urgently, or company tech support telling you to install a security patch, or your bank informing you there's a problem you need to respond to—all these potential scams rely on building up trust and sounding genuine, and that's something AI bots are getting very good at. They can produce text, audio, and video that seems natural and tailored to specific audiences, and they can do it quickly and constantly, on demand.

So is there any hope for us mere humans against the wave of these AI-powered threats? Is the only option to give up and accept our fate? Not quite. There are still ways you can minimize your chances of getting scammed by the latest technology, and they aren't so different from the precautions you should already be thinking about.

How to Guard Against AI-Powered Scams

There are two types of AI-related security threats to think about. The first involves tools such as ChatGPT or Midjourney being used to get you to install something you shouldn't, like a browser plugin. You might be tricked into paying for a service you don't need, or into using a tool that looks official but isn't.

To avoid falling into these traps, make sure you're up to date with what's happening with AI services like the ones we've mentioned, and always go to the original source first. In the case of ChatGPT, for example, there's no officially approved mobile app, and the tool is web-only. The standard rules apply when working with these apps and their spinoffs: Check their history, the reviews associated with them, and the companies behind them, just as you would when installing any new piece of software.

The second type of threat is potentially more dangerous: AI that’s used to create text, audio, or video that sounds convincingly real. The output might even be used to mimic someone you know—like the case of the voice recording purportedly from a chief executive asking for an urgent release of funds, which duped a company employee.

While the technology may have evolved, the same techniques are still being used to try to get you to do something urgently that feels slightly (or very) unusual. Take your time, double-check wherever possible using different methods (a phone call to verify an email, or vice versa), and watch out for red flags—a time limit on what you're being asked to do, or a task that's out of the ordinary.

[Image] As always, keep your software and systems up to date. (Microsoft via David Nield)

Following unexpected links in texts and emails is usually a bad idea, especially when you're being asked to log in somewhere. If your bank has apparently gotten in touch with a message, for example, go to the bank's website directly in your browser to log in, rather than following any embedded link.

Keeping your operating systems, apps, and browsers up to date is a must (and this mostly happens automatically now, so there's no excuse). The most recent browsers will protect you against a whole host of phishing and scam attacks, whether the prompt designed to dupe you has been generated by AI or not.

There's no foolproof tool for detecting AI-generated text, audio, or video at the moment, but there are certain signs to look out for: Think blurring and inconsistencies in pictures, or text that sounds generic and vague. While scammers may have scraped details about your life or your workplace from somewhere, it's unlikely that they know all the ins and outs of your operations.

In short, be cautious and question everything—that was true before the dawn of these new AI services, and it's true now. As with the face-morphing masks of the Mission: Impossible film series (which remain science fiction for now), you need to be absolutely sure that you're dealing with who you think you're dealing with before revealing anything.

Source: How ChatGPT—and Bots Like It—Can Spread Malware (may require free registration to view)

