If you wanted to, you could access an “evil” version of OpenAI’s ChatGPT today, though it’s going to cost you. It also may not be legal, depending on where you live.
However, getting access is a bit tricky. You’ll have to find the right web forums with the right users. One of those users might have a post marketing a private and powerful large language model (LLM). You’ll connect with them on an encrypted messaging service like Telegram, where they’ll ask you for a few hundred dollars in cryptocurrency in exchange for access to the LLM.
Once you have access, though, you’ll be able to use it for everything that ChatGPT or Google’s Bard prohibits: having conversations about any illicit or ethically dubious topic under the sun, learning how to cook meth or build pipe bombs, or even fueling a cybercriminal enterprise by way of phishing schemes.
“We’ve got folks who are building LLMs that are designed to write more convincing phishing email scams or allowing them to code new types of malware because they’re trained off of the code from previously available malware,” Dominic Sellitto, a cybersecurity and digital privacy researcher at the University of Buffalo, told The Daily Beast. “Both of these things make the attacks more potent, because they’re trained off of the knowledge of the attacks that came before them.”
Over the past year, generative artificial intelligence has risen at a pace unlike anything before it. The technology has been popularized by the likes of ChatGPT and Bard, and it has drawn a fair amount of criticism and concern over its potential to disrupt the jobs of writers, coders, artists, actors, and more.
While we’re still coming to terms with the full impact of these models, including the harms they can cause by way of job displacement and bias, experts are starting to sound the alarm on the growing number of black-market AIs tailor-made for cybercrime. In fact, the past year has seen a veritable cottage industry of LLMs created for the express purpose of coding malware and assisting in phishing attacks.
These models are powerful, hard to police, and growing in number. They also mark the emergence of a new battleground in the fight against cybercrime—one that extends even beyond text generators like ChatGPT, and bleeds into the realm of images, audio, and video as well.
“We’re blurring the boundaries in many ways between what is artificially generated and what isn’t,” Sellitto explained. “The same goes for the written text, and the same goes for images and everything in between.”
Phishing for Trouble
U.S. consumers lose nearly $8.8 billion each year to phishing emails, and you’ve probably seen them in your inbox before. These are the messages that claim to be from your bank, or even from places like the Social Security Administration, urgently requesting your financial information in order to fix a made-up crisis. They might include harmless-looking links that actually download malware or viruses, allowing bad actors to steal sensitive information straight from your computer.
Luckily, they’re pretty easy to catch for the most part. If they haven’t already fallen into your spam folder, you can often spot them from the language alone: choppy, grammatically incorrect sentences and wording that a legitimate financial institution would never use. That’s largely because many of these phishing attacks originate outside English-speaking countries, in places like Russia.
However, with the launch of ChatGPT ushering in a generative AI boom, all of that has changed.
“The technology hasn’t always been available on digital black markets,” Daniel Kelley, a former black hat computer hacker and cybersecurity consultant, told The Daily Beast. “It primarily started when ChatGPT became mainstream. There were some basic text generation tools that might have used machine learning but nothing impressive.”
Kelley explained that there is a wide range of these LLMs, with variants like BlackHatGPT, WolfGPT, and EvilGPT. Despite the nefarious-sounding names, he said that many of these models are simply examples of AI jailbreaks, a term for cleverly prompting existing LLMs like ChatGPT into producing a desired output. The jailbroken model is then wrapped in a custom interface that makes it seem like a different chatbot, when, in reality, it’s just ChatGPT underneath.
That isn’t to say that they’re harmless. In fact, one model stands out to Kelley as being one of the more nefarious and legitimate ones: WormGPT, an LLM that’s designed specifically for cybercrime and “lets you do all sorts of illegal stuff and easily sell it online in the future,” according to one description of it on a forum marketing the model.
“Everything black hat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” the description said. “WormGPT also offers anonymity, meaning that anyone can carry out illegal activities without being traced.”
“The only real malicious one that I’ve come across that I felt actually used a legitimate custom LLM was WormGPT,” Kelley said. “That was the first one, as far as I’m aware, that hit the market and actually went mainstream.”
Both Kelley and Sellitto said that WormGPT could be used effectively in business email compromise (BEC) attacks, a type of phishing scheme in which attackers steal information from company employees by posing as a higher-up or someone else with authority. The language produced by the model is extremely clean, with accurate grammar and sentence structure, making it much harder to detect at a glance.
Also, practically anyone with an internet connection can download it, allowing it to proliferate with ease. It’s akin to a service that offers same-day delivery on guns and ski masks, only these guns and ski masks are specifically marketed to, and tailor-made for, criminals.
“It’s more accessible, because at the end of the day, I don’t need to be an expert in crafting devious emails. I can just type the prompt,” Sellitto said. “That’s the promise of LLMs on the good side, and it also applies to the dark side.”
Knowledge Is Power
Since the release of ChatGPT, Kelley says that he’s “100 percent” seen a rise in such generative AI on digital black markets. Not only are these models available on forums for black hat hackers, but they’re also on so-called darknet markets: illegal online marketplaces where users can purchase anything from drugs to contract killers to powerful LLMs.
Adding fuel to this fire are companies like OpenAI and Meta releasing their own open-source models. These AI tools are publicly accessible and can be modified by anyone, which means black-market LLMs will only become more powerful and more widespread as time goes on. “I think it will intensify because the technology is evolving, and eventually cybercriminals will work out more use cases,” Kelley said. He added, “It will impact regular people.”
When it comes to protecting everyday consumers, there’s only so much policymakers can, or will, do. While the U.S. Congress has held multiple hearings on AI development in the wake of ChatGPT’s release, there’s yet to be any concrete regulation from Washington. Given the government’s track record of responding to emerging tech at a glacial pace, it’s a sure bet that regulators won’t catch up with black-market AI for a while, if ever.
Ultimately, then, the best way for the public to guard against the dangers these models pose is the same simple but effective set of tactics we’ve long been taught about cybercrime: educate yourself, be wary of strange emails, and don’t click on those fishy-looking links.
It’s tried and true, and possibly the best tool we have to fight back against bad actors armed with a jailbroken version of ChatGPT trying to get at our banking information. As the AI race surges on at breakneck speed, it might also be the only one.
“What we’re seeing isn’t a passing fad,” Sellitto said. “Generative AI, whether we like it or not, is really here to stay. So as consumers, professionals, and organizations, we all need to come together and start figuring out ways that we can thoughtfully and ethically engage with it.”