You could access an “evil” version of OpenAI’s ChatGPT today – but it will cost you. And depending on where you live, it may not be legal either.
However, it is a bit difficult to get access. You need to find the right web forums with the right users. One of these users may be tasked with marketing a private and powerful Large Language Model (LLM). You connect with them through an encrypted messaging service like Telegram, where they ask you for a few hundred dollars in cryptocurrency in exchange for the LLM.
Once you have access to it, however, you can use it for all the things that ChatGPT or Google’s Bard prohibit: having conversations about illegal or ethically dubious topics, learning how to cook meth or build pipe bombs, or even using it to advance a cybercriminal enterprise through phishing attacks.
“We have people developing LLMs that are designed to write more convincing phishing email scams or allow them to code new types of malware because they are trained on the code of previously available malware,” Dominic Sellitto, a cybersecurity and digital privacy researcher at the University at Buffalo, told The Daily Beast. “Both things make the attacks more effective because they are trained based on knowledge of the attacks that preceded them.”
Over the past year, we’ve seen the rapid, unprecedented rise of generative artificial intelligence. The technology was popularized by the likes of ChatGPT and Bard – and it has sparked plenty of criticism and concern about its potential impact on the work of writers, programmers, artists, actors, and more.
While we’re still grappling with the full impact of these models – including the damage they can cause through job displacement and bias – experts are beginning to sound the alarm about the growing number of black market AIs tailored to cybercrime. In fact, the past year has seen a veritable cottage industry of LLMs designed for the express purpose of writing malware and assisting in phishing attacks.
These models are powerful, difficult to monitor, and growing in number. They also mark the emergence of a new battleground in the fight against cybercrime – one that goes beyond text generators like ChatGPT and into the realm of images, audio, and video.
“We are, in many ways, blurring the lines between what is artificially created and what is not,” Sellitto explained. “The same goes for written text and the same goes for images and everything in between.”
Phishing for trouble
U.S. consumers lose nearly $8.8 billion every year to phishing emails — and you’ve probably seen them in your inbox before. These are messages purporting to come from your bank or even agencies like the Social Security Administration, urging you to provide your financial information to solve a made-up crisis. They may contain harmless-looking links that actually download malware or viruses, allowing criminals to steal sensitive information directly from your computer.
Luckily, they’re pretty easy to catch for the most part. If they haven’t already landed in your spam folder, you can identify them just by the language – choppy and grammatically incorrect sentences and phrasing that a reputable financial institution would never use. This is primarily because many of these phishing attacks come from outside English-speaking countries, such as Russia.
But with the introduction of ChatGPT, which triggered a veritable generative AI boom, that changed completely.
“The technology hasn’t always been available in digital black markets,” Daniel Kelley, a former black hat hacker and cybersecurity consultant, told The Daily Beast. “It really started when ChatGPT became mainstream. There were some basic copywriting tools that might have used machine learning, but nothing impressive.”
Kelley explained that there is a wide range of these LLMs, with variants such as BlackHatGPT, WolfGPT, and EvilGPT. Despite the nefarious-sounding names, he said that many of these models are simply examples of AI jailbreaks, a term describing clever prompting of existing LLMs like ChatGPT to produce a desired output. These models are then wrapped in a custom interface that makes it seem like a different chatbot – when in reality it’s just ChatGPT.
That doesn’t mean they are harmless. In fact, one model stands out to Kelley as one of the more nefarious and legitimate ones: WormGPT, an LLM designed specifically for cybercrime that “allows you to do all sorts of illegal things and easily sell them online in the future,” according to a description of it on a forum that markets the model.
“Anything you can imagine related to Black Hat can be done with WormGPT, allowing anyone to access malicious activity without ever leaving the comfort of their home,” the description reads. “WormGPT also offers anonymity, meaning anyone can carry out illegal activities without being tracked.”
“The only truly malicious one that, in my opinion, actually used a legitimate custom LLM was WormGPT,” Kelley said. “As far as I know, that was the first one that came out and actually became mainstream.”
Both Kelley and Sellitto said WormGPT could be used effectively in business email compromise (BEC) attacks, a type of phishing attack that involves stealing information from company employees by impersonating a supervisor or someone else in authority. The language the model produces is extremely clean and, thanks to its precise grammar and sentence structure, much more difficult to flag at first glance.
Plus, virtually anyone with an internet connection can download it, making it easy to distribute. It’s like a service offering same-day delivery of guns and ski masks – except these guns and ski masks are marketed and tailored specifically to criminals.
“It’s more accessible because ultimately I don’t have to be an expert at crafting sneaky emails. I can just type the prompt,” Sellitto said. “That’s the promise of LLMs on the good side, and it’s true on the dark side too.”
Knowledge is power
Kelley says he has “100 percent” seen an increase in such generative AI in digital black markets since ChatGPT was released. They are available not only on black hat hacker forums, but also on so-called darknet markets, illegal online marketplaces where users can buy everything from drugs to hitmen to powerful LLMs.
Adding further fuel to this fire are companies like OpenAI and Meta releasing their own open-source models. These AI tools are publicly available and can be modified by anyone, which means these black market LLMs are becoming more powerful over time. “I think it will increase as technology continues to evolve and cybercriminals eventually come up with more use cases,” Kelley said. He added: “It will have an impact on normal people.”
When it comes to protecting everyday consumers, there is only so much policymakers can or will do. While the US Congress has held several hearings on the development of AI since the release of ChatGPT, there is still no concrete regulation from Washington. Given the government’s track record of slow responses to new technologies, it’s a safe bet that it won’t be able to keep up with black market AI for a while, if ever.
Ultimately, the best way for the public to protect themselves from the dangers of these models is through the simple but effective tactics we’ve already learned when it comes to cybercrime: educate yourself, be wary of strange emails, and think twice before clicking on suspicious-looking links.
It’s tried and true – the best tool we have at our disposal to defend ourselves against criminals armed with a jailbroken version of ChatGPT and trying to gain access to our banking details. As AI competition advances at breakneck speed, it may also be the only tool at our disposal.
“What we’re seeing is not a passing fad,” Sellitto said. “Generative AI, whether we like it or not, is really here to stay. As consumers, professionals and organizations, we all need to come together and find ways to deal with this thoughtfully and ethically.”