
Cybercriminals Now Touting Criminal Artificial Intelligence

Mere months after OpenAI’s ChatGPT chatbot burst into the international consciousness, cybercriminals claim to have their own version of the technology. But instead of upending business communication, these supposed systems would supercharge criminals’ ability to draft convincing phishing emails and refine malware.

Large companies such as OpenAI, Google, and Microsoft build safety measures into their large language models (LLMs) to prevent people from abusing them. The two chatbots advertised on dark-web forums, known as WormGPT and FraudGPT, claim to strip away any such safety protections or ethical barriers. WormGPT was initially sold exclusively on HackForums for prices ranging from €500 to €5,000. The HackForums user selling WormGPT wrote: “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT claims to support an unlimited character count and code formatting, features that would help a non-native-speaking spammer create credible phishing emails. FraudGPT notably claims that it can “create undetectable malware,” find leaks and vulnerabilities, and generate scam text. For the terminally interested, the FraudGPT creator published a video of the chatbot producing a scam email.

All of this is concerning (and perhaps inevitable). However, not everyone agrees that WormGPT and FraudGPT are genuine. Some researchers suspect these tools may first be used to defraud the fraudsters. How would that work? The developers would advertise a transformative product for criminals, but the actual goal would be to scam and steal from those criminal "customers."

To his credit, the developer behind WormGPT appears to be changing the product. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the "23-year-old Portuguese programmer who created the project now says his service is slowly morphing into 'a more controlled environment.'" We shall see.

Regardless of whether these specific tools are legitimate, everyone should begin internalizing the danger: this technology can further accelerate the flood of cybersecurity threats.

Since the start of July, criminals posting on dark-web forums and marketplaces have been touting two large language models they say they’ve produced. The systems, which are said to mimic the functionalities of ChatGPT and Google’s Bard, generate text in response to the questions or prompts users enter. But unlike the LLMs made by legitimate companies, these chatbots are marketed for illegal activities.

Tags

privacy, security & data innovations