As “Evil LLMs” like FraudGPT and WormGPT hit the dark web, cybercrime is no longer about skill, just intent.


After proving useful to writers, coders, and students, generative AI has inevitably slipped into the hands of cybercriminals. In the past year, security analysts have begun tracking a troubling new trend: malicious chatbots called “Evil LLMs.” The best-known examples, FraudGPT and WormGPT, look like regular AI assistants on the surface but are built for deception.

Instead of essays or debug scripts, they produce phishing messages, fake IDs, and malware. Reports from sources such as The420.in, VarIndia, and DQIndia describe how these AI systems are being traded on dark-web forums, sometimes with pricing tiers starting as low as $100, and even come with tech support, like any legitimate software product.

Developers pitch them as “uncensored” tools or “limit-free” AI systems, yet their purpose is purely criminal. What we’re seeing is AI’s next mutation: intelligence without restraint, innovation stripped of ethics, and proof that technology learns whatever the world teaches it.

The new face of cybercrime

What makes these tools so disruptive isn’t just their intelligence, but their accessibility. With conventional malware or black-hat operations, you needed coding skills or at least some data-science expertise to cause any significant damage. With FraudGPT, you only need imagination and a prompt. DQIndia’s recent report noted that even small-time scammers are now creating personalized phishing campaigns using language models that mimic real communication patterns.

WormGPT, based on the older open-source GPT-J model, takes things a step further by writing polymorphic malware: code that constantly rewrites itself to evade antivirus detection. The net result is that cybercrime, once limited by skill, is now limited only by creativity.

Both FraudGPT and WormGPT represent what’s being dubbed the industrialization of digital fraud. Built on open-source frameworks and stripped of all safety filters, these models are optimized for phishing, identity theft, and code injection. Alloy.com and Sardine.ai report that such tools can craft scam emails capable of bypassing spam filters by studying how legitimate messages are structured.

Some versions can even scrape online profiles to personalize the attack, making it nearly impossible to tell a fake message from the real thing. The420.in adds that hackers are now experimenting with “prompt injection” attacks, where they manipulate mainstream AI assistants into revealing hidden data or executing unauthorized commands.

When combined with ransomware automation, that creates a near-autonomous threat engine capable of adapting in real time.

India’s digital shield

What’s emerging is a full-blown AI arms race. Security firms are racing to build counter-AI systems: models that can detect or neutralize malicious prompts before they cause harm. But experts admit the challenge is complex. Just as large language models learn from public data, their “evil twins” thrive on leaked datasets and jailbroken versions of legitimate models. Each patch leads to a new exploit. Meanwhile, the human factor remains the weakest link.

AI-generated scams succeed not because they outsmart technology, but because they exploit human trust. Phishing, deepfake audio, and impersonation scams are already blending into one continuous spectrum of deception, forcing regulators to rethink cybersecurity from the ground up.

For India, the timing of this new AI-fueled crime wave couldn’t be worse. As VarIndia and DQIndia point out, the nation’s connected infrastructure, from UPI payments to DigiLocker and Aadhaar-linked services, gives cybercriminals an increasingly vast attack surface. While the country has made huge strides in strengthening its digital backbone, the next wave of threats won’t be stopped by firewalls alone.

AI-driven attacks demand AI-level defense. That’s why policymakers are debating updates to India’s cybercrime laws to specifically address generative models, while researchers work on homegrown, Aatmanirbhar approaches to AI security. A few Indian startups are already flipping the equation, using large language models to spot deepfakes, detect scams, and track threat actors in real time.
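
To make that defensive idea concrete, here is a minimal sketch of how a language model can be asked to flag a suspicious message. It assumes the OpenAI Python SDK and an OpenAI-compatible chat endpoint; the model name, the helper function, and the sample message are illustrative only, not a description of any particular startup’s product.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def looks_like_phishing(message: str) -> bool:
    """Ask a general-purpose model for a YES/NO verdict on a message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fraud analyst. Answer only YES or NO: does the "
                    "following message look like a phishing or scam attempt?"
                ),
            },
            {"role": "user", "content": message},
        ],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("YES")

# Example: a fabricated UPI/KYC scam template, for illustration only
sample = (
    "Dear customer, your UPI account will be suspended today. "
    "Click http://example.com/verify to update your KYC immediately."
)
print("Suspicious" if looks_like_phishing(sample) else "Looks clean")

In practice, defenders layer this kind of classification with URL reputation checks and sender verification rather than trusting a single model verdict.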

The enemy in the mirror

AI’s dark turn isn’t a glitch like the hallucinations we covered in a previous post; it’s a mirror. FraudGPT and WormGPT show that every innovation can spawn an equal and opposite misuse. These models prove how easily ethical safeguards can be circumvented, how quickly open-source knowledge can be repurposed, and how technology meant to protect can also deceive.

For cybersecurity professionals in particular, the task ahead is developing a deeper understanding of how AI can be weaponized and how to prevent it. For India, the fight will test both technical capacity and regulatory foresight. In the long run, it’s not enough to keep building smarter models; we need to build more responsible ones.

Evil LLMs might have been born in hidden forums, but the war they’ve started, between creation and corruption, will play out everywhere an internet connection exists.


With a background in Linux system administration, Nigel Pereira began his career with Symantec Antivirus Tech Support. He has now been a technology journalist for over 6 years and his interests lie in Cloud Computing, DevOps, AI, and enterprise technologies.
