The Nigerian princes certainly had a long and fantastic run, but now they’re being replaced by AI (artificial intelligence)…
Gone are the days when scammers had to rely on their own shaky grasp of language and process. AI is fuelling the next wave of cybercrime, with attackers employing generative AI to automate, scale, and even personalise their scams. According to Hoxhunt’s report on phishing trends, anywhere between 0.7% and 4.7% of nearly 4,00,000 malicious phishing e-mails analysed had been crafted by AI. It might be a tiny fraction, but that’s just for now.

How Phishing Has Changed With AI
AI phishing harnesses generative AI to help scammers and hackers execute scams that are far more convincing to potential victims, and on a mass scale. The objective is usually to evoke curiosity, panic, or fear, using social engineering to get victims to share personal and financial data such as passwords, download malware-ridden files, or click on malicious links.
And it’s been working a little too well: scammers raked in more than USD 1 trillion in the 12 months before November 2024 alone. According to cybersecurity firm SlashNext, the second half of 2024 saw a 202% increase in the number of phishing messages per 1,000 e-mail inboxes.
Clearly, hackers have been streamlining and escalating phishing tactics using AI.

What Does An AI-Powered Phishing Attack Look Like?
Traditional red flags, such as generic greetings and poor grammar, are no longer reliable indicators of a fraudulent message. Today’s phishing e-mails are well-crafted and personalised, closely mimicking legitimate communication from trusted sources. They even include familiar touches, such as references to the victim’s recent interactions, interests, or purchases, and that level of personalisation greatly increases the likelihood of the scam succeeding.
AI has gone a step further with polymorphic phishing, an advanced form of e-mail phishing campaign. It randomises e-mail components such as the sender’s display name, the subject line, and the body, producing many near-identical messages that differ only in minor details, which helps them slip past filters that look for repeated, identical e-mails. These personalised, evasive messages have resulted in higher attack success rates.
Besides phishing e-mails and texts, there are also vishing attacks over phone calls, in which cybercriminals impersonate individuals or companies the victim trusts to extract information or money. Vishing has become all the more dangerous with AI tools: by analysing a familiar voice from audio or video recordings, criminals can clone it and speak as if they were the target’s boss, friend, or relative, making these attacks far more convincing and vicious.

More recently, there’s been an uptick in advanced grandparent scams using vishing, where criminals trick elderly people into believing that a family member is in danger. If you’re thinking that only the more vulnerable fall prey to such scams, think again. In January 2020, cybercriminals used the AI-cloned voice of a company director to deceive a Japanese company’s Hong Kong bank manager into authorising USD 35 million in global fund transfers!
AI attacks also involve spear phishing, where cybercriminals target specific individuals or organisations using information about them that has already been leaked or is publicly available, such as names, e-mail addresses, and phone numbers. And of course, who can ignore the rampant proliferation of deepfakes, which are becoming harder to spot as AI technology advances?

Can We Protect Ourselves From AI-Powered Phishing Attacks?
First of all, never download attachments or click on links in unsolicited e-mail messages; this alone goes a long way towards keeping your devices and private information safe. Likewise, ignore any unsolicited requests for your personal information, whether they come over the phone, via text, or through e-mail. Even if a familiar company or person claims to be reaching out to you, consider why they’d be asking for such information in the first place.
Does a deal sound too good to be true? Is it a product or experience you’ve been waiting for or browsing for online? It could be a scam.
A good way to keep your family protected from such attacks is to agree on a safe word that everyone can use to verify a suspicious caller’s identity. For instance, if a hacker calls you or a family member claiming to be someone you all know, ask them for the safe word. Just make sure the safe word isn’t something AI or cybercriminals could guess by researching the family online.

And of course, one of the best ways to tackle AI-enabled phishing is to use a password manager: because it autofills credentials only on the genuine sites it has saved them for, a convincing look-alike page gives itself away when the autofill never appears.
In a world where AI is revolutionising everything from finance to healthcare, it’s no surprise that cybercriminals have turned to it too. In the wrong hands, AI is an ever-evolving weapon, terrifyingly effective in ways one couldn’t have imagined. Judging from what’s already out there in the form of AI-powered phishing, it might seem impossible to stay ahead of such scams, but falling victim to them isn’t a foregone conclusion.
There are still many ways to protect yourself from these advanced cyberattacks, and most of them come down to staying extra cautious about what you click on and what information you give out.