AI-enabled cyber terrorism is on the rise, with the hacking group Kimsuky using deepfake IDs, phishing and military deception…
Artificial Intelligence stopped being a mere novelty a while ago, but now it has reached a point where it is part of the operational machinery of state-sponsored espionage!
Cybersecurity researchers have uncovered a recent campaign by the North Korean hacking group Kimsuky, which used ChatGPT, among other generative AI tools, to forge military and government IDs in a phishing attack targeting South Korea.

This marks a shift in how advanced persistent threats (APTs) are enhancing and automating cybercrime and espionage techniques. Questions are also being raised about safeguards, detection and what organisations and nations must do to defend against this kind of digital deception.
How the Kimsuky Campaign Was Executed
The attackers used ChatGPT to generate a South Korean military ID. AI-generated images and forged documents bearing real logos made the phishing emails seem authentic. The attack precisely targeted South Korean journalists and researchers focused on North Korea.
The emails carried malicious attachments such as compressed archives and .lnk shortcut files. The deepfake ID acted as a trust anchor, nudging the recipient to open an attachment that would install backdoors, exfiltrate data or delay execution to evade sandbox detection.
OpenAI has restrictions against generating ID documents and other forgeries, so the attackers used “prompt manipulation” (also referred to as jailbreak techniques) to bypass these safeguards. By rewording their prompts, they were able to generate the required content.

Broader Implications of the Cyberattack
Large language models (LLMs) and image-generation tools are becoming more accessible by the day. Anyone with minimal technical knowledge can now generate realistic content that fuels phishing scams and espionage.
Kimsuky is a state-backed hacker group that has been associated with espionage against diplomatic, government and defence sectors for years. But this is the first time it has been seen using AI as part of an actual operation.
Traditional phishing filters and malware-detection techniques are becoming less effective with the rise of generative AI and LLMs.

Defensive Measures to Avoid Such Threats
Employees in sensitive roles need to be trained on how social engineering is being upgraded with AI. They need to be taught to verify sender domain names, look for anomalies and double-check with external sources when asked for information. AI tools should also be used to detect manipulated or generated content.
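Parts of that sender check can be automated. The Python sketch below is purely illustrative: the allowlist of trusted domains is invented for the example, and it is not any vendor's actual filter.

```python
# Hypothetical sketch of an automated sender-domain check; the
# allowlist below is invented for illustration.
from email import message_from_string
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-ministry.go.kr", "example-agency.kr"}  # hypothetical

def sender_domain_anomalies(raw_email: str) -> list[str]:
    """Flag simple red signs in an email's sender headers."""
    msg = message_from_string(raw_email)
    warnings = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))

    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()

    if from_domain not in TRUSTED_DOMAINS:
        warnings.append(f"Sender domain not on the allowlist: {from_domain!r}")
    if reply_domain and reply_domain != from_domain:
        warnings.append(f"Reply-To domain differs from sender: {reply_domain!r}")
    return warnings
```

A real mail gateway would check authentication results (SPF, DKIM, DMARC) rather than trusting headers alone, but even this level of automation catches the crude mismatches that phishing relies on.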
The scope of attachments and scripts needs to be limited, and only trusted sources and verified identities should be allowed. Genians, the South Korean firm that uncovered the Kimsuky campaign, has urged the use of Endpoint Detection and Response (EDR), a cybersecurity technology that continuously monitors devices like smartphones and computers.
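As a rough sketch of such an attachment policy, the Python example below quarantines the file types used in this campaign unless the sender is verified. The extension lists are illustrative examples, not a complete blocklist, and this is not Genians' actual recommendation in code form.

```python
# Hypothetical sketch of an attachment policy; the extension lists
# are illustrative examples, not a complete blocklist.
from email.message import EmailMessage
from pathlib import PurePosixPath

BLOCKED_EXTENSIONS = {".lnk", ".js", ".vbs", ".scr"}   # never allowed
RISKY_ARCHIVES = {".zip", ".rar", ".7z"}               # verified senders only

def should_quarantine(msg: EmailMessage, sender_verified: bool) -> bool:
    """Return True if any attachment violates the policy."""
    for part in msg.iter_attachments():
        ext = PurePosixPath((part.get_filename() or "").lower()).suffix
        if ext in BLOCKED_EXTENSIONS:
            return True  # shortcut/script payloads, as used in this campaign
        if ext in RISKY_ARCHIVES and not sender_verified:
            return True  # compressed archives only from verified identities
    return False
```

File-type filtering is only a first gate; EDR complements it by watching what actually runs on the endpoint after delivery.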
Lastly, AI companies need to refine their guardrails and have stricter measures in place to avoid misuse.

The Last Word
The Kimsuky campaign is a clear warning that we have entered a new age of cyber warfare. AI tools have become enablers of sharper deception. And this may not even be the first campaign of its kind; it is merely the first cross-border one with large-scale implications to be detected.
If defence mechanisms are not strengthened and stricter guardrails are not put in place soon, the damage could be massive. Leaks, espionage, manipulation of public opinion, compromise of critical infrastructure… anything has become possible.
What was once the stuff of bestselling science fiction has now become a scary reality. The question is not if more attacks will happen but when. And whether we are prepared for them.