Google has identified early signs of malware that can rewrite its own code using AI, a mutation-driven threat that could outpace today’s cybersecurity defenses.


A new kind of cyber-threat is emerging, and Google is warning that it may mark the beginning of an entirely different era of malware. According to Google’s Threat Analysis Group and reports from DigWatch and Yahoo News, researchers have identified experimental malicious programs that use generative AI to rewrite their own code on the fly.

Instead of relying on fixed payloads, these early prototypes use AI models to mutate their behaviour every time they run, making traditional signature-based antivirus systems nearly useless. Google describes this as the first observable wave of “adaptive, AI-assisted malware,” which is not fully autonomous but capable of dynamic code generation and rapid evasion. Security analysts say the threat is still in the testing phase, but the intent is unmistakable: attackers are now exploring how to weaponize AI to stay one step ahead of defenders.

The warning is clear: if this continues, the cybersecurity landscape is due to change drastically.

AI-powered obfuscation

The examples flagged by Google aren’t polished, mass-distributed strains, but proof-of-concept malware discovered in underground testing channels. As Paubox and DigWatch note, these programs ask an embedded AI model to “re-create” or “refactor” parts of themselves just before execution. That means no two samples behave, or even look, the same, making detection significantly harder.
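The core problem is easy to demonstrate from the defender's side. A purely illustrative sketch (no malicious functionality, just hashing two harmless text snippets) shows why hash-based signature databases fail the moment the surface form of code changes, even when its behaviour does not:

```python
import hashlib

# Two functionally identical snippets whose surface form differs only in
# naming and layout -- the kind of change an AI "refactor" pass produces.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def total(x, y):\n    result = x + y\n    return result\n"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A signature database keyed on hashes of known samples matches only
# byte-for-byte copies, so a refactored variant never hits the database.
known_signatures = {sig_a}
print(sig_b in known_signatures)  # False: same behaviour, unseen signature
```

Every regeneration cycle produces a new hash, so a database of known signatures can never keep up by enumeration alone.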

Google says some variants were designed to regenerate key functions such as encryption routines, data exfiltration scripts, or process injection patterns. Others simply reshuffle the structure and naming while keeping the underlying logic intact. What alarms researchers is not the sophistication of these prototypes but the direction: threat actors are experimenting with AI-powered obfuscation as a fundamental design choice. It’s the difference between changing a lock occasionally and having a lock that rebuilds itself every hour.

Medianama reports that Indian cybersecurity researchers have already observed hacker forums sharing guides that explain how to pair AI tools, sometimes even stripped-down local versions of Gemini or Llama, with freshly written malware components. The intent is to bypass antivirus engines that rely on known signatures or behavioural heuristics. Google explains that most current systems detect malware by spotting patterns that remain consistent across versions.

But an AI model capable of regenerating function names, shuffling file structures, rewriting loops, or modifying API calls can push malware beyond these detection thresholds. As Medianama highlights, attackers are running regeneration cycles as frequently as once per hour. This isn’t full automation, but rather human operators using AI as a weaponized code-assistant. The result is malware that behaves like a shapeshifter, familiar enough to execute its mission, differently enough to evade inspection.

Not sentient, just dangerous

It’s important to note that Google is not claiming that malware has become sentient, independent, or “self-aware.” All the reports emphasize this distinction. The danger comes from attackers using AI as a low-cost, high-speed mutation engine. Similar to how AI chatbots produce endless variations of emails, text, or images, these systems are now generating endless variations of malicious code.

Paubox points out that some developers are even experimenting with prompt-guided regeneration: instructing the embedded model to “make detection harder” or “appear benign to static analysis.” Because generative models are designed to satisfy prompts, they often comply. Cybersecurity experts say this creates a troubling asymmetry: defenders must protect millions of endpoints, while attackers only need a model that can spin out fresh variations every cycle.

The pace and volume of these mutations could overwhelm traditional defenses long before they can adapt.

What makes this moment particularly concerning is the accessibility of these tools. According to the Yahoo News coverage, Google’s security team stresses that the threat doesn’t require advanced cyber-espionage budgets. Even mid-tier threat actors can combine open-source LLMs with basic malware templates to create “families” of constantly evolving code.

Medianama adds that Indian firms are beginning to adjust detection strategies, shifting toward AI-based scanners that analyze behaviour patterns rather than file signatures. The challenge is steep. If malware can rewrite its own structure, defenders must rely on deeper indicators like unusual memory activity or irregular system calls. That means heavier, more compute-intensive monitoring, as well as more manpower to interpret what it finds.
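The intuition behind behaviour-based scanning can be sketched with a toy heuristic. The baseline frequencies, call names, and threshold below are all illustrative assumptions, not any vendor's real detection logic; production engines model call sequences, arguments, and memory activity, not just frequencies:

```python
from collections import Counter

# Hypothetical per-process baseline of system-call frequencies, learned
# from normal operation (names and numbers are illustrative only).
baseline = Counter({"read": 120, "write": 80, "openat": 30, "mmap": 10})

def looks_suspicious(trace, rare_threshold=0.01):
    """Flag a syscall trace containing calls rare or absent in the baseline.

    A toy heuristic: it catches behaviour the process has never exhibited,
    regardless of how the code that produced it is named or structured.
    """
    total = sum(baseline.values())
    for call in trace:
        if baseline[call] / total < rare_threshold:
            return True
    return False

# Ordinary file I/O matches the baseline; a trace with calls the baseline
# has never seen (e.g. debugging or memory-permission calls) stands out.
print(looks_suspicious(["openat", "read", "write"]))             # False
print(looks_suspicious(["ptrace", "mprotect", "memfd_create"]))  # True
```

The point of the sketch is that behavioural indicators survive code mutation: however often the malware rewrites itself, it still has to perform the same actions to achieve its goal, and those actions are what this class of scanner watches.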

For emerging markets with limited cybersecurity budgets, the arrival of shape-shifting malware could widen the gap between attackers and defenders, placing critical infrastructure at risk.

The calm before the storm

Where this leads next is an open question. Google’s researchers emphasize that they have not detected widespread deployment yet, only early indicators, proofs-of-concept, and clear intent. But the pattern mirrors every major technological shift in cybercrime: capability first appears in experiments, then trickles into mainstream malware kits, and finally becomes standard practice. If AI-driven code mutation becomes normal, the era of relying on static blacklists and signatures may end altogether.

Governments and corporations will need to invest in AI-based defense systems capable of identifying malicious behaviour in real time, even when the code itself looks new. India, with its massive digital ecosystem and expanding cyber-attack surface, will have to move quickly. The threat isn’t science fiction anymore. It’s a preview of how malware will evolve when attackers wield AI not as a helper, but as an engine of constant reinvention.

With a background in Linux system administration, Nigel Pereira began his career with Symantec Antivirus Tech Support. He has now been a technology journalist for over six years, and his interests lie in cloud computing, DevOps, AI, and enterprise technologies.
