Imagine creating artificial intelligence (AI) for your benefit, only for it to be used to create ransomware that could potentially even destroy your own data.


It sounds straight out of a Black Mirror episode, but it’s very much the reality. Recently, cybersecurity firm ESET’s researchers discovered the world’s very first AI-powered ransomware, ‘PromptLock.’ Interestingly, it was created using OpenAI’s gpt-oss-20b model and leverages Lua scripts to perform reconnaissance on the local filesystem. It not only sweeps through the target files and exfiltrates selected data, but can also encrypt and possibly destroy it.

If that didn’t scare you enough, these Lua scripts are compatible across macOS, Linux, and Windows systems. It’s a new dawn, it’s a new day, and it’s a new robot-made horror. Is this a turning point for cybersecurity?

The Story Behind PromptLock

What could be possibly more horrifying than the discovery of an AI-powered ransomware? The fact that it all started out as legitimate research! 

A group of NYU (New York University) engineers were looking at the intersection between AI, LLMs, and ransomware decided to develop a PoC (proof-of-concept) for a full-scale, AI-driven ransomware attack. The result? They came up with Ransomware 3.0, which employs AI throughout the entire ransomware life cycle. They tested two models, OpenAI’s heavier gpt-oss-120b and the lighter gpt-oss-20b.

It generated Lua scripts customised for every victim’s specific computer setup, identifying environments and mapping IT systems to determine which files were most valuable for demanding steep extortion payments. Since the ransomware is customised and targets few files specifically, it’s a lot harder to detect. Moreover, it’s also polymorphic, which means that the code generated across different systems, or even numerous times on the same system, will never be the same. Did we mention that the AI also wrote personalised ransom notes? Sigh – yes.

However, when they uploaded the malware to VirusTotal during testing to check whether it would be flagged as malicious, news stories about a new AI-powered ransomware started pouring in – and so did the messages. When the team checked, it was exactly the same prompts, functions, and code that they wrote, which is when they realised that the ESET researchers had found a binary for Ransomware 3.0 on VirusTotal – and that it was a real attack.

The Effects on Cybersecurity 

According to the NYU team, the Ransomware 3.0 binary will not function outside of a lab environment, which is definitely good news. If devices are connected to unrecognised online sources for prompts, it becomes easier to spot them in the long term. Moreover, OpenAI can’t keep calling APIs (application programming interface) on its servers every time these scripts need to be generated, which means they can’t snitch on the ransomware operators. Since the scripts are running on someone else’s system, the pitfalls of vibe coding also don’t apply either. So, the ransomware isn’t going to be stealing or encrypting any data systems – for the time being, at least. 

However, since this ransomware runs locally, it can surpass detection easier as it doesn’t employ any online resources. Moreover, the existence of this concept is nerve-wracking, as this could pave the way for the development of more powerful and high-tech AI-powered ransomware that could end up being used to hack systems.

Notwithstanding the intention behind creating PromptLock, this case certainly brings to light how AI tools could be used to automate the many stages of ransomware attacks, right from enumeration to data exfiltration, and that too at a scale and speed once thought impossible. The fact that AI-powered malware exists which could change its tactics and adapt to the environment on the fly is a harbinger of new frontiers in cyberattacks. After all, enterprises can’t wait for the next PromptLock to happen – with the technology existing and the techniques proven, criminal adoption is inevitable. 

This is the time when organisations and governments need to immediately assess how good their incident response capabilities are against AI-powered threats. They need to move beyond signature-based tools and activate behavioural detection that identify suspicious behaviour patterns, even from threats that were previously unknown. Mobilising rapid response teams can help quickly contain and analyse novel attack methods 24/7. Data protection is a critical step, where implementing robust recovery and backup systems can restore operations even after unprecedented attacks.

Ever since the news of AI-powered ransomware has broken, industry insiders have been calling for robust API usage and enhanced AI governance. The emphasis needs to be on addressing tool poisoning and jailbreaks, where hackers contaminate data sources to inject backdoors. Enterprises also need to be on the lookout for indirect prompt injection attacks in AI code assistants, audit data flows, and design AI-specific firewalls.

So, What Now?

As ingenious as cybercrimes already are, they’re only evolving and increasing in numbers and intensity. Take the example of EU (European Union) airports being hit by a cyberattack in September 2025, which disrupted automatic check-in and boarding software, causing absolute mayhem.

Ultimately, this convergence of cybercrime and AI necessitates a proactive stance. Ignoring this could lead to a chaotic future where AI assistants could be the first step of the digital chaos, reshaping how we protect and shield ourselves against what will be an increasingly intelligent adversary.

In case you missed:

Malavika Madgula is a writer and coffee lover from Mumbai, India, with a post-graduate degree in finance and an interest in the world. She can usually be found reading dystopian fiction cover to cover. Currently, she works as a travel content writer and hopes to write her own dystopian novel one day.

Comments are closed.

Share.
© Copyright Sify Technologies Ltd, 1998-2022. All rights reserved