Researchers Warn: AI Could Automate Every Stage of Ransomware

By simulating how AI can map systems, steal files, and craft ransom notes on its own, researchers warn that cheap, automated ransomware could soon outpace current defenses—making early preparation essential.

Research: Ransomware 3.0: Self-Composing and LLM-Orchestrated. Image Credit: janews / Shutterstock

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.

Criminals can use artificial intelligence, specifically large language models, to autonomously carry out ransomware attacks that steal personal files and demand payment, handling every step from breaking into computer systems to writing threatening messages to victims, according to new research from NYU Tandon School of Engineering.

The study serves as an early warning to help defenders prepare countermeasures before bad actors adopt these AI-powered techniques.

Ransomware 3.0: a proof of concept

A simulation of a malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks—mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes—across personal computers, enterprise servers, and industrial control systems.

This system, which the researchers call "Ransomware 3.0," recently gained widespread attention as "PromptLock," a name chosen by cybersecurity firm ESET after its analysts discovered it on VirusTotal, an online platform where researchers check whether files are flagged as malicious.

The Tandon researchers had uploaded their prototype to VirusTotal during testing, and the files there appeared to be functional ransomware code with no indication of their academic origin. ESET initially believed it had found the first AI-powered ransomware in active development by malicious actors. The prototype is indeed the first AI-powered ransomware, but it is a proof of concept that does not function outside the team's contained lab environment.

"The cybersecurity community's immediate concern when our prototype was discovered shows how seriously we must take AI-enabled threats," said Md Raz, a doctoral candidate in the Electrical and Computer Engineering Department who is the lead author on the Ransomware 3.0 paper the team published publicly. "While the initial alarm was based on an erroneous belief that our prototype was in-the-wild ransomware and not laboratory proof-of-concept research, it demonstrates that these systems are sophisticated enough to deceive security experts into thinking they're real malware from attack groups."

How it works

Rather than carrying traditional pre-written attack code, the malware embeds natural-language instructions inside the program itself. When activated, it contacts AI language models to generate Lua scripts tailored to each victim's specific computer setup, using open-source models that lack the safety restrictions of commercial AI services.
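
To make the pattern concrete for defenders, here is a minimal, heavily simplified sketch of runtime prompt-to-script generation against a locally hosted open-source model via the Ollama HTTP API. The endpoint, model name, and deliberately harmless example prompt are assumptions for illustration only; nothing here resembles the paper's actual prompts, and the snippet stops at printing the generated text rather than executing anything.

```python
import json
import urllib.request

# Sketch of the runtime pattern the paper describes: a natural-language
# prompt embedded in the program is sent to a locally hosted open-source
# model, which returns freshly generated script text on every run.
# Assumptions: Ollama's default local endpoint and a generic model name.
OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local endpoint
EMBEDDED_PROMPT = "Write a short Lua script that prints the current date."

payload = json.dumps({
    "model": "llama3",        # hypothetical local model name
    "prompt": EMBEDDED_PROMPT,
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    generated_script = json.loads(resp.read())["response"]

# Each run can yield different code for the same prompt, which is exactly
# what undermines signature matching. The text is printed, never executed.
print(generated_script)
```

The key defensive takeaway is architectural: the payload does not exist until runtime, so there is no static artifact to fingerprint in advance.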

Each execution produces unique attack code despite identical starting prompts, creating a significant challenge for cybersecurity defenses. Traditional security software relies on detecting known malware signatures or behavioral patterns; however, AI-generated attacks produce variable code and execution behaviors that can evade these detection systems entirely.
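
A toy illustration of why signature matching fails here: two scripts that do exactly the same thing but differ trivially in naming produce completely unrelated cryptographic fingerprints. The Lua fragments below are invented for the example, not taken from the paper.

```python
import hashlib

# Two functionally identical scripts, as a model might generate on
# different runs; only a variable name differs (invented examples).
script_a = b'local files = list("/docs"); report(files)'
script_b = b'local found = list("/docs"); report(found)'

for name, script in (("run A", script_a), ("run B", script_b)):
    print(name, hashlib.sha256(script).hexdigest()[:16])
# The digests share nothing, so a signature built from run A
# never matches run B.
```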

Testing across the three representative environments showed that both open-source models evaluated were highly effective at system mapping and correctly flagged 63–96% of sensitive files, depending on the environment type. The AI-generated scripts also proved cross-platform, running on Windows, Linux, and embedded Raspberry Pi systems without modification.

Economic implications

Traditional ransomware campaigns require skilled development teams, custom malware creation, and substantial infrastructure investment. By contrast, the prototype consumed approximately 23,000 AI tokens per complete attack run, equivalent to roughly $0.70 at commercial API rates for flagship models; running open-source models locally eliminates even that cost.
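
A back-of-the-envelope check of that figure, assuming a flagship-model API rate of roughly $30 per million tokens (an assumed rate; the article quotes only the totals):

```python
# Cost sanity check for the reported per-attack token usage.
TOKENS_PER_ATTACK = 23_000          # reported usage per complete attack run
USD_PER_MILLION_TOKENS = 30.0       # assumed flagship-model API rate

cost = TOKENS_PER_ATTACK / 1_000_000 * USD_PER_MILLION_TOKENS
print(f"~${cost:.2f} per attack")   # ~$0.69, consistent with the quoted $0.70
```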

This cost reduction could enable less sophisticated actors to conduct advanced campaigns previously requiring specialized technical skills. The system's ability to generate personalized extortion messages referencing discovered files could increase psychological pressure on victims compared to generic ransom demands.

Ethical safeguards

The researchers conducted their work in accordance with institutional ethical guidelines within controlled laboratory environments. The published paper provides critical technical details that can help the broader cybersecurity community understand this emerging threat model and develop stronger defenses.

The researchers recommend monitoring sensitive file access patterns, controlling outbound AI service connections, and developing detection capabilities designed explicitly for AI-generated attack behaviors.
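
As one concrete, deliberately simple example of the outbound-connection control the researchers suggest, the sketch below scans proxy-style log lines for destinations associated with hosted AI APIs or a local inference server. The domain watchlist and log format are assumptions for illustration, not details from the paper.

```python
import re

# Hypothetical watchlist: hosted AI API domains plus Ollama's default
# local inference port. Real deployments would maintain this centrally.
AI_ENDPOINTS = re.compile(
    r"(api\.openai\.com|api\.anthropic\.com|"
    r"generativelanguage\.googleapis\.com|:11434\b)"
)

def flag_ai_egress(log_lines):
    """Yield log lines whose destination matches a known AI endpoint."""
    for line in log_lines:
        if AI_ENDPOINTS.search(line):
            yield line

# Assumed proxy-style log format, for illustration only.
sample = [
    "2025-08-28T10:01 CONNECT api.openai.com:443 from 10.0.0.12",
    "2025-08-28T10:02 CONNECT internal.example.com:443 from 10.0.0.12",
    "2025-08-28T10:03 CONNECT 127.0.0.1:11434 from 10.0.0.12",
]
for hit in flag_ai_egress(sample):
    print("ALERT:", hit)
```

Alerting on unexpected AI-service egress from hosts that have no business reason to reach a model is a low-cost first step; it does not stop an attack by itself, but it surfaces the one network dependency this class of malware cannot avoid.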

Research team and funding

The paper's senior authors are Ramesh Karri, ECE Professor and department chair and a faculty member of the Center for Advanced Technology in Telecommunications (CATT) and the NYU Center for Cybersecurity; and Farshad Khorrami, ECE Professor and CATT faculty member.

In addition to lead author Raz, the other authors include ECE Ph.D. candidate Meet Udeshi; ECE Postdoctoral Scholar Venkata Sai Charan Putrevu; and ECE Senior Research Scientist Prashanth Krishnamurthy.

The work was supported by grants from the Department of Energy, the National Science Foundation, and the State of New York via Empire State Development's Division of Science, Technology, and Innovation.

Journal reference:
  • Preliminary scientific report. Raz, Md, et al. "Ransomware 3.0: Self-Composing and LLM-Orchestrated." arXiv, 28 Aug. 2025. DOI: 10.48550/arXiv.2508.20444, https://arxiv.org/abs/2508.20444
