A new kind of ransomware may be on the horizon, and it’s powered by a local AI model. Security researchers have analyzed a prototype dubbed PromptLock, a project that uses a large language model to generate the very scripts it needs to steal and encrypt data. The sample turned up on VirusTotal on August 25 and, for now, appears to be a proof-of-concept rather than an active attack.
What sets PromptLock apart is how it puts the model to work. A Golang-based loader carries hardcoded prompts, feeds them to a locally hosted model through the Ollama API, and asks it to produce Lua scripts on demand. Those scripts then enumerate files, search for sensitive information, exfiltrate selected data, and encrypt the remainder across Windows, macOS, and Linux systems. The encryption routine references the SPECK 128-bit cipher. The concept is cross-platform, modular, and engineered to adapt.
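To make that traffic pattern concrete for defenders, here is a minimal sketch of the request shape a client sends to a local Ollama instance's /api/generate endpoint (port 11434 is Ollama's documented default). The model name comes from the researchers' analysis; the prompt here is a harmless placeholder, not the malware's actual instructions.

```python
import json

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> str:
    """Serialize the JSON body of an Ollama /api/generate request."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single JSON response, not a token stream
    })

# Placeholder prompt for illustration only.
body = build_generate_request("gpt-oss:20b", "<instructions to the model>")
```

Because this traffic is plain HTTP to localhost (or to a tunneled host), it never crosses the boundary that cloud-AI egress monitoring watches, which is exactly what makes the local-model design hard to spot.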
Two design choices make this approach especially tricky for defenders. First, the malware drives an on-premises model (noted as gpt-oss:20b via Ollama), which means there may be no telltale calls to external AI services for security tools to flag. Second, because large language models are inherently non-deterministic, they can generate different Lua scripts each time. That variability erodes static indicators of compromise and undermines signature-based detection.
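The second point is easy to demonstrate. The two Lua snippets below are functionally identical but differ trivially in naming and whitespace, the kind of variation a model produces on every run; their hashes are unrelated, so a hash-based indicator of compromise matches at most one variant. (The snippets are illustrative, not actual PromptLock output.)

```python
import hashlib

# Two functionally equivalent Lua loops with cosmetic differences.
variant_a = "for _, f in ipairs(files) do process(f) end"
variant_b = "for _, file in ipairs(files) do  process(file) end"

def sha256(script: str) -> str:
    """Hex SHA-256 of a script's text, as a static IOC would record it."""
    return hashlib.sha256(script.encode()).hexdigest()

# Same behavior, unrelated hashes: a signature for one misses the other.
print(sha256(variant_a) == sha256(variant_b))  # False
```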
The researchers also point out that attackers wouldn’t necessarily need to plant a large model inside a victim’s network. A tunnel or proxy to an external Ollama host could achieve the same effect while keeping the footprint lean. The prototype even includes instructions prompting the model to draft a ransom note, and it uses a famous Bitcoin address linked to Satoshi Nakamoto as a placeholder. A data-destruction component appears incomplete.
There’s no evidence that PromptLock is being used in the wild today. The discovery is best viewed as an early warning: the capability exists, and operational playbooks could follow quickly.
If your organization experiments with or relies on local AI models, treat this as a wake-up call. LLM-enabled services inside the perimeter create a new attack surface—and this prototype shows how quickly that surface can be abused.
Practical steps for defenders
– Inventory and lock down all Ollama or similar LLM endpoints; disable unused instances and require authentication.
– Restrict who can prompt local models and what those models are allowed to do; apply strict role-based access and least privilege.
– Monitor for automated Lua execution, unexpected interpreter launches, and script creation in temp or user directories.
– Watch for behaviors consistent with ransomware: sudden spikes in file modifications, mass renames, rapid encryption-like I/O, and unusual access to diverse file types across shares.
– Favor behavioral and anomaly-based detection over static signatures, since model-generated scripts change from run to run.
– Implement application control to block unapproved interpreters and scripting engines from running or spawning encryption tools.
– Segment networks and limit egress; scrutinize exfiltration patterns and inspect encrypted outbound traffic where feasible.
– Maintain tested, offline-capable backups and clear recovery runbooks to minimize downtime if an incident occurs.
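One of the behavioral signals above, encryption-like writes, can be approximated with a simple Shannon-entropy heuristic: encrypted or compressed data measures close to 8 bits per byte, while documents and source code sit much lower. This is a minimal sketch with an illustrative threshold, not a tuned production detector.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimate entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag a buffer as encryption-like if its entropy exceeds the threshold.

    The 7.5 threshold is a placeholder; real deployments tune it against
    their own file population to balance false positives.
    """
    return shannon_entropy(data) > threshold

print(looks_encrypted(b"quarterly report draft, revision 3 " * 20))  # False
print(looks_encrypted(os.urandom(4096)))  # True (with overwhelming probability)
```

In practice this check is most useful combined with the other signals listed above, such as modification-rate spikes and mass renames, since compressed archives and media files also score high on entropy alone.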
Bottom line: PromptLock isn’t a campaign—yet. But it clearly illustrates how AI can automate the grunt work of intrusion, making attacks more adaptive and harder to spot with traditional signatures. Security teams should get ahead of this curve now by hardening local AI infrastructure and leaning into behavior-focused detection and rapid containment.