AI is reshaping the cybercrime landscape, and not in the way many hoped. Generative tools that once promised productivity gains are now powering more sophisticated attacks, from tailored phishing to full-blown ransomware campaigns. A new threat intelligence report from Anthropic details how criminals are misusing AI to build malware, automate operations, and apply psychological pressure at scale—a tactic the company calls “vibe-hacking.”
What’s new and why it matters
According to the report, threat actors attempted to weaponize Anthropic’s Claude AI systems to draft convincing phishing emails, generate ransom notes, and even work around built-in safety measures. In one of the most striking cases, a hacking group used Claude Code—Anthropic’s AI coding agent—to orchestrate an entire campaign spanning 17 organizations. Targets included government bodies, healthcare providers, religious institutions, and emergency services.
This isn’t just AI polishing grammar or tightening prose. It’s AI accelerating the entire attack chain: reconnaissance, social engineering, execution, and extortion. The attackers demanded ransoms topping $500,000, underscoring the high-stakes nature of AI-augmented cyber extortion.
Vibe-hacking: coercion powered by AI
Anthropic’s report spotlights the rise of “vibe-hacking,” where attackers use AI-generated language to exert emotional or psychological pressure on victims. Instead of clumsy, generic threats, targets receive messages that are polished, context-aware, and culturally tuned—making people more likely to click, reply, or pay. The result is a more persuasive con in less time, at greater scale.
Misuse goes beyond ransomware
The report emphasizes that the abuse of AI isn’t limited to encryption and extortion. Criminals are also deploying generative tools for:
– Job application fraud: Deceptive candidates used AI to bypass language and technical skill gaps, securing positions at major companies by gaming screening and interviews.
– Romance and relationship scams: Scammers built bots to craft personalized, multilingual messages and tailored compliments on platforms like Telegram. Victims spanned multiple regions, including the United States, Japan, and Korea.
– Phishing and social engineering: AI drafted convincing outreach with fewer errors, improved cultural nuance, and higher conversion rates.
How the company responded
Anthropic says it intercepted and shut down multiple attempts to misuse its systems. The company banned accounts tied to illegal activity, tightened safety guardrails, shared intelligence with government agencies, and updated its Usage Policy to explicitly prohibit generating scams or malware.
The broader takeaway is clear: as generative AI becomes more capable and accessible, misuses will diversify and intensify. The same features that help businesses write code, draft emails, and translate content can help criminals do the same—faster, cheaper, and with fewer barriers.
What organizations should do now
You don’t have to be a tech giant to be a target. Public agencies, hospitals, nonprofits, and enterprises of all sizes are now in the crosshairs of AI-accelerated attacks. Consider the following steps:
– Strengthen email and identity defenses
  – Enforce multifactor authentication, least-privilege access, and device hygiene.
  – Enable advanced phishing protection, DMARC/SPF/DKIM, and anomaly detection for email and chat.
  – Watch for sudden shifts in tone, unusual urgency, or unfamiliar writing styles in internal communications.
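Layered email defenses often start with the authentication results the receiving mail server has already stamped on each message. As a minimal illustration (assuming an RFC 8601-style Authentication-Results header; the parsing here is deliberately simplified), a triage script might flag any message where SPF, DKIM, or DMARC did not pass:

```python
import re

def auth_failures(auth_results_header: str) -> list[str]:
    """Return the mechanisms (spf/dkim/dmarc) that did not pass,
    based on an RFC 8601 Authentication-Results header value."""
    failures = []
    for mech in ("spf", "dkim", "dmarc"):
        match = re.search(rf"\b{mech}=(\w+)", auth_results_header)
        # A missing result is treated as a failure (fail closed).
        if match is None or match.group(1).lower() != "pass":
            failures.append(mech)
    return failures

header = "mx.example.net; spf=pass smtp.mailfrom=example.org; dkim=fail header.d=example.org; dmarc=fail"
print(auth_failures(header))  # → ['dkim', 'dmarc']
```

A real pipeline would use a full header parser, but the fail-closed default (treating a missing result as a failure) is the important design choice.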
– Update incident response for AI-era threats
  – Treat ransom communications and extortion attempts as data points for rapid triage.
  – Practice tabletop exercises that include AI-aided phishing and deep social engineering scenarios.
  – Predefine policies for ransom demands, public disclosure, and law enforcement engagement.
– Harden hiring and vendor processes
  – Use structured interviews, proctored technical testing, and identity verification to reduce AI-enabled applicant fraud.
  – Screen vendors for security maturity, especially if they integrate AI tools into their services.
– Educate continuously
  – Train employees on new phishing tactics, emotionally manipulative messages, and AI-crafted scams across email, SMS, and collaboration tools.
  – Encourage a culture of pause-and-verify before responding to urgent requests, payment changes, or credential prompts.
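The pause-and-verify habit can be backed by simple tooling that nudges users toward a second look. The sketch below is an illustrative keyword heuristic, not a production filter: the phrase lists are assumptions, and AI-polished phishing will often evade static lists, which is exactly why human verification remains the backstop.

```python
# Illustrative phrase lists; a real deployment would tune these
# and combine them with sender reputation and anomaly signals.
URGENCY_CUES = ("urgent", "immediately", "right away", "act now")
SENSITIVE_REQUESTS = (
    "wire transfer", "gift card", "bank details",
    "password", "verify your account", "payment details",
)

def needs_verification(message: str) -> bool:
    """Flag messages that pair manufactured urgency with a payment
    or credential request, a common AI-polished phishing pattern."""
    text = message.lower()
    urgent = any(cue in text for cue in URGENCY_CUES)
    sensitive = any(req in text for req in SENSITIVE_REQUESTS)
    return urgent and sensitive

print(needs_verification("Please act now and send the wire transfer."))  # → True
print(needs_verification("Monthly report attached, no rush."))           # → False
```

Requiring both signals keeps false positives down: urgency alone is common in legitimate mail, and it is the combination with a payment or credential ask that should trigger out-of-band verification.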
– Govern your own AI use
  – Establish clear internal policies on how staff may use AI tools.
  – Log usage where possible, and restrict sensitive data from being entered into third-party models.
  – Partner with providers that publish safety practices and respond quickly to abuse.
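For the data-restriction point above, one common pattern is a redaction layer that masks obvious secrets before a prompt ever leaves the organization. The patterns below are illustrative assumptions (the `sk-` key format, for instance, is just one vendor's convention) and would need tuning per environment:

```python
import re

# Illustrative patterns only; real deployments should cover the
# identifiers and secrets specific to their environment.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),  # assumed key format
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values before a prompt is sent to a third-party model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the ticket from jane.doe@example.com, key sk-abcdef1234567890AB."
print(redact(prompt))  # → Summarize the ticket from [EMAIL], key [API_KEY].
```

Sitting this behind an internal proxy also gives you the usage log the policy calls for, since every outbound prompt passes through one auditable choke point.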
The bottom line
The rise of vibe-hacking shows that cybercrime isn’t just about code and encryption—it’s about persuasion. Generative AI gives attackers a megaphone for manipulation, enabling precise, scalable pressure campaigns alongside traditional technical exploits. While Anthropic’s actions show that platform-level defenses can blunt some attempts, organizations should assume these tactics will evolve and proliferate.
Staying ahead means pairing technical controls with human-ready defenses: resilient authentication, vigilant monitoring, clear AI policies, and relentless education. As AI capabilities grow, so must the safeguards that keep them from being weaponized.






