The report also describes two other cases: Claude’s use in a fraudulent employment scheme tied to North Korea and its role in developing AI-generated ransomware.
Broader AI Security Concerns
Claude is not the only AI implicated in criminal activity. Last year, OpenAI disclosed that its systems had been used by hackers linked to China and North Korea, who employed generative AI to debug malicious code, research targets, and draft phishing emails.
These cases highlight a growing concern: advanced AI systems are no longer merely advisory tools for criminals. They can now perform complex tasks that once required specialized teams of hackers.
Anthropic emphasized that while these incidents underscore the risks of misuse, they also demonstrate why strong safeguards and proactive monitoring are essential.