A hacker leveraged technology from Anthropic as part of a vast cybercrime scheme that has affected at least 17 organizations, the company said, marking an “unprecedented” instance of attackers weaponizing a commercial artificial intelligence tool on a widespread basis.
The hacker used Anthropic’s agentic coding tool, Claude Code, in a data theft and extortion operation that, over the past month, hit victims across government, health care, emergency services and religious institutions, according to the company’s August threat intelligence report published this week. The attacks resulted in the compromise of health care data, financial information and other sensitive records, with ransom demands ranging from $75,000 to $500,000 in cryptocurrency.
The campaign demonstrated a “concerning evolution in AI-assisted cybercrime” in which a single user can operate like an entire cybercriminal team, according to the report. The company said the hacker used AI as both a consultant and an active operator to execute attacks that would otherwise have been more difficult and time-consuming.
Anthropic, an AI startup founded in 2021, is one of the world’s most valuable private companies and competes with OpenAI and Elon Musk’s xAI as well as Google and Microsoft. It has positioned itself as a reliable, safety-conscious AI firm.
Anthropic also reported malicious use of Claude in North Korea and China. North Korean operatives have been using Claude to obtain and maintain fraudulent remote, high-paying jobs at technology companies, a scheme intended to fund the regime’s weapons program. These actors appear to be completely dependent on AI to perform basic technical tasks such as writing code, according to Anthropic.
A Chinese threat actor used Claude to compromise major Vietnamese telecommunications providers, agricultural management systems and government databases over a nine-month campaign, according to the report.
While Anthropic’s investigation focuses specifically on Claude, the company wrote that the case studies show how cybercriminals are exploiting advanced AI capabilities at every stage of their fraud operations. Anthropic has found that AI-generated attacks can adapt to defensive measures in real time while making strategic and tactical decisions about how to most effectively exploit and monetize targets.
Other AI firms have also reported malicious use of their technology. OpenAI said last year that a group with ties to China posed as one of its users to launch phishing attacks against the AI startup’s employees. OpenAI has also said it has shut down propaganda networks in Russia, China, Iran and Israel that used its technology.