AI Crime Wave: Anthropic Reports Criminals Using ‘Vibe Hacking’ at Record Levels

By Hardy Zad

AI company Anthropic warns that its chatbot Claude is being used to perform large-scale cyberattacks, with ransoms exceeding $500,000 in some cases.

Despite “sophisticated” guardrails, Anthropic said cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks.

In a Threat Intelligence report released Wednesday, members of Anthropic’s Threat Intelligence team, including Alex Moix, Ken Lebedev, and Jacob Klein, detailed several cases in which criminals misused the Claude chatbot, with some attacks demanding ransoms of more than $500,000.

The report found that the chatbot was used not only to provide criminals with technical advice but also to execute hacks directly on their behalf through “vibe hacking,” allowing them to carry out attacks with only a basic knowledge of coding and encryption.


What is ‘Vibe Hacking’? AI-Powered Social Engineering Explained

Vibe hacking is a form of AI-powered social engineering that manipulates human emotions, trust, and decision-making. In February, blockchain security firm Chainalysis forecast that 2025 could be crypto scams’ biggest year yet, as generative AI has made them more scalable and affordable for attackers.

Anthropic found one hacker who had been “vibe hacking” with Claude to steal sensitive data from at least 17 organizations — including healthcare, emergency services, government, and religious institutions — with ransom demands ranging from $75,000 to $500,000 in Bitcoin.

The hacker trained Claude to assess stolen financial records, calculate appropriate ransom amounts, and write custom ransom notes designed to maximize psychological pressure.

Anthropic later banned the attacker, but the incident shows how AI is making it easier for even entry-level coders to carry out cybercrime to an “unprecedented degree.”

“Actors who cannot independently implement basic encryption or understand syscall mechanics are now successfully creating ransomware with evasion capabilities [and] implementing anti-analysis techniques.”

North Korean IT Workers Misuse Anthropic’s Claude

Anthropic also found that North Korean IT workers have been using Claude to forge convincing identities, pass technical coding tests, prepare interview responses, and even secure remote roles at U.S. Fortune 500 tech companies.

Once hired, the workers continued to use Claude to carry out the technical work itself, Anthropic said, noting that the employment schemes were designed to funnel profits to the North Korean regime despite international sanctions.

Earlier this month, a North Korean IT worker was counter-hacked, revealing that a team of six shared at least 31 fake identities, obtaining everything from government IDs and phone numbers to purchased LinkedIn and Upwork accounts in order to mask their true identities and land crypto jobs.

One of the workers reportedly interviewed for a full-stack engineer position at Polygon Labs, while other evidence showed scripted interview responses in which they claimed to have experience at NFT marketplace OpenSea and blockchain oracle provider Chainlink.

Anthropic said the new report is meant to publicly document incidents of misuse, both to assist the broader AI safety and security community and to strengthen the wider industry’s defenses against AI abuse.

It said that despite implementing “sophisticated safety and security measures” to prevent the misuse of Claude, malicious actors have continued to find ways around them.

Hardy Zad is our in-house crypto researcher and writer, covering the stories that matter from crypto and blockchain markets and their use in the real world.