
In early 2026, cyber security researchers at Google spotted an alarming new tactic emerging from cyber crime circles. Hackers were deploying a combination of AI-powered tools to create traps that were nearly impossible to defend against.
The attacks used Google’s Gemini AI tool to “develop tooling, conduct operational research, and assist during the reconnaissance stages”, before AI deepfakes were used to trick victims over spoofed Zoom calls. In one instance, a group linked to North Korea used an AI-generated deepfake of a prominent CEO to fool the victim into compromising their computer security.
The attack method forms part of a new wave of AI-enabled online crime that is leading to record levels of cyber attacks, scams and financial losses.
This weaponisation of AI is turning once uniquely human skills, like persuasion, mimicry and coding, into hyper-effective tools that can be accessed on demand and customised for any target.
It has led to what some experts describe as the fifth wave of cyber crime, contributing to huge losses for both companies and individuals, and making the internet more dangerous than ever before.
AI-driven social engineering
Social engineering attacks like phishing – where attackers trick people into handing over sensitive data or money – have been around for decades. But now generative AI tools are enabling attackers to create highly personalised impersonation attacks, mimicking a target’s friends, family members or colleagues with unprecedented accuracy.
They can come in the form of hyper-realistic email scams, synthetic voice calls, and even deepfake personas appearing on video calls.
“AI-powered social engineering is alarmingly effective,” Brian Sibley, chief technology officer at IT consultancy firm Espria, tells The Independent.
“Attackers can now mimic colleagues, suppliers, or executives with near-perfect accuracy. The only effective defence is to monitor behaviour continuously, spotting the subtle indicators that something just isn’t right.”
A January report from cyber security firm Group-IB found that cyber criminals could acquire phishing kits on the dark web for the price of a Netflix subscription. These “synthetic identity kits” offer AI video actors, cloned voices and even biometric datasets.
“From the frontlines of cyber crime, AI is giving criminals unprecedented reach,” said Group-IB CEO Dmitry Volkov. “AI is enabling criminals to scale scams with ease and create hyper-personalisation and social engineering to a new standard.”
‘Pig butchering’ scams
One way AI is accelerating social engineering attacks is through so-called pig butchering scams, where criminals spend weeks, or even months, building an emotional connection with the target. This period, known as “fattening the pig”, creates trust so that the victim is less sceptical when they are presented with a fake investment opportunity. The criminal then “slaughters” the pig by disappearing with the funds.
The advent of generative AI has transformed pig butchering from a niche type of consumer fraud into a major avenue for scammers. Fraudsters typically initiate contact through messaging apps, social media platforms or dating sites, before using apps like ChatGPT to establish the relationship.
Other forms of AI, such as face-swapping technology or deepfakes, can also be employed by criminals to trick targets into thinking that they are communicating with a genuine love interest.
Researchers have observed crime syndicates in South-East Asia adopting such techniques on a massive scale to lure victims, regardless of language barriers or technical skills.
Autonomous malware
Cyber criminals have found a new way to leverage artificial intelligence for the purpose of spreading malware – malicious software designed to steal data or damage computer systems.
This new type of malware uses large language models (LLMs) like Google’s Gemini to mutate its code in real-time as it spreads, making it nearly invisible to traditional antivirus software.
In a threat intelligence report in November, Google researchers described it as a “new operational phase of AI abuse, involving tools that dynamically alter behaviour mid-execution”.
They explained how new autonomous malware threats like Promptflux use a “Thinking Robot” function that allows AI to rewrite the malware’s entire source code on an hourly basis to evade detection.
“While Promptflux is likely still in research and development phases, this type of obfuscation technique is an early and significant indicator of how malicious operators will likely augment their campaigns with AI moving forward,” the researchers noted.
The fifth wave of cyber crime
Cyber criminals have been quick to adopt AI tools into their arsenals, leaving those responsible for defending against attacks playing catch up.
AI-driven scams surged 1,200 per cent in 2025, according to research from cyber security firm Vectra AI, with this surge expected to continue in 2026. By 2027, projected losses from AI-driven fraud could reach $40 billion – up from $16.6 billion in 2024.
Former Interpol Director of Cybercrime, Craig Jones, warned that AI has dramatically increased the speed, scale and sophistication with which criminals can operate in 2026. It has also made it harder than ever to detect and attribute cyber attacks.
“AI has industrialised cyber crime,” he said. “The shift marks a new era, where speed, volume, and sophisticated impersonation has fundamentally changed how crime is committed and how hard it is to stop.”