Deepfake scams are rapidly spreading beyond social media and into everyday work tools, including video calls, identity checks, and online transactions, according to a new report from iProov, an identity verification company.
The iProov Threat Intelligence Report 2026 warns that artificial intelligence (AI) is making it easier for criminals to create highly convincing fake identities and impersonate real people at scale. These attacks are no longer rare; they are becoming part of everyday business risk.
“Identity is becoming the new battleground in cybersecurity,” said Dr. Andrew Newell, chief scientific officer of iProov.
He said generative AI (GenAI) now allows attackers to mass-produce digital impersonations, making fraud faster and harder to detect.
One alarming finding is a 1,151% surge in iOS-targeted injection attacks, in which fake video or biometric data is inserted into a system to bypass security checks. This shows that deepfakes are being used not just in scams but also to break into secure platforms.
The threat is already widespread. Research from the Ponemon Institute found that 41% of companies have experienced deepfake attacks targeting executives, while research firm Gartner reported that 37% of cybersecurity leaders have encountered deepfake incidents during video calls.
According to iProov, Southeast Asia is emerging as a key testing ground for these attacks. The report recorded a 720% spike in activity in the region during the third quarter of 2025, as criminal groups experiment with new techniques such as virtual camera hacks and stolen identity data. Once proven, these methods are quickly used in other parts of the world.
The growing availability of easy-to-use AI tools is also accelerating the problem. iProov cited platforms such as Kling AI and Nano Banana, which can create realistic video deepfakes from just a few images, lowering the barrier to entry for cybercriminals.
As deepfakes become more realistic and widespread, verifying identity online is becoming one of the biggest challenges in digital security.
