The widespread use of advanced AI by individuals and businesses has led to its exploitation by cybercriminals. AI’s adaptability enables automated, accelerated, and sophisticated attacks. Key concerns include:
- Malicious Software: AI tools like ChatGPT can write harmful code and automate large-scale attacks.
- Data Theft: AI can log smartphone inputs to capture sensitive information such as messages and bank codes.
- Autonomous Botnets: Swarm intelligence allows botnets to self-repair and maintain malicious networks.
AI in Password Cracking
Recent Kaspersky research highlights the risks in password security. The July 2024 leak of 10 billion lines of data revealed that 32% of passwords can be cracked within an hour using modern GPUs and brute-force methods. AI methods could crack 78% of passwords even faster, with only 7% being sufficiently secure against long-term attacks.
AI in Social Engineering
AI’s role in social engineering includes generating realistic phishing content and deepfakes. These tools can create personalized and convincing phishing emails, impersonate individuals, and conduct scams such as the $25 million Hong Kong attack using deepfakes.
AI Vulnerabilities
AI itself is vulnerable to attacks such as prompt injections and adversarial attacks, which can deceive AI systems and disrupt their functions. Addressing these vulnerabilities is crucial as AI becomes more integrated into daily life through products like Apple Intelligence and Microsoft Copilot.
Pursuit of truth
The post-truth era has posed hurdles for people who are especially vulnerable to being swayed by misinformation on social media and AI. Kaspersky has long employed AI to protect against threats, continuously researching AI vulnerabilities and harmful techniques to enhance their defenses.