Last Updated: October 17th, 2025 | 6 min read | Categories: AI, Cybersecurity, Network Protection

How AI Phishing Is Powering A New Wave Of Cyberattacks

Phishing has long been one of the most effective ways for threat actors to infiltrate business systems, steal credentials or deploy ransomware. These attacks can be used as the starting point for data breaches, financial fraud or wider operational disruption. But as technology evolves, so too do the tactics behind these threats.

Cybercriminals are increasingly using AI to enhance their phishing campaigns. Generative tools like ChatGPT can craft faster, more personalized and more convincing attacks at scale. These AI-powered phishing attempts are harder to spot, bypass traditional filters and can target entire organizations in seconds. As this trend accelerates, it’s critical for businesses to adapt their defenses to improve their phishing detection and respond to this new wave of intelligent cyberthreats.

What Is AI Phishing?

AI phishing refers to the use of generative artificial intelligence tools to automate or enhance phishing campaigns. While it doesn't typically represent a new form of attack, AI can transform how familiar phishing techniques are executed. For example, it can automate the creation of attacks to hit businesses faster and at scale, increase personalization or make phishing messages harder to detect.

Cybercriminals may use large language models (LLMs) to instantly generate emails that mimic the tone, structure and vocabulary of legitimate communications. They can also reduce spelling and grammar errors that have traditionally been among the major red flags of suspicious content.

In more advanced scenarios, AI tools can also be used to create entire phishing websites or clone login portals to trick users into handing over credentials on pages that are almost indistinguishable from the real thing. In some cases, AI has even been deployed to synthesize voices for use in phone-based social engineering attacks. This makes the threat more versatile than ever, allowing even low-skilled criminals to launch high-quality campaigns that can bypass traditional email filters and trick even the most alert users.

How AI Is Changing The Phishing Landscape

Last year saw a 60% increase in AI phishing attacks

The impact of AI can be seen in how quickly threat actors are adopting the technology. For instance, one study by Zscaler found that 2024 saw a nearly 60 percent year-on-year increase in phishing attacks driven by generative AI, including voice phishing and deepfake schemes. As the technology becomes more powerful and widely available, AI is likely to drive almost all phishing campaigns in the coming years.

AI also makes these attacks more effective by stripping out many of the telltale signs that traditional anti-phishing tools and human readers rely on, such as grammatical errors, generic content and low personalization. Indeed, one study found that AI-enhanced spear phishing messages created with publicly available LLMs such as GPT-4o and Claude 3.5 achieved a 54 percent clickthrough rate – meaning more than one in two recipients can be fooled by these messages.

Real-World Examples Of AI-Enhanced Phishing

The power of AI has also been demonstrated in a range of real-world incidents as cybercriminals use it to increase the speed, scale and success of their attacks. The cases below highlight how these tools are being weaponized to bypass traditional defenses and exploit human trust:

  • AI-powered phishing automation: Uncovered by researchers at Varonis, SpamGPT is a phishing-as-a-service platform that drafts phishing emails, spoofs senders and automates the launch of entire campaigns. The tool lowers the barrier to entry for cybercriminals and helps scale attacks with minimal human input.
  • Building phishing sites in seconds: Okta's threat intelligence team identified attackers using AI-powered tools to generate fake login portals, creating realistic phishing infrastructure in under 30 seconds from simple text prompts. The tools have been seen impersonating a range of legitimate brands, including Microsoft 365 and cryptocurrency companies.
  • Deepfake voice used in finance fraud: In a widely reported case, AI was used to clone a company director's voice and likeness on a video call, tricking a Hong Kong-based employee into transferring $25 million. The attack involved multiple AI-generated personas and highlights the risk of deepfake-powered social engineering.

How To Defend Against AI Phishing

AI phishing raises the stakes for businesses. These attacks are more convincing, faster to produce and harder to detect. If they go unnoticed, malware or stolen credentials can be used to launch ransomware, cause data breaches or commit financial fraud.

The automation and personalization enabled by AI mean even less skilled threat actors can target businesses of all sizes with enterprise-grade phishing. To stay protected, organizations must strengthen their defenses across people, process and technology. The following steps are essential for achieving this:

  • Update training programs to include AI-enhanced phishing examples, especially deepfake and personalized content.
  • Deploy behavior-based email security tools that look beyond wording and scan for patterns in activity (a minimal sketch of one such check follows this list).
  • Implement multifactor authentication across all accounts to limit the impact of stolen credentials (see the TOTP sketch after this list).
  • Use anti data exfiltration (ADX) technology to prevent sensitive data from leaving the network.
  • Run phishing simulations regularly to test awareness and spot gaps.
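To make the behavioral detection point more concrete, here is a minimal sketch of one signal such tools commonly correlate with many others: a sender whose display name matches an internal executive but whose address comes from an external domain. The executive names and company domain below are hypothetical placeholders, not references to any specific product.

```python
# Minimal sketch of a display-name impersonation check, one of many
# signals behavior-based email security tools correlate. The names and
# domain are hypothetical placeholders.
from email.utils import parseaddr

EXECUTIVES = {"jane doe", "john smith"}  # hypothetical internal VIP names
INTERNAL_DOMAIN = "example.com"          # hypothetical company domain

def flags_display_name_spoof(from_header: str) -> bool:
    """Return True when the display name matches an internal executive
    but the message was sent from an external domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EXECUTIVES and domain != INTERNAL_DOMAIN

print(flags_display_name_spoof('"Jane Doe" <jane.doe@freemail.example>'))  # True: likely spoof
print(flags_display_name_spoof('Jane Doe <jane.doe@example.com>'))         # False: internal sender
```

Real products combine dozens of signals like this with baselines of normal activity; a single heuristic is only a starting point.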
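The multifactor authentication step can likewise be illustrated with a server-side time-based one-time password (TOTP) check. This sketch assumes the pyotp library and uses placeholder account names; any RFC 6238-compliant implementation follows the same pattern.

```python
# Minimal sketch of TOTP-based multifactor authentication using pyotp
# (assumed dependency: pip install pyotp). Account and issuer names are
# placeholders for illustration.
import pyotp

secret = pyotp.random_base32()  # generated once per user at enrollment
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app at enrollment time
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

def login(password_ok: bool, submitted_code: str) -> bool:
    # Even a phished password is not enough on its own: the current
    # six-digit code must also verify (valid_window tolerates clock drift).
    return password_ok and totp.verify(submitted_code, valid_window=1)

print(login(True, totp.now()))  # True: correct password and current code
print(login(True, "000000"))    # almost certainly False: code doesn't match
```

Because the code rotates every 30 seconds, credentials harvested by a phishing page quickly lose their value unless the attacker also intercepts the one-time code in real time.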
