The New Era of Phishing: AI on Both Sides of the Battle

It looks like a message from your colleague: flawless grammar, contextually relevant, even friendly in tone. But it’s a trap. Unbeknownst to the recipient, the email was written by an AI on behalf of a fraudster. This is not fiction; it’s the new reality of phishing in the age of generative AI.
Phishing is fueling both fraud and scams
Generative AI is helping drive the explosion of activity across the compound attack surface – the convergence of fraud and scams that many banks are dealing with today. An AI-crafted message can lead to an unauthorized account takeover when credentials are harvested, or to an authorized push-payment scam when a victim is persuaded to move their own money.
Generative AI has become a force multiplier for phishing
Generative AI has dramatically lowered the barrier to cybercrime, making phishing easier, faster, and more convincing, as it can:
- Write in multiple languages
- Produce fake websites or malware automatically
- Craft highly personalized messages
- Mimic writing styles for impersonation attacks
Tasks that once required coding or writing skills can now be executed with simple instructions.
A recent industry report, Arms Race: AI’s Impact on Cybersecurity, observed that AI phishing kit generators on the dark web can now produce phishing emails with better localized language, custom graphics, and tailored landing pages than older kits. The result is more convincing bait for victims and more tools in the hands of attackers.
Attack volumes are increasing
At Outseer, phishing detection initially followed a moderate, steady growth pattern of roughly 30% YoY in 2022, consistent with long-term trends in digital fraud. This changed markedly beginning in 2023.
Between 2023 and 2024, phishing activity nearly tripled year over year, a clear break from prior growth rates that signals a step-change in attack scale rather than an incremental increase. High levels of phishing attacks continued into 2025, reinforcing that the surge was not a temporary spike but part of a sustained shift in attacker behavior.
This acceleration coincided with a broader expansion in the availability of automation and generative AI tools. While such technologies deliver significant benefits for legitimate use cases, they also lower the effort required to create and scale phishing campaigns, expanding the pool of potential attackers.
Spearphishing is one of the most dangerous evolutions
Creating highly personalized fraudulent messages aimed at specific individuals or organizations is no longer a slow, labor-intensive process.
Modern AI tools can analyze vast amounts of open-source data and then incorporate those insights into tailored phishing content. This level of context-specific personalization makes it extremely challenging for recipients to distinguish the fake from the real. The email might mention recent company news or use insider jargon, bypassing generic tells.
The detection gap
For security teams and individual targets alike, this means that common red flags like vague language, obvious grammar mistakes, and generic requests disappear. Highly personalized, context-aware phishing slips past traditional defenses and increases the likelihood of successful business email compromise, payment fraud, and data theft.
Fighting Back: AI-Powered Defenses Against Scams
To counter AI-generated attacks effectively, organizations must leverage AI themselves. Outseer’s FraudAction team applies artificial intelligence on the defensive side for real-time phishing detection.
The sheer volume and variety of phishing attacks today make it impossible for human teams alone to keep up. AI systems can automatically process massive datasets, including emails, URLs, domain registrations, and dark web chatter, to identify patterns or anomalies that signal phishing campaigns.
Speed is critical in stopping phishing attacks.
AI models operate in real time, evaluating emails or web traffic as they arrive. An incoming email can be analyzed immediately for phishing indicators of brand abuse or impersonation. Advanced algorithms, including deep learning, allow these systems to detect malicious activity that traditional tools might miss.
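To make the idea of real-time indicator analysis concrete, here is a minimal sketch of a rule-based email scorer. It is purely illustrative and not FraudAction’s actual system: the indicator phrases, the protected domain, and the score weights are all invented for the example, and a production system would use trained models over far richer features (headers, URL reputation, sender history, and so on).

```python
import re

# Illustrative indicator lists -- invented for this sketch, not a real feed.
URGENCY_PHRASES = ["verify your account", "urgent action required", "password expires"]
TRUSTED_DOMAINS = {"example-bank.com"}  # hypothetical brand being protected


def lookalike_domain(domain: str, trusted: set) -> bool:
    """Flag domains that embed a trusted brand name but are not the brand itself."""
    for t in trusted:
        brand = t.split(".")[0]
        if brand in domain and domain not in trusted:
            return True
    return False


def phishing_score(subject: str, body: str, sender_domain: str) -> float:
    """Combine simple indicators into a 0..1 score (weights are arbitrary here)."""
    score = 0.0
    text = f"{subject} {body}".lower()
    # Indicator 1: urgency language typical of credential-harvesting lures.
    if any(p in text for p in URGENCY_PHRASES):
        score += 0.4
    # Indicator 2: embedded links pointing at brand-impersonating domains.
    urls = re.findall(r"https?://([^/\s]+)", body)
    if any(lookalike_domain(u.lower(), TRUSTED_DOMAINS) for u in urls):
        score += 0.4
    # Indicator 3: the sender's own domain impersonates the brand.
    if lookalike_domain(sender_domain.lower(), TRUSTED_DOMAINS):
        score += 0.2
    return min(score, 1.0)


# A lookalike link plus urgency language yields a high score; a benign
# message from the genuine domain scores zero.
suspicious = phishing_score(
    "Urgent action required",
    "Please verify your account at http://example-bank.com.secure-login.net/",
    "example-bank-support.com",
)
benign = phishing_score("Lunch tomorrow?", "See you at noon.", "example-bank.com")
```

In a real deployment this kind of scoring would run inline as mail arrives, with the model retrained as analysts confirm new campaigns, which is the adaptive loop described below.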
FraudAction’s platform uses adaptive AI to continuously update its detection models with confirmed phishing templates, URLs, and attack techniques, ensuring rapid recognition and mitigation of new variants. By combining learning and real-time context, AI helps security teams stay one step ahead of attackers.
AI is a double-edged sword
Although there is understandable concern about the rise of AI-driven cyber threats, the technology itself is not the enemy. The same capabilities that criminals misuse can be harnessed for positive outcomes, including faster detection, smarter automation, and greater resilience, especially when you connect phishing signals to both scam prevention and account takeover defenses.
Keeping the human in the loop
At Outseer, we combine advanced AI—including generative AI, agentic AI, and predictive ML models—with expert human oversight. This creates a system that keeps our clients protected and proactive against AI-driven threats across the new, compound attack surface of fraud and scams.
For more insights into the attack trends shaping 2026, watch this on-demand webinar with About Fraud and Outseer: "Scam Warfare: Scam Controls Every Bank Needs in 2026"