During our May 3rd webinar, I discussed the intersection of cyberfraud and AI and unveiled what we do at Outseer to help protect businesses and consumers alike. As the realm of fraud continues to evolve, this is indeed a brave new world.

Outseer FraudAction is a solution that protects customers from phishing, scams, and bad actors that abuse your brand. The widespread adoption of Large Language Models (LLMs) like ChatGPT promises explosive growth for attackers targeting your customers with phishing and scams. Fraud is endlessly innovative, and since broad access to advanced LLMs is a recent development, we have not yet seen the full impact of this leap. Recent events like the leak of Meta's LLaMA model add further complications to forecasting how fraud will develop. In this blog post, we will explore how new AI tools like ChatGPT might help attackers with phishing, scamming, and cybercrime, and the steps organizations can take to protect themselves and their customers from AI-enabled cybercrime.

LLMs like ChatGPT are a type of AI that generates human-like text. Trained on massive datasets of written language, these models produce convincing, natural-sounding prose that can be difficult to distinguish from human writing. The same capability can be turned to generating phishing emails, social engineering messages, and other cybercrime lures that are hard to detect. Attackers can use LLMs to create sophisticated pretexts tailored to individual victims at scale, making targeted fraud attacks accessible to scammers of all experience and skill levels.

Cybercriminal scams share many scaling bottlenecks with legitimate business development. Sending emails or placing advertisements can easily be scaled up to reach a large potential audience, but responding promptly to every reply still requires a human element that cannot keep pace. Automation leveraging LLMs enables a small criminal group, or even a single attacker, to produce real-time personalized responses to an unlimited number of marks replying to lures. This is of particular concern with Business Email Compromise (BEC) attacks and investment scams, where the fraudster may need to engage with the victim for days or weeks to build rapport before asking for a payment.

Phishing attacks have been around for years, but LLMs are making them increasingly sophisticated. LLMs can generate convincing emails that appear to come from reputable sources, making it easier for attackers to trick victims into divulging sensitive information or clicking links that lead to malware. They can also craft believable stories and pretexts that coax victims into giving up account login details, passwords, credit card numbers, and other valuable data. Attackers can use AI to augment their operations, making them more efficient and effective: an LLM can generate thousands of unique phishing emails in a matter of seconds, increasing the attackers' likelihood of success and reducing the ability of anti-phishing solutions to filter out the messages.

Ultimately, the best course of action is to be aware of the threat and take preventive measures to protect your organization and your customers. Knowing what breached and leaked information about your organization is circulating is the first step: attackers with insight into your organization have better chances of success when phishing employees and customers alike. Less available information means less material for LLMs to draw on when crafting customized lures.

Leveraging solutions like FraudAction, which hunt for leaked information while proactively scanning online and across social media for brand-abusing scams and phishing pages, provides the strongest chance of staying ahead of advanced AI-supported cybercrime targeting your customers and organization. While LLMs and AI might revolutionize phishing and scams, fraud still depends on detectable infrastructure, such as fake social media presences and phishing pages, that gives away the attacker's intent and, more importantly, is vulnerable to disruption by solutions like Outseer FraudAction.

When FraudAction is used in concert with the full line of Outseer products, your organization becomes a problematic target, and attackers are looking for high success rates at scale. No solution will stop 100% of attacks, but simply being proactive in building defenses reduces the number of attackers willing to try to victimize your customers and organization.

If you’d like to see a demo of FraudAction, please get in touch; we are offering several over the next few weeks.

Maximilian Gebhardt

Head of Commercial Success for FraudAction

Max has 20 years of experience in fraud prevention and financial crime prevention for the US Government and major financial institutions. He has driven innovative fraud solutions for Citi and Fidelity Investments, managed fraud analytics teams, and designed anomaly detection methods for the US Department of State to spot immigration fraud and illicit technology transfer. He has consulted on digital fraud issues for dozens of top US, UK, EU, and Canadian banks and brokerages. He is based in the Dallas-Fort Worth area of Texas.