A Fraudster’s Paradise – O’Reilly

Dark web forum posts mentioned the phrase “AI agent” far more in the second half of 2025 than in the first half. Could this mean that fraudsters are charmed by the AI hype? Or is AI truly a game changer for cybercrime? AI-related discussions—evident both in what “the bad guys” are saying and in what fraud-fighters are exploring—are everywhere. In fact, a Visa PERC analysis comparing data from June to November 2025 to the six months before that found an increase of more than 450% in dark web agentic AI-related posts.

Documented financial losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone. And Visa reported a 25% increase in malicious bot-initiated transactions in the second half of the year. There’s AI-generated scam and fraud content everywhere you look, and it doesn’t always give Eric Clapton six fingers to make it obvious that something’s off:

AI-generated post from Facebook promoting a fake tour of classic rock icons, shared in December 2025

None of this is happening in a vacuum. Online fraudsters have embraced GenAI with open arms in a way that has changed the shape of the online fraud landscape. At the same time, ordinary consumers are leaning into the kind of cheating that GenAI makes easy.

How can the financial and digital ecosystem fight back against this new wave of AI-powered fraud? If past experience is any guide, collaboration and knowledge sharing among fraud-fighting professionals is key to our collective ability to brace for impact. This short blog post is a call to harness the power of AI to boost the know-how of digital defenders and, with it, strengthen our overall defense.

Remember COVID?

AI isn’t the first time that the rules of cybercrime have changed dramatically (and it won’t be the last). Back in 2020, during the first COVID-19 lockdowns, work-from-home schemes exploded, along with first-party fraud, phishing scams, and more. At the time, the community responded effectively. Teams of fraud-fighting experts joined forces to meet virtually, learn the new terrain, and eventually write a study guide that would empower organizations to shield communities from a surge of digital fraud. Our book Practical Fraud Prevention was the result of that collaboration, and it quickly became a helpful training resource for many teams in the ecosystem in their fight against online financial crime.

Today, a new wave of fraud is emerging, powered by AI and specifically GenAI. While some AI-powered schemes will prove to be passing fads, others are becoming truly powerful and dangerous tools in nefarious hands. It’s only natural that the professional community will once again regroup to form a playbook against these trends.

As we interview experts in diverse fields for our next book, The Fraud Fighter’s AI Playbook (with coauthor Chen Zamir, now in early release on O’Reilly), we’re building a picture of the ways that GenAI is changing not just the ways fraudsters operate online but also the shape of the online fraud landscape itself. We’re seeing, too, how vital it is that fraud fighters themselves invest in exploring and using this technology to boost their success, strategy, and internal reputation in their own companies.

More fraudsters, doing more harm

There are more online fraudsters today, carrying out more fraud attacks, than ever before. Not all of this can be blamed on GenAI. Rather, GenAI entered a fraud world that, in retrospect, was poised to leverage it for crooked expansion.

The COVID-19 pandemic drew many new fraudsters online, through three main tracks:

  • Digital transformation + time to spare. Retailers, banks, and other organizations had to cram years of digitization into weeks or months. It was inevitable that this would result in some vulnerabilities in processes, policies, or systems. People stuck at home with no work were tempted to make some money exploiting these weaknesses, and they learned fast.
  • COVID relief programs. Programs put in place to tide folks over the difficulties of the pandemic didn’t always include checks to verify identities or claims. Those who learned how to make fraudulent submissions, sometimes even by creating fake or synthetic identities or businesses, found out how easy and how lucrative that can be.
  • Scam compounds. Human trafficking rings pivoted from in-person exploitation to forcing people online to carry out phishing attacks and scams of all kinds.

None of these trends has disappeared, and the scam compounds in particular have expanded massively since the end of the pandemic. This was the world that GenAI was born into.

With GenAI, far more fraud attacks are possible, at a level of personalization that would have been impossible at scale without the technology. Common uses of GenAI in the wild include:

  • Phishing campaigns, personalized by using open source information about the target (sourced via GenAI), and the appropriate language and cultural touchstones for each victim (made easy by putting conversations through GenAI). More generally, the impact is seen in scams of all kinds, from romance scams to investment scams to kidnapping, catfishing, and blackmail scams—similarly personalized.
  • Malware and bot creation to carry out attacks, steal information, set up fake accounts, and more. Fraudsters no longer need much technical ability to create malicious programs or automation, enabling more fraudsters to amp up their reach. 
  • Deepfakes. Whether for clickbait generation or for financial scams, deepfakes allow attackers to pass identity validation checks. In shocking cases, deepfakes were even used to fake the path to a job (and the salary and access to data that comes with it). Broadly, AI simplifies the process of creating fake or synthetic identities using open source information, paired with stolen personal credentials.
  • Fake websites, fake apps, fake advertising. Manipulated content can be utilized to persuade consumers to purchase nonexistent or low-quality products, but it can also target digital advertising. Major brands are losing millions of dollars to AI schemes that generate views, impressions, or clicks for undeserved ad revenue. A study of fake mobile apps, led by Gilit’s team at DoubleVerify, found that fake iOS apps became three times more common (accompanied by six times more fake Android apps) in 2025 compared to previous years, a trend amplified by AI.
  • Rapid evolution in sophistication and evasion of bot schemes. Agentic AI is getting better every day, which means that operators of bot networks are smirking at CAPTCHA challenges (“solve this puzzle to prove you’re not a bot”). Fraud attacks in 2023 still made rookie mistakes, like bot networks that posed as human beings watching TV content while reporting the device settings of refrigerators. Attackers in 2026 will no longer fail at basic deception, because they have AI chatbots (FraudGPT, WormGPT, and the like) to guide them on their way.
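The refrigerator giveaway above hints at a defense that still catches less-sophisticated bots: cross-check the traffic's claimed activity against its device fingerprint. Here is a minimal sketch of that idea; the device categories and plausibility table are illustrative assumptions, not the rules of any real fraud platform:

```python
# Minimal sketch: flag sessions whose claimed activity doesn't fit the device.
# The plausibility table below is an illustrative assumption, not real product logic.

# Device types that plausibly perform each activity
PLAUSIBLE_DEVICES = {
    "video_streaming": {"smart_tv", "phone", "tablet", "desktop"},
    "checkout": {"phone", "tablet", "desktop"},
}

def is_suspicious(session: dict) -> bool:
    """Return True when the reported device type is implausible for the activity."""
    allowed = PLAUSIBLE_DEVICES.get(session["activity"])
    if allowed is None:
        return False  # unknown activity: don't flag on this rule alone
    return session["device_type"] not in allowed

sessions = [
    {"activity": "video_streaming", "device_type": "smart_tv"},      # plausible
    {"activity": "video_streaming", "device_type": "refrigerator"},  # the 2023 giveaway
]
print([is_suspicious(s) for s in sessions])  # [False, True]
```

A rule this simple is exactly what AI-guided attackers now sidestep, which is why it works best as one weak signal among many rather than a standalone check.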

The good news—if there’s any real good news—is that at least these aren’t new types of attacks. They’re familiar attacks, carried out more convincingly, at far greater scale.

GenAI is a side hustler’s best friend

It’s not just dedicated fraudsters using GenAI to expand their reach. Ordinary people use it to level up their cheating game too.

GenAI has made convincing refund fraud easy. Many retailers ask for photographic proof that an item has arrived broken or damaged, and with GenAI, that’s something that can be faked in seconds. Because the image is generated from scratch for the purpose, there’s no original floating around online that a reverse image search could surface to expose the cheat.

Some people have gotten even more creative, using the same kind of trick as part of an insurance claim. Others use GenAI to whip up fake receipts, which they then submit as expense claims to their employers.

It’s important to note that, as with the professional use cases, it’s not that these cheats are new types of attacks. What’s new is the ease, scale, and sophistication with which they can be carried out.

The ground shifting under our feet

When we wrote Practical Fraud Prevention, we included a discussion of things like phishing, victim-assisted fraud, refund fraud, and so on. The focus of the book, though, was on the ways that fraudsters cash out their schemes. Follow the money, and find the fraudster.

Now, only a little more than three years after ChatGPT burst into all of our lives, that emphasis has shifted. Traditional third-party fraud, the kind you get when a fraudster uses your credit card online, isn’t even in the top three fraud concerns. TransUnion reports that the most business loss in 2025 came from scam/authorized fraud (24%), followed closely by synthetic identity fraud (20%) and account takeover (20%). That’s the GenAI impact.

There’s a significant price tag attached to numbers like that. The same report noted that “companies worldwide lost 7.7% of their annual revenue on average due to fraud over the past year.” In the US, it was 9.8%.

There’s also a worrying impact on trust. When deepfakes are common and convincing, who can ever believe their eyes? Customers don’t know which sites are real, which messages are authentic, or which ads or offers can be trusted. Businesses don’t know which claims are legitimate or how best to stay ahead of the verification challenges they now face. Marketplaces struggle to protect buyers from cheating sellers, and sellers from cheating buyers, and everyone from exploitation by malicious actors.

Ha! Fraud fighters have GenAI, too

The ray of hope in our research is that fraud fighters have GenAI too, and teams are already experimenting in a variety of ways. Some are looking at how they can use agents to expand their open source research to make their decisions faster and more accurate. Others are working on how to craft prompts to help analyze data or work out trends that can be used to identify and stop fraud. Still others are leveraging GenAI to analyze documentation, to pick out fakes or alterations. And so on.
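The prompt-crafting experiments mentioned above often start with something as modest as a reusable template that packages case data for an LLM to triage. The sketch below is hypothetical: the field names, rubric wording, and `build_review_prompt` helper are our own illustrative assumptions, not any team's actual workflow, and the resulting string would be sent to whatever model the team uses.

```python
import json

# Hypothetical sketch: assemble a structured analyst prompt asking an LLM to
# triage a batch of refund claims. Field names and rubric are illustrative.

RUBRIC = (
    "You are a fraud analyst. For each claim, rate the risk LOW/MEDIUM/HIGH "
    "and give a one-line reason. Consider claim frequency, amount, and account age."
)

def build_review_prompt(claims: list) -> str:
    """Combine the rubric and the claims (as pretty-printed JSON) into one prompt."""
    payload = json.dumps(claims, indent=2)
    return f"{RUBRIC}\n\nClaims to review:\n{payload}"

prompt = build_review_prompt([
    {"claim_id": "C-101", "amount": 349.99, "account_age_days": 3, "claims_90d": 4},
])
print(prompt)
```

Keeping the rubric in one place like this also makes it easier to version and review prompts the way teams already review detection rules.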

It’s also encouraging to see how teams are using GenAI to expand their reach internally within a company. In some ways, it’s almost like fraud departments are getting the assistant they’d always wanted to do the tasks they’d always meant to get to—like pulling the data and putting it together for a biweekly update to relevant stakeholders or creating detailed material with helpful illustrations or graphs for presentations to other departments.

It’s inevitable that when a totally new technology comes along, the fraudsters will have an upper hand initially. They aren’t hampered by considerations like regulatory concerns or legal requirements, and they don’t care about things like accountability, responsibility, or consumer trust. It’s pretty much in their job description to ignore those things, in fact.

The fraud-fighting industry has been sensibly cautious in working out how to evaluate and employ GenAI, but it’s clear that it isn’t standing still. The teams that do best with this evolving challenge will be the ones who work closely and consistently with departments across their company, staying on top of the business’s needs and how best to meet them.

The Fraud Fighter’s AI Playbook is available now in early release, only for O’Reilly members. Follow along as Gilit Saporta, Chen Zamir, and Shoshana Maraney write it—and get access to their insights before the general public. You can read five chapters now, with more on the way soon.
