
February 17, 2026 • 10 min read
Optro and IAF report: The more you know about AI-enabled fraud, the better equipped you are to fight it

Richard Chambers
Internal auditors typically assume that knowing more about a risk makes us more confident in our ability to help our organizations respond to that risk. "Knowledge is power," as the old aphorism goes. In today's hypervolatile risk environment, knowledge is still power — but that power is often paired with a profound awareness of how much we don't know.
A new report from Optro and the Internal Audit Foundation (IAF), Internal audit and AI-enabled fraud, underscores this reality. The report finds, “Familiarity with AI-enabled fraud is associated with higher perceived risk, suggesting that deeper understanding of AI-enabled fraud contributes to greater awareness of exposure rather than reassurance.”
Fraud costs and frequency are growing globally. A 2025 TransUnion report found that fraud cost businesses an average of 7.7% of annual revenue last year, with scam/authorized fraud, account takeover, and synthetic identity fraud the top causes.
AI tools are expanding and accelerating fraud risk, introducing novel threats and new vulnerabilities — and most organizations aren’t prepared. The more you know about AI-enabled fraud, the more urgency you’ll feel about educating yourself, upskilling your team, and equipping your organization to detect and combat it. For a clearer view of the battlefield, review these key insights and trends and download the IAF and Optro report today.
Top takeaways from the Internal audit and AI-enabled fraud report
Limited but growing awareness

- Approximately one in three respondents is very or extremely familiar with AI-enabled fraud, 51% are only somewhat familiar, and 15% have minimal or no familiarity.
- As mentioned, perceived risk levels tend to increase alongside respondents’ familiarity. But only 27% of respondents perceive AI-enabled fraud risk as high or very high.
Unprepared teams, short on skills and resources

- Over 62% of respondents say their internal audit function is either unprepared or minimally prepared to successfully detect AI-enabled fraud.
- More than half identified a lack of appropriate technology/tools (57%) or insufficient staff with relevant skills or expertise (55%) as their most significant obstacles.
Limited understanding of key risks

- Among the eight risks listed, respondents are most concerned about AI-powered phishing (88%), use of fabricated invoices or financial documents (65%), and automated social engineering (58%).
- They are least concerned about synthetic identity fraud (27%). Yet synthetic identity fraud may be the fastest-growing financial crime in the U.S., representing 85% of all identity fraud cases.
7 AI-enabled fraud risk trends to know about
Internal auditors must educate themselves and their organizations about AI-enabled fraud. Below are overviews of key trends and how internal audit can respond.
Broadly speaking, ensure risk assessments, policies, and training incorporate scenarios specific to each scheme. Focus on how processes and controls can be strengthened to improve authentication, encryption, monitoring, and detection. And remember: AI tools are key to combating AI-enabled fraud.
1. Synthetic identity fraud
Fraudsters use AI tools to fabricate synthetic identities, combining real and fictional details. Example: A made-up name/DOB/address paired with a real Social Security number is used to apply for loans, benefits, insurance, or even jobs.
Review and pressure-test identity verification processes in vendor due diligence, customer onboarding, and hiring. Consider advanced verification mechanisms (e.g., biometric/behavioral authentication, device fingerprinting, IP/network reputation).
2. Deepfake audio or video impersonation
Fraudsters use AI-generated or AI-altered video, audio, or images to impersonate known, trusted persons to manipulate their targets. Example: A deepfake video or voice clone asks employees to send money, provide authorization, or bypass key controls.
Collaborate with IT, HR, finance, and other teams to assess and test communication verification controls, especially in high-risk areas (e.g., treasury, procurement). Review approval/escalation processes, requiring multi-layered verification for high-risk actions.
3. AI-assisted social engineering (e.g., phishing, BEC, impersonation)
Fraudsters use AI to quickly develop and scale automated, personalized phishing, business email compromise (BEC), and impersonation campaigns. Example: AI-powered chatbots pose as customer support agents to extract credentials or one-time passcodes.
Evaluate controls, testing identity verification, payment authorization, and escalation procedures against realistic social engineering fraud scenarios. Assess monitoring and reporting methods.
4. AI-powered insider threats
Employees or contractors use AI tools to scrape data, develop/insert harmful code, manipulate systems, or exploit vulnerabilities. Example: An employee uses AI-generated phishing emails to persuade coworkers to approve access requests, and an AI coding assistant to automate the gradual extraction of sensitive data.
Expand beyond traditional behavioral monitoring to account for how AI tools can amplify insider misuse. Collaborate with IT and HR to evaluate and monitor access controls and privileges, logging/monitoring, AI usage, and policies/training.
5. Financial manipulation
Fraudsters use AI tools to automate, optimize, or disguise false transactions, misleading communications, or market abuse. Example: An employee uses AI to generate and post false revenue entries and supporting documentation, inflating results to meet performance targets.
Review and test controls around financial reporting, accounting entries, and forecasting to ensure they’re robust enough to detect AI-generated or -automated manipulation. Test transaction monitoring, reconciliation, and model governance to ensure anomalies are flagged.
6. Vendor fraud (e.g., fabricated invoices, forged contracts)
Fraudsters use AI tools to impersonate vendors, generate fake invoices, or automate fraudulent transactions, deceiving organizations into paying them. Example: AI is used to create shell companies and fabricate/automate contracts, invoices, and other documents.
Evaluate processes for verifying supplier and invoice authenticity. Use AI in audit procedures to detect anomalies in payment patterns, vendor creation processes, and invoice formats.
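As an illustration of the kind of analytics this recommendation points to, one classic audit screen for fabricated invoices is a first-digit (Benford's law) test: amounts that arise naturally tend to follow a known leading-digit distribution, while invented round-number invoices often don't. The sketch below is a minimal, hypothetical example — the data and any alert threshold are illustrative, not prescriptive, and real audit analytics would layer far more context on top.

```python
import math
from collections import Counter

def leading_digit(amount: float) -> int:
    """Return the first significant digit of a positive amount."""
    while amount < 1:
        amount *= 10
    while amount >= 10:
        amount /= 10
    return int(amount)

def benford_deviation(amounts) -> float:
    """Sum of absolute differences between the observed leading-digit
    frequencies and Benford's expected frequencies.
    Higher values suggest the amounts may not be naturally occurring."""
    counts = Counter(leading_digit(a) for a in amounts if a > 0)
    n = sum(counts.values())
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    return sum(abs(counts.get(d, 0) / n - expected[d]) for d in range(1, 10))

# Hypothetical example: fabricated invoices clustered on round numbers
# deviate far more from Benford's law than organically varied amounts.
fabricated = [5000.00, 5000.00, 9000.00, 9500.00, 5500.00] * 20
```

A screen like this only flags populations worth a closer look; it is a starting point for sampling and inquiry, not evidence of fraud on its own.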
7. AI-augmented money laundering
Fraudsters use AI-enabled methods and decentralized platforms to move funds, manipulate digital assets, mimic legitimate behavior, and exploit regulatory/enforcement gaps. Example: AI tools automatically move stolen funds through various crypto wallets and generate fake activity records.
If your organization has exposures in these areas, upskill in blockchain tracing, smart contract risks, and crypto fraud. Evaluate the effectiveness of anti-money-laundering and fraud controls for detecting AI-driven patterns (e.g., high-volume, low-value crypto transactions).
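The "high-volume, low-value" pattern mentioned above can be screened for with a simple rule before any machine-learning model is involved. The sketch below is a hypothetical, minimal example: the wallet data, field names, and thresholds are all illustrative and would need tuning against an organization's actual transaction data.

```python
from collections import defaultdict

def flag_structuring(transactions, max_value=200.0, min_count=50):
    """Flag wallets with many small transfers — a crude screen for
    structuring-style layering. `transactions` is an iterable of
    (wallet_id, amount) pairs; thresholds here are arbitrary and
    purely illustrative."""
    small_transfer_counts = defaultdict(int)
    for wallet, amount in transactions:
        if 0 < amount <= max_value:
            small_transfer_counts[wallet] += 1
    return {w for w, n in small_transfer_counts.items() if n >= min_count}

# Hypothetical example: one wallet makes 60 transfers of ~$150 each,
# while another makes a handful of ordinary larger payments.
txns = [("wallet_a", 150.0)] * 60 + [("wallet_b", 1200.0)] * 3
```

Rules like this produce candidates for investigation, not conclusions; they are most useful as a baseline against which more sophisticated AI-driven monitoring can be evaluated.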
Get curious about AI-enabled fraud risk
No doubt, AI’s fast proliferation and rapidly advancing capabilities will keep us humble. Most CAEs already feel ill-equipped to advise on organizations’ above-board AI use and governance — let alone how AI is used to commit fraud. Only 28% of respondents in Optro’s 2026 Focus on the Future survey were confident in their teams’ ability to audit AI risks effectively. After reading this, I suspect that even fewer CAEs will express confidence.
Fortunately, our profession’s innate curiosity fuels its ongoing relevance. Curiosity is among our most vital superpowers, helping us differentiate our value in the age of AI. Now, it’s time to direct that curiosity at AI-enabled fraud — and lead the way in helping our organizations prepare for the battle to come. Download the report today.
About the author

Richard Chambers, CIA, CRMA, CFE, CGAP, is the CEO of Richard F. Chambers & Associates, a global advisory firm for internal audit professionals, and also serves as Senior Advisor, Risk and Audit at Optro. Previously, he served for over a decade as the president and CEO of The Institute of Internal Auditors (IIA). Connect with Richard on LinkedIn.