FinCEN: Deepfake Fraud Schemes Target Financial Institutions

I’m probably dating myself, but does anyone remember the old TV commercials with the tagline: “Is it live or is it Memorex?” The most famous one featured the legendary jazz singer Ella Fitzgerald and the shattering glass -- it’s on YouTube for those unfamiliar with the ad (or cassette tapes, for that matter). Well, these days you have to ask yourself: is it real, or is it a “deepfake” image, video, or audio file?

The Financial Crimes Enforcement Network (FinCEN) recently issued an alert (FIN-2024-Alert004) to help financial institutions identify fraud schemes involving the use of deepfake media created with generative artificial intelligence (GenAI) tools. Deepfake media, or “deepfakes,” are a type of synthetic content that uses artificial intelligence/machine learning (AI/ML) to create realistic but inauthentic videos, pictures, audio, and text. See the Department of Homeland Security (DHS) report “Increasing Threat of Deepfake Identities.”

Since 2023, FinCEN has observed an increase in suspicious activity reports (SARs) describing the suspected use of deepfakes in fraud schemes targeting financial institutions and their customers/members. These schemes often involve criminals altering or creating falsified documents, photographs, and videos to circumvent financial institutions’ customer/member identification/verification procedures and customer/member due diligence controls.

Malicious actors also combine GenAI images with stolen personally identifiable information (PII) or entirely fake PII to create synthetic identities. Criminals have successfully opened accounts using fraudulent identities suspected to have been produced with GenAI and used those accounts to receive and launder the proceeds of other fraud schemes, including online scams and consumer fraud, such as check fraud, credit card fraud, authorized push payment fraud, loan fraud, and unemployment fraud.

According to FinCEN, financial institutions often detect GenAI and synthetic content in identity documents by conducting re-reviews of a person’s account opening documents. When investigating a suspected deepfake image, reverse image searches and other research may reveal that an identity photo matches an image in an online gallery of faces created with GenAI. Multifactor authentication (MFA) and live verification checks (e.g., confirmation via audio or video) have also been effective tools for some institutions in reducing their vulnerability to deepfake documents. Third-party service providers may also offer “more technically sophisticated techniques,” such as examining an image’s metadata or using software designed to detect possible deepfakes or specific manipulations.
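
For institutions curious about the metadata-examination approach the alert mentions, the short Python sketch below shows one way to pull EXIF tags from a submitted identity photo using the Pillow library. This is only an illustrative assumption on my part, not a method prescribed by FinCEN; the file name and the interpretation notes in the comments are hypothetical, and production-grade detection would rely on dedicated vendor tooling.

```python
# Minimal sketch (not from the FinCEN alert): list the EXIF metadata of a
# submitted identity photo so a reviewer can spot hints of manipulation,
# e.g., an editing-software tag or a complete absence of camera data.
# Assumes the Pillow library is installed; the file name is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS


def summarize_image_metadata(path: str) -> dict:
    """Return a {tag name: value} map of the image's EXIF metadata."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    meta = summarize_image_metadata("submitted_id_photo.jpg")  # hypothetical file
    if not meta:
        # GenAI output and re-saved screenshots often carry no EXIF data at all.
        print("No EXIF metadata found; the image may warrant closer review.")
    for name, value in meta.items():
        # Tags such as "Software" can reveal editing tools; missing camera
        # fields (Make, Model) are another reason to look more closely.
        print(f"{name}: {value}")
```

A check like this is only one data point; reverse-image lookups and commercial deepfake-detection services go well beyond what simple metadata inspection can show, and any hits should be weighed against the red flag indicators below.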

Red Flag Indicators of the Use of Deepfakes

What should credit unions look out for when it comes to the use of deepfakes? FinCEN identified the following red flag indicators to help financial institutions detect, prevent, and report potential suspicious activity related to the use of GenAI tools for illicit purposes:

•    An individual’s photo is internally inconsistent (e.g., shows visual tells of being altered) or is inconsistent with their other identifying information (e.g., the person’s date of birth indicates that they are much older or younger than the photo would suggest).

•    A person presents multiple identity documents that are inconsistent with each other.

•    A person uses a third-party webcam plugin during a live verification check, or attempts to change communication methods during the check due to excessive or suspicious technological glitches with remote verification of their identity.

•    A person declines to use multifactor authentication to verify their identity.

•    A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.

•    A person’s photo or video is flagged by commercial or open-source deepfake detection software.

•    GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.

•    Geographic or device data is inconsistent with the person’s identity documents.

•    A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.

A single red flag is not necessarily indicative of illicit or suspicious activity. FinCEN instructs financial institutions to consider the surrounding facts and circumstances before determining whether a specific transaction is suspicious or associated with illicit use of GenAI tools.

When filing suspicious activity reports, FinCEN requests that financial institutions reference this alert by including the key term “FIN-2024-DEEPFAKEFRAUD” in SAR field 2 (“Filing Institution Note to FinCEN”) and in the narrative to indicate a connection between the suspicious activity being reported and this alert. Filers should also include in the narrative any applicable key terms for the underlying typology.

This is only a snapshot of the alert. Click here to read FIN-2024-Alert004 in its entirety.

In other news, please see the two links below that may provide some helpful insight into the potential changes to the CFPB under the new Trump administration:

•    What Lies Ahead for the CFPB as Trump 2.0 Takes Shape?

•    Trump 2.0: Potential CFPB Changes in 2025 | McGlinchey Stafford PLLC

 

Federal Regulatory Compliance Senior Counsel
America's Credit Unions