The Federal Bureau of Investigation (FBI) has issued a public warning about criminals exploiting generative artificial intelligence (AI) to enhance the believability and scale of their fraudulent schemes. The FBI highlighted that generative AI allows criminals to deceive their targets more efficiently, synthesizing new, convincing content based on user input data.
Generative AI can be used to facilitate various crimes, including fraud and extortion, by producing content that corrects the human errors which might otherwise serve as warning signs of fraud. Because AI-generated content is difficult to identify, the public should exercise increased awareness and scrutiny.
The FBI provided examples of how generative AI is used in fraudulent schemes:
AI-Generated Text
Criminals use AI-generated text to enhance the credibility of social engineering, spear phishing, and financial fraud schemes such as romance scams, investment fraud, and other confidence schemes, or to reduce the likelihood of their fraud being detected. Criminals can use AI to create fake social media profiles, translate content to minimize grammatical errors, produce fraudulent website content, and embed AI-powered chatbots in fraudulent websites to prompt victims to click on malicious links.
AI-Generated Images
Criminals use AI-generated images to create convincing social media profile photos, identification documents, and other visuals in support of various fraud schemes, including investment scams, romance scams, identity fraud and impersonation schemes, counterfeit-product or non-delivery schemes, and market manipulation schemes, among others.
AI-Generated Audio, aka Vocal Cloning
Criminals use AI-generated audio to mimic well-known public figures or personal acquaintances and solicit payments. Criminals also use short AI-generated audio clips to impersonate individuals and gain access to their bank accounts.
AI-Generated Videos
Criminals use AI-generated videos to create believable depictions of public figures that bolster their fraud schemes, including real-time video chats with purported company executives, law enforcement, or other authority figures; private communications offered as "proof" of authenticity; and misleading promotional materials for investment scams.
To protect themselves, the public is advised to establish a secret word or phrase with family members to verify their identity, look for subtle imperfections in images and videos, listen closely for anomalies in audio, make social media accounts private, independently verify callers' identities before acting on their requests, avoid sharing sensitive information with people they have met only online or over the phone, and never send money to unknown individuals.