As we move into the new year, it's essential to stay informed about the evolving tactics of cybercriminals. Recently, the FBI issued an alert that highlights how criminals leverage generative artificial intelligence (AI) to facilitate financial fraud. Let's use that alert as a framework to discuss how you, your colleagues and your loved ones can remain vigilant against these AI-enhanced schemes.
What Is Generative AI?
Generative AI refers to a category of artificial intelligence that takes data inputs and synthesizes new content based on patterns it has learned. While this technology can be used for positive applications—such as creative content generation, language translation and more—criminals are exploiting its capabilities to enhance the believability and scale of their fraudulent activities.
The FBI warns that the use of generative AI in fraud schemes is not just a passing trend; it represents a significant shift in how criminals operate, allowing them to deceive victims more effectively and efficiently.
How Are Criminals Using Generative AI?
As we discuss these criminal AI uses, you will see that the bones of these schemes are very familiar. However, AI enables criminals to spruce up tired old schemes in new and concerning ways. Keeping in mind that there are rarely “new” scams, the FBI outlines several specific tactics that criminals are employing through generative AI:
- AI-Generated Text: Criminals use AI-generated text for a range of fraudulent activities, including:
  - Social Engineering and Spear Phishing: By generating believable messages, criminals can more easily trick targets into providing sensitive information or money. Generative AI also lets them produce these messages in far greater volume, and because these schemes rely on volume, that increased capacity raises the danger.
  - Creating Fake Social Media Profiles: They can produce numerous fictitious profiles to lure victims into financial traps, such as romance or investment scams.
  - Language Translation: Generative AI helps non-English-speaking criminals communicate more effectively with U.S. victims by reducing linguistic errors.
  - Fraudulent Websites: Criminals can quickly generate content for fake investment platforms, often targeting cryptocurrency enthusiasts.
  - AI-Powered Chatbots: These bots can engage potential victims on fraudulent sites, increasing the likelihood that they will click malicious links. Chatbots also let fraudsters artificially scale the volume of their interactions, improving their odds of success.
- AI-Generated Images: Images created by generative AI can be used to:
  - Create Fake Social Media Profiles: In the past, examining an account's social media footprint could help expose fraud. With generative AI, criminals can generate convincing profile pictures that make fraud much harder for potential victims to spot.
  - Produce Fraudulent Identification Documents: This includes fake driver's licenses or other credentials used for identity theft and impersonation.
  - Craft Convincing Personal Communications: By generating images to share in private chats, criminals can create the illusion of authenticity.
  - Manipulate Public Sentiment: Criminals can create images of celebrities endorsing counterfeit products, or fabricate disaster scenes to promote fraudulent charities.
- AI-Generated Audio (Vocal Cloning): This technology uses artificial intelligence to create a synthetic replica of a person's voice, which can be misused to:
  - Impersonate Loved Ones: By generating audio clips in a loved one's voice, criminals can fabricate crises that demand immediate financial assistance.
  - Access Financial Accounts: Using audio clips to impersonate account holders, criminals can trick financial institutions into granting them direct access to accounts.
- AI-Generated Videos: Video content can be manipulated in the following ways:
  - Real-Time Video Chats: Criminals can create deepfake videos to impersonate authority figures during live calls, making their scams more convincing.
  - Private Communications: They can generate videos to "prove" they are who they claim to be, further enhancing their deception.
  - Misleading Promotional Material: Generative AI can be used to create fraudulent marketing content for investment schemes, preying on unsuspecting investors.
Protecting Yourself and Your Network
With the landscape of fraud evolving rapidly, it is crucial to implement strategies that can help you and those around you avoid falling victim to these sophisticated scams. Here are some practical tips:
- Establish a Secret Word or Phrase: Create a unique identifier with your family and friends that can be used to verify identity in case of suspicious calls or messages.
- Look for Imperfections: Train yourself to spot subtle signs that content may be AI-generated. These include distorted images (check out the fingers and teeth on otherwise convincing AI images!), unnatural movements in videos (your literal gut response of unease may be an invaluable tool) and unusual vocal patterns.
- Be Cautious with Personal Information: Set social media accounts to private, and only accept friend requests from people you know personally.
- Verify Identity: If you receive a suspicious call, hang up and independently verify the person’s identity by contacting them through known channels.
- Educate Yourself and Others: Share this article, or simply discuss the issue with the people you know.
- Report Suspicious Activity: If you encounter potential fraud or scams, report them to the appropriate authorities.
Conclusion
The intersection of generative AI and financial fraud represents a troubling evolution in cybercriminal tactics. By staying informed and vigilant, however, you can help protect yourself and your community from these threats and contribute to a safer digital environment for everyone.