How AI Is Changing Romance Scams
Detecting romance scams has traditionally relied on a predictable weakness: the scammer could not prove they were who they claimed to be. They refused video calls, used stolen photos, and depended entirely on text-based communication to maintain the deception. Victims and fraud experts alike pointed to the refusal to appear on video as the single clearest warning sign.
That barrier is eroding. Artificial intelligence tools now available at low or no cost allow scammers to generate synthetic faces, conduct deepfake video calls, and produce cloned voice messages that sound like a specific real person. According to the Federal Trade Commission, reports of romance scams involving suspected AI-generated content increased substantially in 2024 and 2025, though the true scale is difficult to measure because many victims do not realize AI was involved (FTC Staff Report on AI and Consumer Protection, 2024).
This does not mean detection is impossible. AI-generated content has specific tells, and the verification techniques outlined in this guide are designed to expose synthetic media even as the technology improves.
How Scammers Use Deepfake Video
What Deepfake Video Is
Deepfake video uses machine learning models to superimpose one person's face onto another person's live video feed. The scammer runs software on their computer that captures their webcam feed, replaces their face with the face from stolen photos, and outputs the manipulated video to a video call application. The result is a video call where the other person appears to be someone they are not.
How It Is Used in Romance Scams
A scammer using stolen photos of a specific person can now conduct brief video calls that appear to show that person. The calls are typically kept short (30 seconds to 2 minutes), conducted in low lighting, and involve minimal head movement or unusual facial expressions, all of which reduce the likelihood of visible artifacts.
This directly undermines the traditional advice to "insist on a video call" as a verification method. A brief, controlled call no longer provides the same level of assurance it once did.
Real-World Reports
In 2024, the FBI IC3 noted an increase in romance fraud complaints where victims reported having seen the person on video before discovering the fraud. While the FBI does not publicly quantify AI-specific complaint categories, Special Agent in Charge statements at multiple field offices have referenced deepfake video as an emerging tool in romance fraud operations (FBI IC3 Public Statements, 2024).
A 2023 investigative report by Wired documented cases where victims in the UK and Australia had video-called their online partner multiple times before losing money, only to discover through law enforcement that the person's face had been digitally generated. The victims described the calls as "slightly off" but attributed the oddities to poor internet connections.
How to Detect Deepfake Video Calls
Deepfake technology is improving, but current systems have consistent weaknesses:
Visual artifacts to look for:
- Unnatural blurring or shimmer around the jawline, hairline, and ears
- Skin texture that appears too smooth or plastic-like
- Eyes that do not blink at natural intervals or that track slightly off
- Shadows on the face that do not match the lighting in the room
- The face "glitching" momentarily during rapid movements
- Glasses frames that distort or disappear briefly
- Teeth that appear blurred or merged together
Behavioral tests you can perform:
- Ask them to turn their head slowly to show a full profile view (deepfakes struggle with side angles)
- Ask them to place their hand over part of their face (deepfakes break when objects cross the face boundary)
- Ask them to move close to the camera and then far away (scaling is inconsistent in deepfakes)
- Ask them to tilt their head upward so you see under their chin (an angle deepfakes rarely handle well)
- Keep the call going for 10+ minutes (with most tools, output quality degrades the longer a call runs)
- Change topics and watch for facial expression mismatches (AI responses lag slightly behind natural reactions)
How Scammers Use Voice Cloning
What Voice Cloning Is
Voice cloning uses AI to generate speech that mimics a specific person's voice. Modern voice cloning tools can produce convincing results from as little as 3-10 seconds of sample audio. The technology captures the unique characteristics of a person's voice, including pitch, cadence, accent, and tone, and allows the operator to type text that is then spoken in the cloned voice.
Where Scammers Get Voice Samples
- Public social media videos (Instagram stories, TikTok, Facebook)
- YouTube content
- News interview clips
- Podcast appearances
- Voicemail greetings
- Corporate promotional videos
If the scammer is impersonating a real person whose voice exists anywhere online, they can clone it.
How to Detect Cloned Voice Messages
Technical indicators:
- Audio quality that is unnaturally consistent across every message (no variation in room acoustics)
- Absence of natural speech artifacts: no ums, pauses, breathing sounds, or swallowing
- Slightly mechanical pacing that lacks the natural rhythm of spontaneous speech
- Background that is always perfectly silent (real environments have ambient noise)
- Emotional delivery that does not match the content (saying something sad in a neutral tone, for example)
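One of the indicators above, a background that is always perfectly silent, can be roughly quantified. The sketch below is a minimal stdlib-Python heuristic, not a detector: it measures the quietest moment in a voice note, on the assumption (illustrative, not an established standard) that real-room recordings rarely dip below roughly -60 dBFS while synthetic speech rendered over digital silence can reach negative infinity.

```python
import array
import math
import wave

def noise_floor_dbfs(path, window_ms=50):
    """Return the level (dBFS) of the quietest short window in a mono
    16-bit WAV file.

    Microphones always pick up some ambient noise, so genuine
    recordings have a measurable noise floor; text-to-speech output
    pasted over digital silence often does not. The -60 dBFS rule of
    thumb mentioned in the lead-in is illustrative only.
    """
    with wave.open(path, "rb") as wav:
        if wav.getsampwidth() != 2:
            raise ValueError("expects 16-bit samples")
        rate = wav.getframerate()
        samples = array.array("h", wav.readframes(wav.getnframes()))

    win = max(1, int(rate * window_ms / 1000))
    quietest = float("inf")
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        quietest = min(quietest,
                       20 * math.log10(rms / 32768) if rms else float("-inf"))
    return quietest
```

A voice note whose quietest window is pure digital silence is not proof of cloning (some apps apply aggressive noise suppression), but it is one more data point alongside the behavioral indicators below.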
Behavioral indicators:
- They send voice messages but consistently refuse live phone calls
- They cannot respond to spontaneous voice questions in real time
- When you reference something from a previous voice message, they cannot elaborate naturally
- Voice notes arrive with a delay that suggests generation time rather than recording time
AI-Generated Profile Photos
The End of Reverse Image Search as a Reliable Tool
Traditionally, reverse image search was a powerful way to detect stolen photos. Scammers used real people's images, which could be traced back to the original source. AI-generated photos created by generative adversarial networks (GANs) and diffusion models produce faces of people who have never existed. These photos will not appear in any reverse image search.
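The fingerprinting that reverse image search depends on can be sketched in a few lines. The toy "average hash" below operates on a flat list of grayscale values rather than a real decoded image, purely for illustration; production search engines use far richer features. It shows why a stolen photo survives re-uploading and compression (the fingerprint barely changes) while a freshly generated face matches nothing, because no original exists in any index.

```python
def average_hash(pixels):
    """Toy perceptual hash of an image downscaled to a small grid.

    pixels: grayscale values (0-255), e.g. 64 values for an 8x8 grid.
    Each bit of the hash records whether a pixel is brighter than the
    mean, so mild edits leave most bits unchanged.
    """
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Count differing bits between two hashes (0 = near-identical)."""
    return bin(a ^ b).count("1")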
How to Spot AI-Generated Photos
- Asymmetry: Real faces are naturally asymmetrical. AI-generated faces sometimes have unnaturally perfect symmetry.
- Background anomalies: Look behind the person. AI often generates distorted text, melted objects, or inconsistent architectural details.
- Ear inconsistencies: AI frequently generates ears that do not match each other or that have unusual shapes.
- Hair irregularities: Strands that appear to merge into skin or that have unnatural boundaries.
- Accessory distortion: Earrings, necklaces, or glasses that appear warped or inconsistent between photos.
- Too perfect: The photo looks like a professional headshot but has no photographer credit and does not appear anywhere else online.
Verification Beyond Photos
Because AI-generated photos cannot be detected through reverse image search, verification must shift to behavioral and contextual checks:
- Do they have a social media presence that predates your conversation by years?
- Are there tagged photos of them posted by other real people (friends, family)?
- Do mutual connections exist who can verify their identity?
- Can they provide a LinkedIn profile with a verifiable employment history?
- Will they do an extended, unscripted video call with the tests described above?
Verification Checklist: Defending Against AI Impersonation
Use this checklist for any online relationship where doubt exists:
- Request a live video call at a time you choose, lasting at least 10 minutes
- During the call, ask them to turn their head to show both profile views
- Ask them to place a hand partially over their face
- Ask them to hold up a piece of paper with a word you specify written on it
- Request a spontaneous phone call (not voice notes) lasting at least 10 minutes
- Reverse image search their photos (effective for stolen photos, not AI-generated ones)
- Check whether their social media accounts have a history going back years
- Look for tagged photos posted by other people (not just self-posted images)
- Ask a trusted friend or family member to review the situation with fresh eyes
- If they have recommended an investment or trading platform, verify it with the SEC, FINRA, or CFTC
- If anything feels "slightly off," trust that instinct
What to Do If You Suspect AI Impersonation
- Do not confront the person. They may escalate the manipulation or vanish and destroy evidence.
- Save everything: screenshots of conversations, photos they sent, voice messages, video call recordings if possible, and financial records.
- Talk to someone you trust. A friend or family member can provide perspective you may not have when emotionally involved.
- Contact the AARP Fraud Helpline: 1-877-908-3360 (free, weekdays 8am-8pm ET).
- File a report with the FTC: reportfraud.ftc.gov
- File a complaint with the FBI IC3: ic3.gov
- If money was sent: Contact your bank immediately. For cryptocurrency, provide wallet addresses to the FBI. For wire transfers, request a recall through the sending institution.
Frequently Asked Questions
Can AI really make a video call look like someone else?
Yes. Real-time deepfake software can replace a person's face during a live video call. The technology is imperfect and has detectable weaknesses, but brief calls in poor conditions can be convincing. Extended calls with the behavioral tests described in this guide make a deepfake far more likely to reveal itself.
How common are AI-generated romance scams?
The FTC and FBI have both noted increasing reports involving suspected AI-generated content in romance scams. The exact prevalence is hard to measure because many victims do not know AI was involved. As AI tools become cheaper and easier to use, the proportion is expected to grow.
If they pass a video call test, does that guarantee they are real?
No single test provides absolute certainty. However, an extended video call (10+ minutes) with multiple behavioral challenges (profile views, hand over face, holding up specified objects), combined with contextual checks such as a verifiable social media history and mutual connections, significantly reduces the likelihood of AI impersonation.
Can I use AI detection tools to verify their photos or videos?
AI detection tools exist but are not dependable enough for personal use; they produce both false positives and false negatives. For personal safety decisions, the behavioral verification methods described in this guide are a sounder basis than current automated detectors.
What if I think I was scammed using AI but I am not sure?
File a report with the FBI IC3 regardless. Include as much detail as possible, including any suspicions about AI-generated content. Law enforcement is actively building expertise in AI-enabled fraud and your report contributes to their understanding of the threat.
If any of this article resonates with your situation, take the free Are They Real? Scam Risk Test now. The quiz evaluates your relationship against documented scam patterns, including AI-enabled deception. It is private, takes five minutes, and nothing is stored or shared.