What are deepfakes and how do they threaten my business?
Deepfake fraud caused over $3 billion in US losses in 2025. Learn how AI-generated video and voice attacks target businesses and what you can do about it.
Key Takeaways
- Deepfake-enabled vishing attacks surged 1,600% in early 2025, and CEO fraud now targets 400 companies per day
- The average deepfake fraud incident costs approximately $500,000, with some exceeding $25 million
- 80% of companies still lack clear plans for responding to deepfake attacks
- Human detection of high-quality deepfake video is only 24.5% accurate - we can no longer trust what we see
- Multi-channel verification, code words for financial transactions, and employee awareness training are your best defenses
In February 2024, a finance worker at the engineering firm Arup was tricked into wiring $25 million after attending a video conference call with what appeared to be the company’s CFO and several colleagues. Every person on the call was a deepfake - AI-generated video of real people created from publicly available footage.
This isn’t science fiction anymore. Deepfake fraud is a real, growing, and poorly understood threat to businesses of all sizes.
The Numbers Are Alarming
Deepfake-driven fraud has exploded:
- Over $3 billion in deepfake-related losses in the US between January and September 2025
- Deepfake-enabled voice phishing (vishing) attacks surged 1,600% in early 2025
- CEO fraud now targets at least 400 companies per day using deepfakes
- More than 10% of companies have dealt with attempted or successful deepfake fraud
- Sophisticated fraud methods including deepfakes rose 180% globally in 2025
- The average deepfake fraud incident costs approximately $500,000
Perhaps most concerning: 80% of companies still lack clear plans for responding to deepfake attacks. And human detection of high-quality deepfake video is only 24.5% accurate. We’re fighting a threat our own eyes and ears cannot reliably detect.
What Are Deepfakes?
Deepfakes are AI-generated or AI-manipulated audio, video, or images that convincingly impersonate real people. The technology can:
- Clone voices from just a few seconds of sample audio
- Generate video of someone saying or doing things they never did
- Swap faces in real-time during video calls
- Create entirely synthetic people that don’t exist
- Manipulate documents with synthetic signatures and photos
The term “deepfake” comes from “deep learning” + “fake.” The technology has advanced so rapidly that what required expensive equipment and expertise five years ago can now be done with consumer-grade software in minutes.
Deepfake-as-a-Service
In 2025, “deepfake-as-a-service” platforms became widely available, making the technology accessible to cybercriminals with no technical expertise. For a few hundred dollars, an attacker can generate convincing audio or video impersonations.
How Deepfakes Target Businesses
1. CEO/CFO Impersonation (Voice)
The most common business deepfake attack:
- Attacker records the CEO’s voice from earnings calls, conference presentations, YouTube videos, or social media
- AI clones the voice with high fidelity
- Attacker calls an employee in finance, using the CEO’s cloned voice
- “Hi Sarah, it’s David. I need you to wire $85,000 to this account for a deal we’re closing today. I’ll send the details by email. Keep this between us for now.”
The voice sounds exactly like the CEO. The request follows normal business patterns. The urgency and secrecy discourage verification.
2. Video Conference Fraud
Like the Arup case, attackers create deepfake video of multiple team members in a video call:
- The “CFO” explains a confidential transaction
- “Colleagues” appear to be on the call, nodding and commenting
- The target employee follows instructions to process a payment
- Nobody on the call was real
3. Vendor Impersonation
An attacker uses deepfake audio to call as a known vendor, requesting payment to updated bank details. The voice matches the vendor contact your team already knows.
4. Social Engineering Enhancement
Deepfakes amplify traditional social engineering:
- Voice deepfakes add credibility to phone-based pretexting
- Video deepfakes make fake video messages convincing
- Synthetic photos create fake employee profiles on LinkedIn
- Manipulated documents carry forged signatures and identity photos
5. Biometric Bypass
Deepfakes now account for 40% of all biometric fraud attempts. Face recognition and voice authentication systems are increasingly vulnerable to synthetic media designed to bypass them.
Why Detection Is So Difficult
The days of spotting deepfakes by looking for blurry edges or unnatural blinking are largely over. Modern deepfakes are:
- Generated in real-time during live video calls
- Trained on extensive datasets of the target’s face and voice
- Refined using adversarial techniques that specifically defeat detection tools
- Good enough to fool experts - human detection rates are below 25% for high-quality fakes
Detection tools exist, but they’re in an arms race with generation tools. And by the time a detection method is widely deployed, generation techniques have typically evolved to defeat it.
How to Protect Your Business
Since you can’t reliably detect deepfakes by looking at them, your defenses must focus on verification processes that don’t depend on visual or audio authenticity.
Establish Out-of-Band Verification
For any financial transaction, sensitive request, or unusual instruction:
- Always verify through a separate channel - if the request came by phone, verify by email or in person (and vice versa)
- Use known contact information - never use a phone number or email provided in the suspicious communication
- Call back on a verified number - even if the caller sounds exactly like your CEO
Create Code Words for Financial Transactions
Establish internal code words or challenge-response phrases for authorizing payments above a threshold:
- Changed regularly
- Shared only with authorized personnel
- Required for wire transfers, ACH changes, or vendor payment updates
- Never communicated digitally (shared in person only)
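The challenge-response idea above can also be applied in software. As a hedged illustration (the article recommends human code words shared in person; this sketch simply shows the same principle with a shared secret, and all names here are hypothetical):

```python
# Sketch: a programmatic analogue of a challenge-response check.
# An impostor who can clone a voice still cannot answer a challenge
# without the shared secret, which was exchanged out of band.
import hmac
import hashlib
import secrets

SHARED_SECRET = b"rotate-me-regularly"  # hypothetical; shared in person, rotated often

def make_challenge() -> bytes:
    """Verifier generates a fresh random challenge for each request."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """Legitimate party answers by keying the challenge with the secret."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison so the check itself doesn't leak the secret."""
    expected = respond(challenge, secret)
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))  # legitimate responder passes
print(verify(challenge, "a" * 64))            # impostor without the secret fails
```

Because each challenge is random and single-use, a recording of a previous exchange (the software equivalent of a cloned voice) is useless on the next call.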
Implement Multi-Person Authorization
- Require two people to authorize wire transfers above a set threshold
- Neither authorizer should rely solely on a phone call or video for approval
- Use a dedicated approval workflow (not email or chat)
Train Your Team
Employees should understand:
- Deepfakes exist and can convincingly impersonate colleagues and executives
- Any unusual request involving money, credentials, or sensitive data should be verified through a separate channel
- It’s always acceptable to say “Let me verify this through our standard process” - even to the CEO
- Urgency and secrecy are the hallmarks of social engineering, regardless of the technology used
Reduce Your Digital Footprint
The less audio and video of your executives available publicly, the harder it is to create convincing deepfakes:
- Review what executive media is publicly accessible
- Consider the deepfake risk when posting video content of leadership
- Limit access to internal video recordings and call recordings
Prepare an Incident Response Plan
Given that 80% of companies lack deepfake response plans:
- Include deepfake scenarios in your incident response planning
- Define escalation procedures for suspected deepfake attempts
- Train your finance team specifically on deepfake payment fraud
- Report attempts to the FBI’s IC3
The Bigger Picture
Deepfakes represent a fundamental shift in how we think about trust. For decades, "seeing is believing" and "hearing is believing" were reliable rules of thumb. That's no longer true.
The companies that will navigate this threat successfully are the ones that shift from trust-based verification (“it sounds like the boss”) to process-based verification (“regardless of who it sounds like, we follow our authorization procedure”).
Gartner predicts that by 2026, 30% of enterprises will no longer consider standalone identity verification solutions reliable. Multi-factor, multi-channel verification is becoming the only trustworthy approach.
The Bottom Line
Deepfakes are cheap to create, difficult to detect, and devastating in impact. But the defense isn’t a technology product - it’s process discipline.
If every financial transaction requires out-of-band verification, if code words are required for large payments, and if employees know that a voice or video alone is never sufficient authorization, deepfakes lose their power.
The threat is real and growing. The time to prepare is now - not after your company joins the growing list of deepfake fraud victims.
Want to assess your vulnerability to deepfake and social engineering attacks? Contact us for a security awareness assessment.
Have More Questions?
Our team is here to help. Whether you're evaluating IT services or have a specific question about your technology, we're happy to have a conversation.