The $25 Million Deepfake Video Call: Inside the Arup Scam
A finance worker gets an email from the CFO about a confidential transaction. It looks suspicious. He flags it internally. So the CFO invites him to a video conference to sort it out. On the call are the CFO and several senior executives he recognizes, and the request is restated face-to-face. He proceeds.
Every single person on that call, except him, was an AI-generated deepfake.
By the time he realized what had happened, he'd sent $25 million in 15 separate transfers to five Hong Kong bank accounts.
This is the Arup scam — the single most expensive publicly confirmed deepfake fraud to date, and the clearest example we have of what AI-enabled attacks look like at scale.
What happened at Arup
Arup is a British multinational engineering firm — the company behind the Sydney Opera House and Beijing's Bird's Nest stadium. It employs roughly 18,500 people across 34 offices globally.
In January 2024, a finance employee at Arup's Hong Kong office received an email purporting to be from the company's UK-based Chief Financial Officer. The message described a "confidential transaction" and requested the employee's help.
The employee was immediately suspicious. The tone was wrong. The secrecy felt off. He pushed back.
So the attackers escalated. They invited him to a video conference with the CFO and several other senior executives.
On the call, the employee saw and heard people who looked and sounded exactly like colleagues he recognized. They discussed the transaction naturally. They addressed his concerns. They restated the request with the full weight of senior leadership behind it.
He made 15 transfers totaling approximately $25.6 million to five different Hong Kong bank accounts.
He only realized he'd been scammed when he followed up with Arup's actual headquarters — and found that no one there had any knowledge of the meeting or the transactions.
How the attackers built the deepfakes
Hong Kong police determined that the attackers had harvested publicly available video and audio of Arup's CFO and other executives — from online conferences, interviews, press appearances, and company webinars. None of that content was confidential.
The scammers didn't hack Arup. They didn't steal credentials. They didn't compromise a single internal system. Arup confirmed that its IT environment remained fully intact — no malware, no intrusion, no data loss.
The attack bypassed all traditional cybersecurity because it didn't target the systems. It targeted the human instinct to believe what you can see and hear.
Why the employee wasn't reckless
He received a suspicious email and flagged it — exactly what security training teaches. When the request was escalated to a live video call with multiple senior leaders he recognized, speaking in real time, his skepticism was met with what looked like overwhelming counter-evidence.
For years, corporate culture has treated video calls as the strongest proof of identity short of showing up in person. The Arup attack weaponized that assumption. The employee wasn't reckless. He did the thing he was trained to do. The training was wrong.
The bigger picture
The FBI's 2025 Internet Crime Report showed victims lost $893 million to AI-related scams. Deloitte projects that generative AI could enable up to $40 billion in annual fraud losses in the U.S. by 2027.
Recent examples following the Arup playbook:
- WPP (2024) — CEO deepfaked in a fake Microsoft Teams call targeting a senior executive
- LastPass (2024) — Deepfake voicemail of the CEO used in an attack on an employee
- Ferrari (2024) — WhatsApp voice messages using a cloned voice of the CEO
What this means for small businesses
A scammer doesn't need $25 million targets. They can send a cloned-voice WhatsApp message to a bookkeeper at a 15-person company, impersonating the owner, and request a $30,000 transfer to "a new vendor." That attack is easier, cheaper, and happens daily.
How to protect your business
1. Out-of-band verification for every high-risk transaction
For any wire transfer over a defined threshold, require confirmation through a second, independent channel — one that wasn't used to request the transaction, and one the employee initiates using contact details already on file. The Arup case shows why both conditions matter: the video conference was technically a second channel, but the attackers controlled it.
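
To make the rule concrete, here's a minimal sketch of that gate in Python. Everything in it is hypothetical, not any particular payment system's API: the threshold, the field names, and the TransferRequest type are examples, and a real control would live inside your payment workflow or ERP.

```python
# Hypothetical sketch: release a large transfer only after out-of-band confirmation.
from dataclasses import dataclass

APPROVAL_THRESHOLD_USD = 10_000  # example value; set per your own risk policy

@dataclass
class TransferRequest:
    amount_usd: float
    request_channel: str                     # e.g. "email", "video_call", "whatsapp"
    confirmation_channel: str | None = None  # channel the confirmation arrived on
    employee_called_known_contact: bool = False  # employee dialed a number on file

def may_release(t: TransferRequest) -> bool:
    """Small transfers pass; large ones need an independent, employee-initiated check."""
    if t.amount_usd < APPROVAL_THRESHOLD_USD:
        return True
    return (
        t.confirmation_channel is not None
        and t.confirmation_channel != t.request_channel  # a different channel...
        and t.employee_called_known_contact              # ...that the employee initiated
    )

# The Arup pattern: an emailed request "confirmed" on an attacker-run video call.
arup_style = TransferRequest(
    amount_usd=25_600_000,
    request_channel="email",
    confirmation_channel="video_call",   # second channel, but attacker-controlled
    employee_called_known_contact=False,
)
assert may_release(arup_style) is False
```

The last condition is the one that would have stopped the Arup transfer: the deepfake meeting was a different channel, but the attackers chose it and ran it.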
2. Pre-agreed verification codes for senior leadership
A deepfake can clone a voice. It cannot know a word that's never been spoken outside a secured room.
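
If you ever formalize that check in software, say a finance tool that verifies the phrase before unlocking a payment step, store only a salted hash so the phrase never sits in a config file or log where it could be harvested. A minimal sketch using Python's standard library; the phrase and function names are illustrative, not a prescribed implementation:

```python
# Hypothetical sketch: verify a pre-agreed phrase without ever storing it in plaintext.
import hashlib
import hmac
import os

def enroll(phrase: str) -> tuple[bytes, bytes]:
    """Run once, offline, when leadership agrees on the phrase."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 600_000)
    return salt, digest

def verify(attempt: str, salt: bytes, stored_digest: bytes) -> bool:
    """Constant-time comparison; never log the attempt or the digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = enroll("correct horse battery staple")  # example phrase only
assert verify("correct horse battery staple", salt, digest)
assert not verify("urgent confidential transfer", salt, digest)
```
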
3. Build friction into the wire process
A mandatory 24-hour hold on new payee accounts with a compulsory second-human review would have caught Arup's attack before most of the money left.
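
Here's what that friction might look like encoded as a rule in a payment workflow. This is a hypothetical sketch, not a real banking API; the hold length, roles, and function names are assumptions for illustration:

```python
# Hypothetical sketch: hold new payees for 24 hours and require a second approver.
from datetime import datetime, timedelta, timezone

NEW_PAYEE_HOLD = timedelta(hours=24)

def can_release(payee_first_seen: datetime,
                initiated_by: str,
                approved_by: str | None,
                now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    if now - payee_first_seen < NEW_PAYEE_HOLD:
        return False  # still inside the mandatory hold window
    # Second-human rule: someone other than the initiator must sign off.
    return approved_by is not None and approved_by != initiated_by

brand_new = datetime.now(timezone.utc)  # payee account added today
assert can_release(brand_new, "finance_clerk", "controller") is False

established = datetime.now(timezone.utc) - timedelta(days=30)
assert can_release(established, "finance_clerk", "controller") is True
assert can_release(established, "finance_clerk", "finance_clerk") is False
```

Notice that neither rule depends on anyone spotting the deepfake. The hold and the second reviewer buy time and a fresh pair of eyes even when the impersonation is flawless.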
4. Train your team to expect deepfakes
Modern training needs to explicitly cover deepfake video calls, cloned voice calls, and the psychological pattern — urgency + secrecy + senior authority + unusual payment destination — that every version of this scam shares.
What to do if you suspect a deepfake call
- Ask them to turn their head all the way to the side — many deepfake models struggle with profile angles
- Ask them to hold up a hand and wave it across their face — real-time models frequently glitch when an object passes in front of the face
- Ask about something personal and recent
- Suggest switching to a phone call — if they resist, that's a strong signal
- End the call and call them back on a number you already have
Suspicious about a message or request claiming to be from your boss or a company you work with? Scan it with ScamSecurityCheck before you respond.
Courtney Delaney
Founder, ScamSecurityCheck
Courtney Delaney is the founder of ScamSecurityCheck, dedicated to helping people identify and avoid online scams through AI-powered tools and education.
