A Video Call Cost This Company $25 Million. Nobody Got Hacked.

Imagine joining a Teams call. Your CFO is on it. So is a senior colleague from the London office and a couple of other familiar faces. The CFO explains there's a sensitive acquisition in motion and needs you to push through a few transfers. It's the kind of ask that raises an eyebrow, but the faces on the screen are the ones you see every week, the voices are right, the banter is normal. You do the transfers.

Then, a week later, you realise every single person on that call was fake. Your actual CFO was in London, asleep. You have just wired HK$200 million to a criminal syndicate.

This is not a hypothetical. This happened to the Hong Kong office of British engineering firm Arup in early 2024, the same firm behind the Sydney Opera House and the Beijing Bird's Nest. A finance employee made 15 transfers across five Hong Kong bank accounts, totalling about US$25.6 million, after a video meeting with people who looked and sounded exactly like his senior leadership team. Everyone on the call except him was AI-generated.

How it went down

The employee initially did the right thing. He got an email from the "CFO" mentioning a confidential transaction, smelled a rat, and suspected phishing. Ten out of ten. Gold star.

Then came the video call.

Seeing a room full of familiar faces, hearing familiar voices, watching them chat to each other the way colleagues do, his scepticism evaporated. He stopped asking questions. He did the transfers. He only twigged when he followed up with Arup's UK head office later and discovered nobody there knew anything about it.

The kicker: none of Arup's systems were compromised. No malware. No stolen credentials. No firewall breach. Arup's CIO Rob Greig called it "technology-enhanced social engineering," which is a polite way of saying the attackers didn't need to break in: a fake CFO simply walked the money out the front door, with the employee holding the door open.

This is not a "big company" problem

You might be reading this and thinking, "Well, I run an SME in Bangkok, I don't have $25 million lying around, this is a problem for the FTSE 100."

Nice try.

The Arup case is the poster child, but deepfake fraud is now a mass-market product. Every face and voice clip of you, your CEO, or your CFO that lives on LinkedIn, YouTube, a conference panel, or a company podcast is training data for someone. Voice cloning now needs 20 to 30 seconds of audio. Your "About Us" video is enough.

The money is following the capability. Deloitte projects AI-enabled fraud losses in the US will reach $40 billion by 2027, up from $12.3 billion in 2023. A 32% compound annual growth rate. Fraud doesn't grow like that unless the economics of the attack are obscenely good, and with generative AI, they are.

Why this should worry you, specifically, in Bangkok

APAC is the testbed. A recent Sumsub report found deepfake incidents surged 2,100% in the Maldives and 408% in Malaysia, with Thailand ranking in the top six APAC markets for year-on-year deepfake growth. This is happening around us.

The scam compounds in Cambodia, Myanmar, and Laos, which the UN has been documenting in increasingly grim detail, are industrialising this stuff. They're not working alone out of a basement in Odessa. They're running call centres with KPIs.

SMEs in the region are a very attractive target because:

  1. You don't have a dedicated CISO, let alone a fraud team.

  2. Your finance function is usually one or two people who know the founders personally, which means "the boss asked me to do it" carries weight.

  3. Your approval workflows are informal, often a LINE message or a quick call.

Informal is exactly what the attackers want.

Why I'm writing this now

Money20/20 Asia kicks off in Bangkok this week, and the whole industry is about to spend three days talking about the future of money. Cross-border payments, AI, stablecoins, tokenisation, all of it faster and shinier than last year.

Here's the thing about faster, shinier rails: fraud moves on them too, and it moves at the same speed. Money20/20 Asia's own research found 63.5% of fintech leaders name fraud prevention as their top operational priority for 2026. The industry knows. The question is whether you do.

What to actually do (three things, no fluff)

1. Verify every transfer above a threshold through a separate channel. If the request came in by email, you confirm by phone. If it came in on a video call, you hang up and call a mobile number you already have saved. Doesn't matter if the CEO is crying and begging on the video call. If the amount crosses your line, the process is: stop, switch channels, verbally confirm. A deepfake can't join a phone call you initiate to a number the attacker doesn't control.

2. Kill single-approver payments. Two humans, two devices, two channels. This is boring, unglamorous, and the single biggest control you can deploy against social-engineered fraud.

3. Treat "urgent and confidential" as the red flag it is. Real acquisitions, real tax issues, real regulators, none of them need you to move money in 20 minutes without telling anyone. The urgency-plus-secrecy combo is the fingerprint. Train your finance team to recognise it, and give them explicit permission to delay and verify without fearing they'll get a bollocking from management.
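If you want to make the three rules above concrete, they fit in a few dozen lines of policy code. This is a minimal sketch, not a real product or API: the threshold, field names, and red-flag keywords are all hypothetical placeholders you would tune to your own business.

```python
from dataclasses import dataclass, field

# Hypothetical threshold above which out-of-band confirmation is mandatory.
OUT_OF_BAND_THRESHOLD = 10_000  # in your base currency; set your own line

# Illustrative wording patterns for the urgency-plus-secrecy fingerprint.
RED_FLAGS = ("urgent", "confidential", "do not tell", "secret")


@dataclass
class TransferRequest:
    amount: float
    channel: str                 # channel the request arrived on, e.g. "email", "video_call"
    description: str
    approvers: set = field(default_factory=set)           # distinct humans who approved
    confirmed_channels: set = field(default_factory=set)  # channels used to confirm


def approval_errors(req: TransferRequest) -> list:
    """Return the policy rules this transfer still fails (empty list = clear to pay)."""
    errors = []
    # Rule 1: above the threshold, confirmation must happen on a channel
    # OTHER than the one the request arrived on (hang up, call a saved number).
    if req.amount >= OUT_OF_BAND_THRESHOLD:
        if not (req.confirmed_channels - {req.channel}):
            errors.append("needs confirmation on a separate channel")
    # Rule 2: no single-approver payments. Two humans, always.
    if len(req.approvers) < 2:
        errors.append("needs a second human approver")
    # Rule 3: urgency-plus-secrecy wording forces a delay-and-verify step.
    text = req.description.lower()
    if any(flag in text for flag in RED_FLAGS):
        errors.append("red-flag wording: delay and verify")
    return errors
```

Run against the Arup scenario, a request arriving on a video call, described as "urgent and confidential", with no second approver, fails all three checks; a routine invoice under the threshold with two approvers passes. The point is not the code, it's that each rule is mechanical enough to enforce without anyone having to out-stare a deepfake.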

That's it. No seven-figure tech stack required. The Arup employee was not stupid. He was targeted by a well-resourced crew that knew exactly how to bypass a human brain. Your defence is process, not vigilance, because vigilance is what fails when the deepfake is good enough.

See you at QSNCC.

***

If you'd like a free 30-minute review of your payment approval workflow and where a deepfake could walk through it, book a risk assessment. We'll be around the Money20/20 floor all three days.
