Deepfake Video Call Scams Global Firm Out Of $26Mn

Deepfake technology was used to scam firms in Hong Kong || Photo: Collected

Scammers tricked a multinational firm out of some $26 million by impersonating senior executives using deepfake technology, Hong Kong police said Sunday, in one of the first cases of its kind in the city.

Law enforcement agencies are scrambling to keep up with generative artificial intelligence, which experts say holds potential for disinformation and misuse -- such as deepfake images showing people mouthing things they never said.

A company employee in the Chinese finance hub received "video conference calls from someone posing as senior officers of the company requesting to transfer money to designated bank accounts", police told AFP.

Police received a report of the incident on January 29, at which point some HK$200 million ($26 million) had already been lost via 15 transfers.

"Investigations are still ongoing and no arrest has been made so far," police said, without disclosing the company's name.

The victim was working in the finance department, and the scammers pretended to be the firm's UK-based chief financial officer, according to Hong Kong media reports.

Acting Senior Superintendent Baron Chan said the video conference call involved multiple participants, but all except the victim were impersonated.

"Scammers found publicly available video and audio of the impersonation targets via YouTube, then used deepfake technology to emulate their voices... to lure the victim to follow their instructions," Chan told reporters.

The deepfake videos were pre-recorded and did not involve dialogue or interaction with the victim, he added.

What to know about how lawmakers are addressing deepfakes like the ones that victimized Taylor Swift

Even before pornographic and violent deepfake images of Taylor Swift began widely circulating in the past few days, state lawmakers across the U.S. had been searching for ways to quash such nonconsensual images of both adults and children.

But in this Taylor-centric era, the problem has been getting far more attention since she was targeted through deepfakes, computer-generated images that use artificial intelligence to appear real.

Here are things to know about what states have done and what they are considering.


Artificial intelligence hit the mainstream last year like never before, enabling people to create ever-more realistic deepfakes. Now they're appearing online more often, in several forms.

There's pornography — taking advantage of celebrities like Swift to create fake compromising images.

There's music — a song that sounded like Drake and The Weeknd performing together got millions of clicks on streaming services, but it was not those artists. The song was removed from the platforms.

And there are political dirty tricks in this election year — just before January's presidential primary, some New Hampshire voters reported receiving robocalls purporting to be from President Joe Biden telling them not to bother casting ballots. The state attorney general's office is investigating.

But a more common circumstance is porn using the likenesses of non-famous people, including minors. -- Agencies
