October 09, 2024 5m read

Cato CTRL Threat Research: ProKYC Selling Deepfake Tool for Account Fraud Attacks 

Etay Maor


Executive Summary 

Cato CTRL security researchers have recently discovered a threat actor, ProKYC, selling a deepfake tool in the cybercriminal underground that enables threat actors to beat two-factor authentication (2FA) and conduct account fraud attacks.

The tool being sold is customized to target cryptocurrency exchanges, specifically ones that authenticate new users with a government-issued document and a facial recognition check performed through the computer's camera. It's a tool that has received positive feedback from cybercriminals.

By overcoming these authentication challenges, threat actors can create new accounts on these exchanges. This is known as New Account Fraud (NAF). Creating verified but synthetic accounts enables money laundering operations, mule accounts, and other forms of fraud. According to AARP, new account fraud accounted for more than $5.3 billion in losses in 2023 (up from $3.9 billion in 2022). 

ProKYC represents a new level of sophistication in account fraud attacks, especially those targeting financial institutions. KYC stands for Know Your Customer. This blog includes a demo of ProKYC's deepfake tool and outlines potential techniques for detecting, mitigating, and preventing account fraud attacks.

Technical Overview 

With multi-factor authentication (MFA), the focus is usually on one-time passwords (OTPs). However, MFA draws on three factor categories: something you know (like a password), something you have (like a smart card), and something you are (like a fingerprint). We encounter 2FA almost daily, and not necessarily in the form of OTPs.

One such example is when you withdraw money from an ATM. You are performing 2FA: you insert your ATM card (something you have) and you enter a PIN code (something you know). The same goes for when you pass through border inspection: you provide your passport (something you have) and the officer looks at you and verifies it is you in the picture (something you are). The list goes on.  
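The distinction between "two proofs" and "two factor categories" can be made concrete with a minimal sketch (hypothetical code, for illustration only; the factor names and the `is_multi_factor` helper are assumptions, not any real authentication API):

```python
# Hypothetical sketch: genuine MFA requires proofs from at least two
# *distinct* factor categories, not merely two proofs of the same kind.

FACTOR_CATEGORIES = {
    "password": "knowledge",     # something you know
    "pin": "knowledge",
    "atm_card": "possession",    # something you have
    "passport": "possession",
    "fingerprint": "inherence",  # something you are
    "face": "inherence",
}

def is_multi_factor(presented):
    """True if the presented proofs span two or more factor categories."""
    categories = {FACTOR_CATEGORIES[p] for p in presented}
    return len(categories) >= 2

# ATM withdrawal: card (have) + PIN (know) -> genuine 2FA
print(is_multi_factor(["atm_card", "pin"]))   # True
# Two knowledge-based proofs are still only one factor
print(is_multi_factor(["password", "pin"]))   # False
```

This is why a border check (passport plus the officer's face match) counts as 2FA even though no OTP is involved: the two proofs come from different categories.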

Cybercriminals have been attempting to beat 2FA for decades using forged documents and credentials. However, AI-powered tools now take these efforts to a new level.    

In the past, fraudsters purchased forged documents on the dark web. These were usually scanned documents, and their quality varied depending on the seller.

Figure 1. Dark web shop selling counterfeit documents

Figure 2. Dark web shop selling counterfeit documents 

While some of these products are still being offered, they may not pass today's authentication tests. Nor do they offer any solution when the authenticator requires a facial recognition test. In comes ProKYC's deepfake tool.

ProKYC's deepfake tool is sold in the cybercriminal underground as a means to overcome these obstacles. It uses deepfake technologies both to create fake documents and to create videos of the fake personas in those documents that can pass a facial recognition challenge. In a video provided by ProKYC, potential buyers can see a demonstration of how the tool works against ByBit (a cryptocurrency exchange).

  1. The threat actor creates fake credentials and a fake image of a person. Creating AI-generated faces is now a commodity, with sites like thispersondoesnotexist.com showcasing this capability.
  2. The threat actor applies these credentials to what looks like a government-issued document; in this case, an Australian passport. The outcome is a high-quality forgery created in seconds (as opposed to ordering forged documents from cybercriminals on the dark web). The tool pays attention to small touches, such as having the "official" stamps appear over the picture, just like a real Australian passport.
  3. The threat actor creates a video using the fake picture. The video is designed to follow the instructions of facial recognition systems: moving the person's head left and right. If you pay close attention, you will notice some imperfections (such as the eye) and a small glitch at the end of the video. These, however, will likely be ignored by facial recognition systems, which cannot be too strict without generating false-positive alerts (due to a slow internet connection, for example).
  4. The threat actor initiates an account fraud attack. He connects to a cryptocurrency exchange (ByBit, in this example) and uploads the forged Australian passport. When asked to open his computer's camera to perform facial recognition, the tool instead feeds the pre-made video to the exchange as if it were the camera's input.
  5. After waiting a couple of minutes for both the passport and the video to be analyzed, the attacker is notified that the account has been verified.

Security Best Practices 

On a technological level, detecting account fraud attacks is tricky. As noted above, overly restrictive biometric authentication systems can generate many false-positive alerts. On the other hand, lax controls can result in fraud.

There are different telltale signs that a document, picture, or video is fake. One example is picture quality: a picture, and especially a video, of unusually high quality (like the one in the demo) is indicative of a digitally forged file. Another example is glitches in facial features and inconsistencies in eye and lip movement during biometric authentication. Such cases should be treated as suspicious and manually verified by a human.
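One of those telltale signs, the abrupt glitch near the end of the demo video, lends itself to a simple automated check. Below is a hedged sketch (a hypothetical heuristic, not a production liveness-detection system; the `threshold` value and frame representation are assumptions) that flags frames whose jump from the previous frame is suspiciously large, so they can be routed to manual human review:

```python
# Hypothetical heuristic: abrupt frame-to-frame jumps in a verification
# video (like the glitch at the end of the ProKYC demo) can be flagged
# for manual review. Frames are modeled as flat lists of pixel values.

def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_glitch_frames(frames, threshold=50.0):
    """Return indices of frames whose change from the previous frame
    exceeds `threshold` (an assumed tuning parameter)."""
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]

# Synthetic video: smooth drift, then one corrupted frame near the end.
frames = [[10 + i] * 8 for i in range(6)]   # gradual change, diff = 1
frames.append([200] * 8)                    # sudden glitch frame
print(flag_glitch_frames(frames))           # [6]
```

In practice a real pipeline would operate on decoded video frames and combine several signals (compression artifacts, eye and lip consistency, liveness cues) rather than rely on any single threshold.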

Conclusion 

AI is heavily hyped in the media, but threat actors have been perfecting their use of deepfake technologies for quite some time.  

What can organizations do to defend themselves against AI threats? Cato CTRL recommends collecting threat intelligence—be it from human intelligence (HUMINT), open-source intelligence (OSINT), or other means—and staying up to date on the latest cybercrime trends. 

Threat actors will continue to evolve and find ways to employ new deepfake technologies and software to their advantage. Cato CTRL will continue to share these findings to help organizations be aware of the latest threats.  

Etay Maor

Etay Maor is the Chief Security Strategist at Cato Networks, a founding member of Cato CTRL, and an industry-recognized cybersecurity researcher. Prior to joining Cato in 2021, Etay was the Chief Security Officer for IntSights, where he led strategic cybersecurity research and security services. Etay has also held senior security positions at IBM, where he created and led breach response training and security research, and RSA Security’s Cyber Threats Research Labs, where he managed malware research and intelligence teams. Etay is an adjunct professor at Boston College and is part of the Call for Paper (CFP) committees for the RSA Conference and QuBits Conference. He holds a BA in Computer Science and an MA in Counter-Terrorism and Cyber-Terrorism.
