Representative image of deepfakes.

Deepfakes can effectively fool identity validation applications

Global companies rely on in-app transactional processes to validate identity. A recent study revealed that many of these processes do not meet minimum digital-security standards.

A recent study by researchers in the United States and China revealed that several facial identification and authentication systems are vulnerable to deepfakes. As biometric processes improve, new techniques emerge to impersonate a person's identity.

But what are deepfakes?

Deepfakes are video files manipulated with artificial-intelligence software. With this technique, even a person's voice can be imitated convincingly. One of the most common methods is the use of synthetic masks to simulate real faces.

How was the deepfakes investigation conducted?

A custom framework was run against facial liveness verification (FLV) systems, the kind most commonly used by authentication service providers. Companies purchase these services to validate users' identities in digital environments. The study reveals that some of them are not prepared to counter deepfake attacks.

In theory, facial liveness verification is designed to make transactional processes safe. For example, it must counter attacks using static images, masks, pre-recorded videos, and other forms of impersonation.
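To illustrate the idea, here is a minimal sketch of how a liveness check might combine several signals to reject flat images and replayed videos. All class names, signal names, and thresholds are illustrative assumptions for this article, not any vendor's real API.

```python
# Toy sketch: combining liveness signals to reject photos, screens, and replays.
# Signal names and thresholds are hypothetical, chosen only for illustration.

from dataclasses import dataclass


@dataclass
class LivenessSignals:
    blink_detected: bool    # an eye blink was observed during capture
    depth_variation: float  # variance of 3D depth across the face (0 = flat)
    challenge_passed: bool  # e.g. the user turned their head on request


def is_live(s: LivenessSignals, min_depth: float = 0.15) -> bool:
    """Accept only captures that show depth, motion, and a challenge response."""
    if not s.challenge_passed:
        return False            # pre-recorded video cannot answer a random prompt
    if s.depth_variation < min_depth:
        return False            # a printed photo or screen replay is nearly flat
    return s.blink_detected     # require at least one involuntary human trait


# A printed photo: flat, no blink, no challenge response -> rejected
print(is_live(LivenessSignals(False, 0.01, False)))  # False
# A physically present user: blinks, has depth, completes the challenge -> accepted
print(is_live(LivenessSignals(True, 0.42, True)))    # True
```

The point of the sketch is that no single signal is enough: a deepfake may pass one check (say, blinking) while failing another (3D depth), which is why layered defenses matter.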

When companies implement generic biometric services, deepfake attacks are more likely to succeed. By contrast, building a secure transactional process on a purpose-specific architecture raises the level of protection for the end customer. The study notes that many companies still run biometric technologies that are obsolete for today's market needs.

“Without access to the technical details of the target FLV suppliers, we speculate that such variations are attributed to the defense measures implemented by different suppliers. For example, certain providers may implement defenses against specific deepfake attacks.”


One of the most surprising findings of the study is that most identity-validation APIs have no dedicated defense against deepfakes.

Can a simple selfie provide security?

In general terms, the answer is no. Transactional processes based on 2D images are not safe, since most people's photos are already on the Internet. Other technologies, however, make facial recognition a solid proof of liveness: they build a 3D FaceMap, which is not publicly available online.

At Unicus, we work with 3D liveness validation. This guarantees that the user is physically present and cannot be impersonated with photos, masks, or known deepfakes. The artificial intelligence we use checks simultaneous human traits that deepfakes cannot recreate.

If your company needs to implement biometric processes, you can rely on our expertise and on the best available technology.