Deepfake technology can now look and talk like you!

Seeing is no longer believing now that deepfake technology can look and talk like you!

I want to conclude Fraud Prevention Awareness Month with a more sobering story about how we are entering a world where seeing is no longer believing. Since generative AI entered the mainstream with the launch of ChatGPT in November 2022, most of us are aware of the seismic shift in what AI can do. What many may not be aware of is how rapidly other generative AI technology is advancing, and what that means for all of us when it comes to combatting identity fraud. The technology I am talking about is deepfakes.

A deepfake is synthetic media in which a person’s face or voice is manipulated using generative artificial intelligence (AI). It involves training algorithms on large sets of data to recognize patterns and then generate fake video, fake audio, and even synthetic IDs so realistic that they trick you into believing they are real. Until very recently, deepfake technology was a fun party trick rather than a significant fraud threat. However, with further technological advancements and easy access to sites that let anyone create a deepfake, this technology is quickly emerging as a real identity fraud threat in 2024.

One recent example of this emerging threat was reported on February 4, 2024, when a finance worker at a multinational firm was tricked during a video meeting in which he believed he was speaking with his colleagues and, on their instructions, transferred over $25 million to what turned out to be fraudsters. The global media coverage of this fraud noted that the finance worker initially had doubts about the transactions, but once he joined a video call with people who looked and sounded exactly like his colleagues, he believed the transaction instructions were legitimate.

So how is this type of deepfake attack possible?  To start, face swapping, the deepfake technique of digitally replacing one person’s face with another’s using nothing more than a photograph or video, is now easily accessible to anyone. There are even face-swapping apps such as ZAO, DeepFaceLab, FakeApp and many more. It is now easy for a fraudster to go onto your LinkedIn profile, grab your photo, and then digitally inject your face onto a synthetically generated ID or into a deepfake video.
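For the technically curious, here is a minimal, purely illustrative sketch in Python using the open-source OpenCV library. It simply finds the largest face in two photos and pastes one face region over the other, which is a far cry from the generative models behind real deepfakes, but it shows how little code is needed to start manipulating someone’s likeness. The file names are hypothetical placeholders.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def largest_face(image):
    """Return the bounding box (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])

# Hypothetical input files: a profile photo and a frame to paste the face into.
source = cv2.imread("profile_photo.jpg")   # photo of the face being copied
target = cv2.imread("target_frame.jpg")    # image the face is pasted into

src = largest_face(source)
dst = largest_face(target)
if src is not None and dst is not None:
    sx, sy, sw, sh = src
    dx, dy, dw, dh = dst
    # Resize the copied face to fit the target face region and paste it in place.
    patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    target[dy:dy + dh, dx:dx + dw] = patch
    cv2.imwrite("swapped.jpg", target)
```

Real face-swapping tools go much further, blending lighting, skin tone and motion so the result is convincing even in live video, which is exactly why these fakes are so hard to spot by eye.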

According to iProov, a leader in facial biometric technology, the face swap is now firmly established as the deepfake of choice among persistent threat actors. In its Threat Intelligence Report released in early 2024, iProov’s research team observed a 704% increase in face swap injection attacks between the first and second halves of 2023.

The other threat concern identified by iProov is that face swapping is now scalable for criminals. They can generate deepfakes quickly using off-the-shelf face-swapping technology and feed the synthetic images they create into a virtual camera, making it effectively impossible for a human being to visually catch these fakes on a video call. What’s even more concerning is that criminals now use cyber tools such as emulators to conceal the fact that a virtual camera is being used, making these fakes challenging for even biometric technology to detect. This is one of the reasons why we integrated iProov technology into our Identity Verification Technology, as their patented technology can detect deepfake digital injections even if the criminal uses an emulator.

Audio spoofing is another deepfake technology that has improved dramatically and is easy for criminals to access. According to a 2023 global McAfee survey, one person in ten reported having been targeted by an AI voice-cloning scam, and 77% of those targets reported losing money to these scams. In response to this explosion of audio spoofing, the United States Federal Communications Commission banned the use of AI-generated voices in robocalls in February 2024.

When you combine face-swapping and audio-spoofing technology, you can see why IBM has called 2024 the year of deception. The key to combatting this new threat is to adopt identity verification technology like ours. At Treefort we pride ourselves on having the most robust technology available, including iProov, to combat identity fraud and to ensure compliance with FINTRAC and Law Society client ID rules across Canada. We get that seeing is no longer believing, and we aim to deliver real trust with multi-factor identity verification technology that ensures your client is not a deepfake impersonation, whether you meet them online or in person.

Kim Krushell
Co-Founder