Generative AI could make KYC futile

KYC, or “Know Your Customer,” is a process intended to help financial institutions, fintech startups, and banks verify the identity of their customers. It is not uncommon for KYC authentication to involve “identification images” or verified selfies that are used to confirm that a person is who they say they are. Wise, Revolut, and cryptocurrency platforms Gemini and LiteBit are among those that rely on ID images for added security.

But generative AI could cast doubt on these controls.

Viral posts on social media show how generative AI tools can be used to produce ID images convincing enough to pass a KYC test. There is no evidence yet that generative AI has been used to fool a real KYC system. But the ease with which relatively convincing deepfaked identification images can be produced is cause for alarm.

Cheating KYC

In a typical KYC ID image authentication, a customer uploads a photograph of themselves holding an identification document (a passport or driver's license, for example) that only they can possess. A person, or an algorithm, cross-references the image with archived documents and selfies to (hopefully) thwart spoofing attempts.
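
When the cross-referencing is automated, it usually comes down to face matching between the newly submitted photo and the images already on file. Below is a minimal sketch of that idea, assuming the open-source face_recognition library and an illustrative match threshold; it is not how any particular KYC vendor actually works.

```python
# Hypothetical sketch of automated KYC cross-referencing: compare the face in a
# newly submitted ID selfie against the selfie or document photo already on file.
# Library choice (face_recognition) and the tolerance value are illustrative.
import face_recognition

def selfie_matches_records(submitted_path: str, archived_path: str,
                           tolerance: float = 0.6) -> bool:
    """Return True if the face in the submitted image matches the archived one."""
    submitted = face_recognition.load_image_file(submitted_path)
    archived = face_recognition.load_image_file(archived_path)

    submitted_encodings = face_recognition.face_encodings(submitted)
    archived_encodings = face_recognition.face_encodings(archived)
    if not submitted_encodings or not archived_encodings:
        return False  # no detectable face in one of the images

    # Lower distance means more similar faces; tolerance is the decision threshold.
    distance = face_recognition.face_distance(archived_encodings, submitted_encodings[0])[0]
    return distance <= tolerance

if __name__ == "__main__":
    print(selfie_matches_records("submitted_selfie.jpg", "archived_selfie.jpg"))
```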

Image ID authentication has never been foolproof; scammers have been selling forged IDs and selfies for years. But generative AI opens up a range of new possibilities.

Online tutorials show how Stable Diffusion, a free and open source image generator, can be used to create synthetic renderings of a person against any desired background (a living room, for example). With a little trial and error, an attacker can tweak the renderings so that the target appears to be holding an identification document. At that point, the attacker can use any image editor to insert a real or forged document into the faked person's hands.
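
For a sense of how short the basic generation step is, here is a minimal text-to-image sketch using the Hugging Face diffusers library. The checkpoint name and prompt are placeholders; the tutorials described above layer fine-tuning on photos of the target and inpainting extensions on top of this.

```python
# Minimal text-to-image generation with Stable Diffusion via the diffusers library.
# Model ID and prompt are illustrative placeholders, not a recipe for any specific workflow.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is assumed; CPU works but is much slower

prompt = "a photo of a person sitting in a living room, natural lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("synthetic_selfie.png")
```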

AI will rapidly accelerate the widespread use of private key cryptography and decentralized identification. Check out this Reddit “verification post” and the identification made with Stable Diffusion. When we can no longer trust our eyes to determine if content is genuine, we will turn to applied cryptography. pic.twitter.com/6IjybWRhRa – Justin Leroux (@0xMidnight) January 5, 2024

Now, to get the best results with Stable Diffusion you need to install additional tools and extensions and obtain about a dozen images of the target. A Reddit user with the username _harsh_, who posted a workflow for creating fake ID selfies, said it takes 1-2 days to create a convincing image.

But the barrier to entry is certainly lower than it used to be. Creating identification images with realistic lighting, shadows and environments used to require somewhat advanced knowledge of photo editing software. Now, that's not necessarily the case.

Feeding spoofed KYC images into an app is even easier than creating them. Android apps running in a desktop emulator like BlueStacks can be tricked into accepting uploaded images instead of a live camera feed, while web apps can be fooled by software that turns any image or video source into a virtual webcam.
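
As an illustration of the mechanism, virtual-camera software simply reads frames from an arbitrary source and republishes them as a webcam device that other applications can select. Below is a bare-bones sketch using the open-source pyvirtualcam package, which assumes a virtual camera backend such as OBS is installed; the file name is a placeholder.

```python
# Sketch of how generic virtual-camera software works: frames from an arbitrary
# image source are republished as a webcam device that other apps can select.
# Assumes the pyvirtualcam package and a virtual camera backend (e.g. OBS).
import numpy as np
from PIL import Image
import pyvirtualcam

frame = np.array(Image.open("any_image.png").convert("RGB").resize((1280, 720)))

with pyvirtualcam.Camera(width=1280, height=720, fps=20) as cam:
    print(f"Virtual camera started: {cam.device}")
    while True:
        cam.send(frame)                # other apps see this frame as "live" video
        cam.sleep_until_next_frame()   # pace output to the declared frame rate
```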

A growing threat

Some apps and platforms implement “liveness” checks as additional security to verify identity. They typically involve a user recording a short video of themselves turning their head, blinking, or otherwise demonstrating that they are actually a real person.
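
As a toy illustration of what the automated side of such a check might look for, the sketch below counts blinks in a submitted clip using the eye aspect ratio across video frames. The libraries (opencv-python, face_recognition) and the threshold are assumptions chosen for illustration; production liveness systems are considerably more sophisticated.

```python
# Toy "liveness" signal: detect blinking across video frames via the eye aspect
# ratio (EAR). The EAR threshold and library choices are illustrative only.
import cv2
import numpy as np
import face_recognition

def eye_aspect_ratio(eye_points):
    p = np.array(eye_points, dtype=float)  # 6 (x, y) landmarks per eye
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def count_blinks(video_path: str, ear_threshold: float = 0.2) -> int:
    """Count eye-closed-then-open transitions as a crude sign the subject is live."""
    capture = cv2.VideoCapture(video_path)
    blinks, eyes_closed = 0, False
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        landmarks = face_recognition.face_landmarks(rgb)
        if not landmarks:
            continue
        ear = (eye_aspect_ratio(landmarks[0]["left_eye"]) +
               eye_aspect_ratio(landmarks[0]["right_eye"])) / 2.0
        if ear < ear_threshold:
            eyes_closed = True
        elif eyes_closed:  # eyes re-opened after being closed: count one blink
            blinks += 1
            eyes_closed = False
    capture.release()
    return blinks

if __name__ == "__main__":
    print(count_blinks("liveness_clip.mp4"))
```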

But liveness checks can also be bypassed using generative AI.

NEWS: Our latest research is now available! We found that 10 of the most popular biometric KYC providers are highly vulnerable to real-time deepfake attacks. And so can your bank, insurance or health providers. Full report at https://t.co/vryGJ7na0i https://t.co/VVaSZrCZRn — Sensity (@sensityai) May 18, 2022

Early last year, Jimmy Su, chief security officer at cryptocurrency exchange Binance, told Cointelegraph that current deepfake tools are sufficient to pass liveness checks, even those that require users to perform actions such as turning their heads in real time.

The bottom line is that KYC, which was already hit-or-miss, could soon become effectively useless as a security measure. Su, for his part, doesn't believe deepfaked images and videos have reached the point where they can fool human reviewers. But it may only be a matter of time before that changes.
