As technology advances, there is no slowdown in bad actors taking advantage of tools to commit crimes. Now, with more advanced AI entering the stage with vast capabilities, the barriers to entry for many identity verification bypass methods are being lowered. One doesn't need to run a business doing KYC to understand that the problem is only growing. Low-skilled fraudsters are leveling up using AI as their stepping stone, with no ethical controls to slow them down. As we head into the next few years, identity verification will not resemble anything close to today, especially as tools resembling AGI and ASI are released into the wild.
Companies like Microsoft are great at gating access to their AI tools. Still, sooner or later, various leaks of model weights and techniques are available through hacking or open-source, allowing essentially unrestricted access to similar solutions. The Deepsight project deepinsight/insightface: State-of-the-art 2D and 3D Face Analysis Project (github.com) has dramatically improved and unleashes tools that aid in swapping and facial analysis. Realtime solutions to detect fake identities exist today, but the swapping technology is steadily improving to the point where voice and face cloning are very difficult to differentiate. We will discuss other bypass techniques later that will only become more automated and easier to use.
One can imagine a much stepped-up version of GPT-4o, which takes in realtime voice, text, and video inputs and has coherent conversations. Add a variant of Sora in a few years with high-fidelity images that create video resembling any inputted face. Then upgrade it to the next level by 2030, and you will be looking at solutions that are better than any person and output life-like avatars that are more socially manipulative than an average human on any video call. An advanced agent is persuasive enough to read every emotion, word, hand motion, eye gaze, and tone. The AI agent could devise the perfect response to direct you to the desired goal. The agent could predict each interaction and already generate a response. Subtle inputs will be able to read the current verification level of an identity and then adjust it to increase a pass rate. In this scenario, the more advanced remote identity-proofing methods would be defeated. For example, a $1k certificate from a CA authority requires some steps to ensure it defeats deepfakes. "The CA or agent asks the Designated Individual to hold the ID in front of his or her face, to turn the document around in that position, and to wave his or her other hand in the space between the ID and the Designated Individual's face." (VMC_Requirements_latest.pdf (bimigroup.org) ) There are some very extensive web-based F2F checks in the list, but soon, even steps like this will not do anything to detect spoofs. As we detailed in an earlier post, the waving of hands and sudden movement is an easy way to detect certain spoofs, but eventually, the swap tools will get to the level where only new hardware prevention works. Cryptographically signed hardware might be a solution, as it is still possible to defeat infrared checks and depth ones from a real camera by bypassing Windows Hello Without Masks or Plastic Surgery (cyberark.com). There will probably be new physical devices similar to Microsoft's Secured-Core technology where everything from firmware and software to hardware can all be cryptographically verified, preventing tampering. In extremely stringent scenarios, a specific certified device will be required to verify an identity using multiple biometric signatures. Another prediction for future hardware bypasses is a device-in-the-middle attack with no detectable interface with the original hardware. With technology similar to a lens protector, an attacker could place it on top of an iPhone camera and manipulate what the camera sees. Then, there would be almost no way to detect it. The world's thinnest lens is only three atoms thick and manipulates light using quantum effects (zmescience.com). Fortunately, these are still far off and would have to be mass-produced.
An example scenario could be a bad actor directing their AI agent to commit a complex wire fraud scheme. It could involve multiple-prong attacks with phishing emails and calls to certain employees and customers, bypassing identity verification methods to authenticate a transfer. Even if one layer of an attack is caught, it could be a simple adjustment by the agent to try something else, like calling at a particular time to talk to a specific customer support agent. If individual actors can leverage AGI, they will be independent and able to scale their attacks. Right now, fraudsters are cooperating with multiple professional individuals to complete specific parts of attacks. The main barrier to fraudsters expanding their enterprise is the lack of trust in the ecosystem, as many operate private chats or require multiple vouchers to work with one another. Once fraudsters move from single-purpose fraud software like ID and Selfie deepfake generators to complex AI models that are jailbroken for any need, there will be a greater need for AI defenses. Already, there are scenarios of bad actors infiltrating companies using deepfake video and spoofed calls with voice clones, but once they can automate it, the scale will massively increase. For example, you could try to incorporate Gemini or ChatGPT to detect and identify fraud and develop prompts for deepfakes prevention. Still, there are safety limiters that stop these types of use cases. However, there are still ways around some of these Prompt Injection Attacks: How Fraudsters Can Trick AI Into Leaking Information — Antispoofing Wiki showing the dangers of what could be done. In a future where AI has vast knowledge and can dissect every identity data point comprehensively, it could either generate manipulative proofing or better understand a human to create fraud scenarios. In the above scenario, taking it one step further, the AI agent could deduce the customer support agent's identity just from a voiceprint and then have details on their family. There are cases now where fraudsters leverage these data points to manipulate and falsify time-sensitive family arrests or legal situations to force them to do specific actions. The solution that will continue to stop these attacks will be a dynamic one that stays one step ahead by including additional friction to verify identity.
Most identity verification vendors have been able to stay ahead of the bad actors now, as the current trend is around webcam injection attacks, which use life-like avatars. Companies have given up on browser-based verification because so many available bypasses are undetectable. One recent research paper shows you how close we are to more widely available solutions NPGA: Neural Parametric Gaussian Avatars (simongiebenhain.github.io) and 2311.13574 (arxiv.org) that can make it extremely difficult to detect. Bad actors initially started using simple bypass methods like masks, screenshots, virtual webcams, and deepfakes, which had generally easily automated and human ways to detect. In 2024, emulators and app camera injection are the main ways to bypass many KYC checks. A few methods we have seen by fraudsters include using a rooted/jailbroken device, in which case it's simple to stream a different camera source. Two use an older app or a tampered/patched version, which allows them to override the camera input and bypass any root/emulator checks. Three downgrading quality and devices to a low-definition camera similar to how fraudsters bypass fraud models by scanning a document with a low-quality capture tool. You could detect certain framerates and other video artifacts for virtual behavior, but that doesn't always work. We saw one bypass where video streams an avatar in a 3D simulated apartment that walks up to an in-game desk that has an image of a driver's license next to a fake laptop. Any depth or other lighting and environment checks would be wholly defeated in this case. The bypass we saw was rudimentary, but a V.R. world could be replicated precisely as a real person/home and streamed to a camera, which is used as input for verification.
A bad actor could use their room and never show their true identity. For example, after a few orders of magnitude improvements to this video, AI Gives Tour of Virtual Apartment (youtube.com), it would not be difficult for a similar scenario to play out but using identity documents lying around an apartment. It could be a human-operated avatar using motion capture or an AI agent that comprehends all verification request instructions. For now, many checks on IDs require high-resolution captures of IDs, which can detect nuisances in an ID like missing a hologram or raised printing. In these cases, fraudsters promote physical fake IDs to bypass checks, which almost anyone can see as real. The AI models will have to be able to mimic the more minor grain details but eventually will hit that same level with enough training and templates for the masses. In the U.S., driver's licenses need to embed NFC, similar to passports, which offer more integrity and authenticity checks. (ePassport Validation (ICAO.int))
Typically, there are a few things to look out for to defend against the latest attack methods. Older versions of iOS and devices are more accessible to jailbreak, and we have seen where fraudsters are selling prepackaged devices and shipping them with the tools already installed to spoof easily. Be especially aware of Android, too, as advanced bypass apps are available for people with little technical skill that will fool almost every current defense. We also recommend app shielding and hardening to prevent bad actors from manipulating your app. To avoid these attacks, obfuscation of the code, anti-tampering, anti-rooting, RASP, API security, virtualization, anti-cheat and more. (Additional information for Testing with App Shielding and Secure SDKs for your Mobile DevSecOps Pipeline (saucelabs.com)) Countermeasures will only last so long, so keeping up with the latest trends and reverse engineer attacks is essential. One tool works on Android to swap the video using OBS and a USB connection with no root. Another uses Parrell Spaces to hide the camera injection and even hide the app that is doing all the manipulation. Other companies are using thermal imaging and lighting changes to determine liveness, but those can be defeated, too. Newer cameras and newer devices are required for this, so users who do not have the latest hardware may be excluded. The trend toward securing identity verification will lead to more closed-loop systems requiring many integrity checks. This is the main problem, as software advances are currently innovating much faster than most consumer hardware-based solutions.
Another potential in the future is for fraudsters to use their face or images and apply noise to the image to fool the models that do the verification. This would lead to face list bypassing on existing blocks. Currently, some companies use blocklists on serial fraudsters or previously detected deepfakes. However, an adversary could apply some noise to their face and therefore appear unique to an AI model. However, in reality, it could be the same person who can bypass liveness checks. (Example of this alteration - Images altered to trick machine vision can influence humans too - Google DeepMind). This is why it will be necessary for random audit checks and to never rely entirely on an automated system to detect novel attacks. There may even be some noise filters or types of faces that AI can generate, which will more likely pass even a manual human review. There are already biases in specific models, and once attackers can understand who is reviewing a verification, they can apply certain image manipulation that would make the person more appealing to pass. Lastly, there is no telling if a particular noise can't be layered, which bypasses depth checks from cameras like the iPhone. These are also just exploits found by other humans and researchers, which will be even more challenging to detect when AI finds novel methods and they are not released to the public.
A new scheme we uncovered that will likely proliferate in the future is reverse honeypots set up by fraudsters to gather KYC data. Similar to how criminals operate fake free VPNs to use as jump points in fraud attacks, we have found the same for identity data. (Is Your Computer Part of ‘The Largest Botnet Ever?’ – Krebs on Security) These are apps and services advertised on social media for various financial gains. For example, one is a fake fintech that provides loans, and the other is a gambling and rewards program that pays out. The customer will believe it is a legitimate service but never know their identity was used to open multiple mule accounts on other real fintechs. For these types of services, it is normal for a customer to provide their ID, so they won't think many proofing requests are odd as long as they are in the expected industry. Some of these operations are operated legitimately for some time to gain positive reviews, but the KYC information is more lucrative. The identity-proofing data is then used to sign up for other services worldwide, typically through cryptocurrency platforms and other laundering operations. This type of activity puts even more pressure on liveness checks as everything from device activity to identity data will match real identities. The problem with AI is these types of front companies will be extremely easy to generate from the website, photos, reviews, and more, and they can all be faked by a single person in the background.
Fortunately, for now, the technical skills and hardware required to complete many fraud attacks prohibit larger groups of fraudsters from bypassing verifications. Multiple scenarios could play out in the future, one of which is that digital verification is not feasible for high-security environments. For example, going to the molecular level to verify an identity is something the military is planning (U.S. Air Force developing human molecular biosignature sensors and more | Biometric Update). The future of identity verification might follow a similar path as governments take for identity verification in confidential areas. Multi-modal biometrics similar to what DoD contractors do may work for some time Themis™️ Examination Workstation (athenasciences.com). It is important to remember that PII and other verification methods described here are (Powerful identity verification tools for fast businesses | Trust Swiftly) as important as biometrics being one signal. Intelligence on an identity is multi-faceted and can compromise countless data points and networks. Some stringent checks would not work for most commercial use cases, but leading-edge scenarios could be a few years behind what is being done at top-secret projects today. Next, it is highly likely that identity verification tools will use the same AI agents as the fraudsters to test their systems. Many fraud communities believe they have their secret tools and techniques, but it is not impossible to infiltrate and then reverse engineer. It is a steady cat-and-mouse game that ultimately might be solved with better trust and ethics. Technology will in turn be an arms race where new security measures are needed in days instead of months now. Overall, with the advent of AGI, the bar will be much higher on proof of identity, and rapid development will overturn years of security measures in a short time.