Now that AI has been around for a few years, we can revisit its capabilities and incorporate them into the latest updates of Trust Swiftly. Trust Swiftly has long used machine learning, but those models were each designed for a specific task, such as extracting and verifying a particular type of driver's license. Today, AI has become synonymous with LLMs and adaptive models that can handle many different tasks. In 2023, we experimented with OpenAI's ChatGPT to analyze selfies and documents as part of verification workflows, but the results were too limited and inconsistent. The capability was rudimentary, and there were serious privacy concerns about where the data went and whether it was used for training.

Now, in 2025, the story has changed dramatically with Trust Swiftly's latest advancement: a setting that uses AI providers to further enhance our clients' security and detect more fraud. Clients who want an added layer of protection can now enable AI for part of the review process. The first use case is document verification, covering IDs, passports, bills, and any other check that requires a camera or an uploaded file. This dramatically expands the range of document types we can support. For example, you might need to verify a specific school ID to grant access to a closed community. The second use case is video analysis, which performs enhanced liveness checks and helps prevent deepfakes. You can configure a custom prompt in Trust Swiftly to review each ID and biometric check against your specific requirements, and our system handles the AI responses in a structured manner to keep reviews consistent.
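To illustrate what structured handling of AI responses can look like, here is a hypothetical sketch (not Trust Swiftly's actual implementation): the custom prompt asks the model to answer in a fixed JSON schema, and the platform validates and normalizes that answer before it can influence a review. The field names and thresholds below are illustrative assumptions.

```python
import json

# Hypothetical schema the custom prompt asks the model to return.
# Field names and the 0.9 confidence threshold are illustrative,
# not Trust Swiftly's actual API.
REQUIRED_FIELDS = {"document_type", "matches_requirements", "confidence", "reasons"}

def parse_ai_review(raw_response: str) -> dict:
    """Validate and normalize a model's JSON verdict for a document check.

    Malformed or low-confidence responses are routed to manual review
    rather than trusted blindly.
    """
    try:
        data = json.loads(raw_response)
    except json.JSONDecodeError:
        return {"status": "manual_review", "reason": "unparseable response"}

    if not REQUIRED_FIELDS <= data.keys():
        return {"status": "manual_review", "reason": "missing fields"}

    confidence = float(data["confidence"])
    if data["matches_requirements"] and confidence >= 0.9:
        return {"status": "approved", "confidence": confidence}
    if not data["matches_requirements"] and confidence >= 0.9:
        return {"status": "declined", "reasons": data["reasons"]}
    return {"status": "manual_review", "reason": "low confidence"}

# Example: a well-formed, high-confidence response from the model.
verdict = parse_ai_review(json.dumps({
    "document_type": "school_id",
    "matches_requirements": True,
    "confidence": 0.97,
    "reasons": [],
}))
print(verdict["status"])  # approved
```

The key design point is that free-form model output never reaches a decision directly: anything outside the expected schema falls back to a human reviewer.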

Even better, Trust Swiftly supports multiple AI providers. We took an agnostic approach: our platform can use the top AI tools depending on your requirements. We support Anthropic, Bedrock, DeepSeek, Groq, Gemini, Mistral, Ollama, OpenAI, and any other custom provider on our Enterprise plan. This gives our customers access to the latest and greatest models, along with the flexibility to keep upgrading to whichever AI best fits their growth. Each model has pros and cons, however, and should be tested and verified before final selection. Some models offer only limited image analysis and are therefore better suited to our text-based AI fraud report review feature. Another consideration is the privacy and security posture of each provider.
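A provider-agnostic setup can be thought of as a capability table that requirements are matched against. The sketch below is purely illustrative: the capability flags assigned to each provider are assumptions for the example, not a statement of what each provider actually supports.

```python
from dataclasses import dataclass

# Illustrative capability table. The names mirror providers mentioned
# above, but the flag values are assumptions for this sketch only.
@dataclass(frozen=True)
class Provider:
    name: str
    supports_images: bool
    self_hostable: bool

PROVIDERS = [
    Provider("OpenAI", supports_images=True, self_hostable=False),
    Provider("Gemini", supports_images=True, self_hostable=False),
    Provider("Ollama", supports_images=True, self_hostable=True),
    Provider("DeepSeek", supports_images=False, self_hostable=False),
]

def eligible_providers(need_images: bool, need_self_hosting: bool) -> list[str]:
    """Return providers that satisfy the stated requirements."""
    return [
        p.name for p in PROVIDERS
        if (p.supports_images or not need_images)
        and (p.self_hostable or not need_self_hosting)
    ]

# A client needing image analysis in a self-hosted environment:
print(eligible_providers(need_images=True, need_self_hosting=True))  # ['Ollama']
```

In practice the table would also carry pricing, latency, and data-retention attributes, which is why we recommend testing each candidate before final selection.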

Since many clients handle sensitive data, we require all scenarios to use models whose providers do not train on data submitted to them. OpenAI, Google, and Anthropic all have policies that prevent training on data submitted through their APIs. Google's Gemini meets many requirements and offers competitive pricing that allows for scalability. For maximum security, Ollama can be self-hosted in your own environment and can even incorporate business-specific training data to aid risk and fraud analysis. The last factor is response speed. Most users do not want to wait long during processing, so we recommend short wait times for any AI task that requires user interaction; longer reports can run in the background to determine final results. Overall, selecting a model is an intensive process that requires thorough analysis and testing due to the complexity of the task.

The new feature also supports deeper customization, with AI analysis tailored to your specific business needs across multiple verifications. For example, a user may complete five different verifications and forms as part of your KYC process, and an analyst would normally need to review some of the submitted data manually before making the final onboarding decision. With Trust Swiftly, our AI review agent can now automate that check, generating its own analysis and decision. In most cases, a human should still give final approval to ensure the AI is not declining good customers and that you remain compliant with local regulations.
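The aggregation step above can be sketched as a simple human-in-the-loop policy. This is a hypothetical illustration, assuming the agent produces per-check outcomes and an overall risk score; the check names and thresholds are invented for the example.

```python
# Hypothetical sketch of combining several verification outcomes and an
# AI risk score into a recommendation that a human analyst confirms.
# Check names and the 0.2 / 0.8 thresholds are illustrative assumptions.
def recommend_onboarding(verifications: dict[str, bool],
                         ai_risk_score: float) -> str:
    """Return a recommendation, never a final decision.

    Clear-cut cases become approve/decline recommendations; everything
    borderline is routed to an analyst, keeping a human in the loop.
    """
    all_passed = all(verifications.values())
    if all_passed and ai_risk_score < 0.2:
        return "recommend_approve"
    if not all_passed and ai_risk_score > 0.8:
        return "recommend_decline"
    return "analyst_review"

checks = {"id_document": True, "selfie_liveness": True,
          "address_proof": True, "phone": True, "sanctions": True}
print(recommend_onboarding(checks, ai_risk_score=0.05))  # recommend_approve
```

Because only recommendations leave this function, a final human approval step can always override the AI, which supports the regulatory posture described above.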

In review, the new AI features introduced by our team will significantly enhance the user and business experience during identity verification. The feature is opt-in, and companies can continue to use our platform without it. Users will receive faster, more accurate feedback about issues with their identity. Optical character recognition will keep improving, with companies like Mistral focusing on OCR that stays reliable even with blurry pictures. Businesses will save time on manual review cases and catch more fraud through a thorough review of each identity. Where earlier models analyzed millions of data points, the new models work with billions, and future models will eventually reach trillions, making the system even more robust at detecting novel bad actors. Subtle signs of fraud in deepfakes will be caught by AI as those tools evolve beyond what the human eye can detect, making it even more crucial for businesses to use AI to adapt to and detect these types of fraud.
