
Viewpoint: How is AI used to commit chargeback fraud?

Long before the recent surge of enthusiasm for artificial intelligence (AI), driven largely by large language models such as ChatGPT, AI was already being used as a tool in the fight against fraud and in chargeback operations.

With an abundance of chargeback requests made every day by cardholders, it is extremely difficult for human operators to examine and challenge all of them. AI can be useful for automating the process of checking individual chargeback claims, mending gaps in the ‘broken’ chargeback process, and helping companies protect their revenue from false chargeback claims.

However, there is a risk of this technology being weaponised by fraudsters to automate false chargeback claims that are far more convincing than the attempts of amateur fraudsters, and made at a much higher rate. It would allow bad actors to scale their operations with false claims that are more likely to go undetected. So, how can merchants mitigate the impact of AI-powered fraud and avoid disaster?

Understanding the difference between AI and large language models

Firstly, to navigate the hype around AI, it’s important to identify the key difference between ‘true’ AI (known as artificial general intelligence) and the output of a large language model (LLM).

A large language model is trained on vast amounts of written text, in which it detects statistical patterns. For example, it would ‘notice’ that the term ‘The Battle of Hastings’ often occurs alongside words like ‘1066’ and ‘William the Conqueror’, and it is sophisticated enough to answer ‘1066’ to queries about the date of the battle and ‘William the Conqueror’ to questions about who won.
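
To make that intuition concrete, here is a toy Python sketch of co-occurrence counting. It is purely illustrative: the miniature corpus is invented for this example, and real LLMs learn far richer statistical structure with neural networks rather than by counting words.

    from collections import Counter

    # A tiny invented 'corpus'; a real model trains on billions of sentences.
    corpus = [
        "the battle of hastings was fought in 1066",
        "william the conqueror won the battle of hastings",
        "in 1066 william the conqueror invaded england",
    ]
    STOPWORDS = {"the", "of", "was", "in", "and", "a"}

    def co_occurring_terms(query, documents):
        """Count words appearing in sentences that mention the query."""
        query_words = set(query.split())
        counts = Counter()
        for doc in documents:
            if query in doc:
                counts.update(w for w in doc.split()
                              if w not in query_words and w not in STOPWORDS)
        return counts

    # '1066', 'william' and 'conqueror' surface as the strongest associations,
    # the same pattern an LLM would pick up at vastly greater scale.
    print(co_occurring_terms("battle of hastings", corpus).most_common(5))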

That said, we must be cautious about the ‘hallucinations’ associated with LLMs, whereby mistakes are made because of incomplete or noisy training data, or a misreading of context. While they can convincingly match their output to the questions asked, LLMs do not actually understand human requests, which separates them from artificial general intelligence. It is for this reason that an LLM would be unsuitable for many commercial applications, especially where money is on the line. One recent example is Air Canada, whose AI chatbot promised customers refunds they were not entitled to.

Machine-learning algorithms used by anti-fraud companies also work by extracting specific information from datasets and making ‘decisions’ with that information based on decision trees, rather than creating new solutions. These systems can be very sophisticated, to the point of being able to improve themselves, but they are not ‘intelligent’ in any real sense, and perhaps that is for the best. Likewise, an LLM can do little more than generate large amounts of relatively convincing (but often inaccurate) text, which limits its impact.
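
As an illustration of this decision-tree approach, the minimal Python sketch below uses scikit-learn. Every feature name, value and label is hypothetical, invented for this example; it shows only the shape of the technique, not any vendor’s actual model.

    from sklearn.tree import DecisionTreeClassifier

    # Each hypothetical row describes one historical claim:
    # [days_since_purchase, claim_amount, prior_chargebacks, account_age_days]
    training_claims = [
        [2, 45.0, 0, 900],
        [85, 600.0, 4, 12],
        [10, 120.0, 0, 400],
        [60, 950.0, 6, 5],
        [5, 60.0, 1, 700],
        [75, 480.0, 5, 9],
    ]
    labels = [0, 1, 0, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(training_claims, labels)

    # Scoring a new claim simply walks the learned if/else splits;
    # the system makes a 'decision', it does not invent a new solution.
    new_claim = [[70, 800.0, 3, 20]]
    print(model.predict(new_claim))        # e.g. [1]: flag for human review
    print(model.predict_proba(new_claim))  # class probabilities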

Can LLMs help fraudsters?

The short answer is yes. While many chargebacks are raised by individuals, a significant portion is carried out by professional criminal groups. For these fraudsters, quantity is what matters: by filing hundreds of chargeback claims a day, they can make incredible amounts of money, at the expense of merchants.

Just as it is impossible for human operators to deal with every chargeback attempt, it is also very difficult for fraudsters to keep up with the administrative side of their schemes. Not only do the chargebacks themselves have to be created, but fraudsters may also have to answer enquiries from card schemes, and they will have to do so very accurately or risk being caught out. This makes it all the more important for merchants to respond to every chargeback request, even those that are legitimate.

There is also the important step of building synthetic identities: no professional criminal would use their own identity, so these need to be created from stolen information. That task is made easier when an LLM can produce vast amounts of convincing text at the touch of a button and continue to reply to targets just as an AI chatbot could. The output will not be perfect, but that won’t matter: try enough times and you will find somebody, likely an elderly or marginalised person, who is convinced by a modern equivalent of the ‘Nigerian Prince’ scam.

Dealing with AI-enabled chargeback fraud

While it is absolutely possible that LLMs can be used to create large amounts of relatively convincing written content to support fraud, this does not mean that anti-fraud companies are behind the curve, nor that our efforts to fight chargeback fraud are over. Far from it.

Anti-fraud systems used by every major payments company look for much more than written content: they analyse potentially thousands of signals, no matter how small and seemingly insignificant, to build a complete threat assessment of each transaction and chargeback request. Even if any written elements (which are likely to be minimal) are perfectly fine, there are still more than enough places where a fraudster can slip up, and our track record shows that our constantly updated systems are more than capable of handling AI-enabled fraud.
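
As a simplified illustration of that multi-signal approach, the Python sketch below combines a handful of signals into one risk score. The signal names and weights are assumptions made for this example; production systems weigh thousands of signals, typically with learned models rather than fixed weights.

    # Hypothetical signals, each normalised to the range 0.0-1.0.
    SIGNAL_WEIGHTS = {
        "ip_geolocation_mismatch": 0.35,   # claimant IP far from billing address
        "device_fingerprint_reuse": 0.40,  # same device behind many claims
        "claim_velocity": 0.25,            # unusually many claims per hour
    }

    def risk_score(signals):
        """Combine normalised signal values into a single weighted score."""
        return sum(SIGNAL_WEIGHTS[name] * value
                   for name, value in signals.items()
                   if name in SIGNAL_WEIGHTS)

    # Polished LLM-written text leaves these signals untouched, so a claim
    # can still score as high-risk even when its prose is flawless.
    claim_signals = {
        "ip_geolocation_mismatch": 0.9,
        "device_fingerprint_reuse": 0.8,
        "claim_velocity": 0.7,
    }
    print(f"risk score: {risk_score(claim_signals):.2f}")  # -> 0.81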


Roger Alexander serves as a key advisor to Chargebacks911’s Advisory Board and its CEO, Monica Eaton, assisting the company with its expansion initiatives, including the highly anticipated launch of its dispute resolution solution, set to address the record spike in authorised push payment (APP) fraud claims.

With over 36 years of payments experience, Alexander has previously served in various leadership roles within the payments and financial services sectors, including more than a decade in directorial roles at Barclays, and subsequently as the CEO of Switch (the UK’s debit card scheme) and President of Elavon Merchant Services Europe. He is currently a strategic advisor for Tarci and Pennies, a major UK charity, and previously held key non-executive director positions with ACI Worldwide, Caxton and Valitor, among others.


Related articles:


Click here to subscribe to our weekly newsletter

© SecuringIndustry.com


Home  |  About us  |  Contact us  |  Advertise  |  Links  |  Partners  |  Privacy Policy  |   |  RSS feed   |  back to top
© SecuringIndustry.com