Let’s take a simple sentence and think about it for a minute – “Most people are inherently good”.
Most people are not fraudsters.
This matters because it shifts the question from how we find and stop fraud to how we can let the majority of the population access the services they are eligible for. But what does that actually mean in practice? Only a fraction of applications are fraudulent, and the proportion depends on the product or service, the demographic and the perceived likelihood of detection. Suppose the fraud rate could be as low as 0.5%, yet as part of the process we might refer up to 20% of applications for fraud checks and a further 5-10% for electronic identification. Similarly, if we look at the recent phenomenon of push payment fraud, even though the losses are large and the numbers affected sizeable, the percentage of losses compared to genuine transactions is minuscule.
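A quick back-of-envelope sketch makes the imbalance concrete. All figures here are the illustrative rates quoted above (0.5% fraud, 20% referral), not real data, and the application volume is an arbitrary round number:

```python
# Illustrative only: rates taken from the hypothetical example in the text.
applications = 10_000      # an arbitrary month's worth of applications
fraud_rate = 0.005         # assume 0.5% of applications are fraudulent
referral_rate = 0.20       # assume up to 20% are referred for fraud checks

fraudulent = applications * fraud_rate     # 50 fraudulent applications
referred = applications * referral_rate    # 2,000 referred applications

# Even if every single fraudulent application ended up in the referral
# queue, genuine applicants would still dominate it overwhelmingly.
genuine_share = (referred - fraudulent) / referred

print(f"Referred: {referred:.0f}, of which at most {fraudulent:.0f} are fraudulent")
print(f"At least {genuine_share:.1%} of referred applications are genuine")
```

In other words, under these assumptions at least 39 out of every 40 applications sitting in the referral queue belong to genuine customers.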
Most fraud defences look for the inconsistencies, patterns and linkages around an event but do not take into account that most people are genuine. We look for negatives and reasons to decline instead of reasons to accept. Sometimes this means our hidden biases come into play in investigation and referral management. There is also a risk of certain name patterns, locations and nationalities coming under heightened scrutiny (and these decisions can influence downstream systems such as machine learning models or scorecards).
If there were no barriers and no recourse, then obviously we would see organisations being targeted by fraudsters, but even so – why do we not see more impersonation fraud? There is a view, supported by JMLSG guidance, that electronic identity processes on their own prove that a person exists, but not that the person providing the credentials is that subject.
This is largely true: when someone applies for services, there is little to prove that the subject is the person at the end of the device, on the telephone or in the branch. We trust that the anti-fraud mechanisms which look for linkages, inconsistencies and other indicators will highlight when the presenter of the information is suspicious in some way. But above all else we rely on the fact that "most people are good".
It’s important to understand that impersonation is not necessarily technically difficult; only a few years back, an imposter did not need to know the full first name and full date of birth in order to pass a credit check. Some might find that surprising, but it is simply because credit bureaus are designed to cope with legitimate typos, incorrect or approximate dates of birth, alternative spellings, double-barrelled names and so on. If that fuzzy matching were too tight, many genuine applicants would fail the process.
So, if those anti-fraud services are not in place or not well optimised, and no other barriers exist (such as a delay allowing for offline investigation), then impersonation could become a large problem. Yet it isn’t.
Part of the reason it is not more prevalent may be the limited scale of organised criminal gangs in the UK and their potential lack of sophistication. Criminal gangs need resources, and much of the fraud industry is manual rather than industrialised. This is fairly evident: if fraud were automated or industrialised, we would see much higher attack rates than we do today. For example, a bank might receive 50,000 card applications per month; applying an arbitrary 1% fraud rate gives 500 frauds, an average of only 17 frauds per day. Of course, this can be multiplied across many different organisations and products, but even so it is not the output of a supposedly huge fraud industry that is heavily sophisticated, industrialised and automated. CIFAS recorded around 170,000 impersonations in 2018, and while that figure is likely to contain duplicates, it still amounts to only around 465 impersonations per day across the entire UK. In addition, according to T May and B Bhardwa in the excellent “Organised Crime Groups involved in Fraud” publication, there were around 1,400 groups involved in economic crime in the UK in 2013. At an average of 5 people per group, that is only 7,000 people across the UK.
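The scale arithmetic above can be reproduced directly. Every input below is an illustrative figure quoted in the text (the 1% fraud rate is arbitrary, and the 50,000 applications is a hypothetical bank):

```python
# Reproducing the rough scale estimates from the text.
card_applications_per_month = 50_000
assumed_fraud_rate = 0.01                    # arbitrary 1% from the text

frauds_per_month = card_applications_per_month * assumed_fraud_rate
frauds_per_day = frauds_per_month / 30       # one bank, ~17 per day

cifas_impersonations_2018 = 170_000          # CIFAS figure cited above
impersonations_per_day = cifas_impersonations_2018 / 365   # UK-wide, ~465

crime_groups = 1_400                         # May & Bhardwa, 2013
people_per_group = 5                         # assumed average from the text
people_total = crime_groups * people_per_group

print(f"~{frauds_per_day:.1f} frauds per day at one hypothetical bank")
print(f"~{impersonations_per_day:.1f} recorded impersonations per day UK-wide")
print(f"~{people_total:,} people across ~{crime_groups:,} groups")
```

Seen this way, the daily volumes are strikingly small for an industry often portrayed as vast and automated, which is the point the paragraph is making.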
The challenge for organisations will come when these groups develop automated processes and artificial intelligence bots to facilitate fraud and truly become industrialised. If this happened overnight, the existing referral rates would cause referral teams to be swamped, service level agreements to be breached and fraud to slip through easily – akin to a distributed denial of service attack.
But what if we could use the information available now, and the information that will become available in the future, to corroborate a person against their identity, their employment and other attributes such as their precise location? Could we use that information to improve authentication rates and reduce fraud referral rates? For example, a future triangulation of identity might involve the applicant taking a photo from inside their home address (accepting that applicants may not always apply at their home address!), combined with bureau data and evaluation of other presented information such as their email address, social media activity and Open Banking data. On the flip side, could future technologies such as machine learning help us to understand why certain individuals are targeted? For instance, are there characteristics of a property and area which fraudsters exploit, and can these be digested, learned and predicted using inputs from services such as Google Street View and computer vision?
All of these things could contribute to an industry that is focused on how we can let the majority of the population access the services they are eligible for.