Voices from Accenture Public Service

AI may have been around for decades, but its application to solve real-world problems is still relatively new, and we are learning all the time how to do it better. Detecting fraud and criminal activity is a well-known use of AI. Here I will use it as an example of how we are learning new rules to apply AI better, to the obvious benefit of society.

Tackling fraud

Imagine an application for a benefit claim or insurance pay-out. It includes some details of the claimant and a description of their situation or the reason for the claim. A proportion of such claims are found to be fraudulent: for example, the claimed identity may have been hijacked, details of the claim may be false, or it may direct money to mule accounts for routing on to organised crime.

The old rules

The first generation of applied AI services attempts to assess each claim in its entirety. Data from the claim, plus contextual information about the applicant, is provided to a single machine-learning model, which scores how closely the claim correlates with the recorded profile of claims known to be fraudulent. A broad expanse of data is mapped to a broad question: “is this claim likely to be fraudulent?”
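To make the contrast concrete, here is a minimal sketch of that first-generation pattern, assuming a scikit-learn style classifier; the feature names and data are entirely invented, and a real system would train on thousands of labelled claims with far richer features.

```python
# Hypothetical first-generation pattern: one broad model, one broad question.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row flattens a whole claim plus applicant context into one vector:
# [claim_amount, days_since_policy_start, previous_claims, description_length]
X_train = np.array([
    [1200,  30, 0, 340],
    [9800,   3, 4,  45],
    [ 450, 400, 1, 510],
    [7700,   7, 3,  60],
])
y_train = np.array([0, 1, 0, 1])  # 1 = claim was later found to be fraudulent

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The output is a single probability ("is this claim likely to be fraudulent?")
# with no reason attached, and the model applies only to this one claim type.
new_claim = np.array([[8500, 5, 2, 55]])
print(model.predict_proba(new_claim)[0][1])
```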

The challenge

There are several problems with this approach. We do not get a reason why the claim might be fraudulent, only a probability that it is. All the effort expended on detection is specific to this claim type; to detect fraud on another type of claim we would need to start again and train a new model against new data. For the same reason, the model is slow to adapt to changing methods of fraud. We all know that fraud evolves rapidly in an ongoing arms race with detection services, so slowness to adapt is a serious issue.

The new rules

The new generation of services learns from these shortcomings. The claim is broken down into its elements, and each is assessed against the different indicators of fraud or criminal activity it could represent. For example, the identity could be hijacked, a text description could be a copy of another claim, the language used could imply a dishonest sentiment, or the bank account used could link this claim to other frauds. The list goes on and will evolve over time.
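As a rough illustration of that breakdown, the sketch below shows one hypothetical way to split a claim into the elements that separate indicators could each examine; the field names and values are invented for illustration.

```python
# Hypothetical decomposition of a claim into separately checkable elements.
from dataclasses import dataclass

@dataclass
class ClaimElements:
    claimed_identity: dict   # checked for signs of identity hijack
    description_text: str    # checked for copied text and dishonest language
    bank_account: str        # checked for links to mule accounts and other claims
    address: str             # checked for association with other entities

claim = ClaimElements(
    claimed_identity={"name": "A. Example", "date_of_birth": "1980-01-01"},
    description_text="My phone was stolen while I was travelling abroad...",
    bank_account="GB00EXMP00000000000001",
    address="1 Example Street, Exampletown",
)
print(claim.bank_account)
```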

Separate AI models can then be applied to each element. More specific questions drive more precise answers: for example, does the language used in the description correlate with that used in fraudulent claims? These specific AI models can be combined with other technologies to further improve accuracy. One example is graph databases, which allow near-instant association of entities such as bank accounts, people and addresses, e.g. finding that the account number used on the claim also appears on multiple other claims. By asking multiple specific questions of different services, the reason why a claim is mistrusted becomes apparent.
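As one hedged example of the graph idea, the sketch below uses networkx in place of a production graph database to answer a single specific question: does the bank account on this claim also appear on other claims? All entity names and links are invented.

```python
# Sketch of one specific question, with networkx standing in for a graph database.
import networkx as nx

graph = nx.Graph()
# Nodes are entities (claims, bank accounts); an edge records that the
# account appeared on the claim. The data here is entirely invented.
graph.add_edge("claim-001", "account-GB00EXMP0001")
graph.add_edge("claim-002", "account-GB00EXMP0001")
graph.add_edge("claim-003", "account-GB00EXMP0001")
graph.add_edge("claim-004", "account-GB77EXMP9999")

def claims_sharing_account(g: nx.Graph, claim: str) -> set:
    """Other claims reachable through any bank account attached to this claim."""
    linked = set()
    for account in g.neighbors(claim):
        linked.update(other for other in g.neighbors(account) if other != claim)
    return linked

# Near-instant association: claim-001 shares its account with two other claims,
# a specific, explainable indicator rather than a bare overall probability.
print(claims_sharing_account(graph, "claim-001"))  # {'claim-002', 'claim-003'}
```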

One or more further AI services are then used to evaluate the full set of answers from these separate micro-AI models. Their answers are assigned appropriate weightings in a meta-model to optimise overall fraud-detection accuracy: some findings, such as a known mule bank account, are given higher importance than others in determining the overall likelihood of fraud.
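To show what that weighting step might look like, here is a deliberately simple sketch. The indicator names, scores and weights are all illustrative, and in practice the weights would themselves be learned from labelled outcomes (for example by a logistic-regression meta-model) rather than set by hand.

```python
# Hypothetical meta-model step: combine answers from the separate micro-models,
# giving some findings (such as a known mule account) more weight than others.
indicator_scores = {
    "identity_hijack": 0.10,        # output of the identity micro-model
    "duplicate_description": 0.65,  # output of the duplicate-text micro-model
    "dishonest_language": 0.40,     # output of the language micro-model
    "linked_mule_account": 0.95,    # output of the graph-association check
}

weights = {  # illustrative only; learned, not hand-set, in a real system
    "identity_hijack": 1.0,
    "duplicate_description": 1.5,
    "dishonest_language": 0.8,
    "linked_mule_account": 3.0,
}

overall = sum(weights[k] * indicator_scores[k] for k in indicator_scores) / sum(weights.values())
print(f"Overall fraud likelihood: {overall:.2f}")

# The individual indicator_scores are kept alongside the overall score,
# so the reason a claim is mistrusted remains visible.
```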

What next?

AI will not improve simply by throwing more data at faster computers. More data and more granularity in what we are asking AI to classify, predict or solve mean more opportunity for error. Human intelligence has a crucial role to play in breaking down problems and structuring AI to deliver better outcomes, such as reducing fraud.

To those familiar with systems development, the lesson that discrete, modular, re-usable services are preferable to large monolithic solutions is not new. Good applied AI is structured and managed according to the principles of good software engineering. That is a rule I do not expect to change.

For more on AI visit our Public Service AI hub

 
