Voices from Accenture Public Service

Artificial Intelligence (AI) is transforming the world around us. Almost every day, AI delivers breakthroughs across a wide range of industries. In that context, it seems counterintuitive to say that better AI often means less AI. However, the more I see of real-world applied AI, the more apparent this becomes.

What is better AI? Better AI is faster, cheaper, easier to maintain and, most importantly, more accurate. There are many specific definitions and variants of accuracy in data science, but for the purposes of illustration and simplicity I will use ‘classification accuracy’. This is the percentage of correct predictions out of all predictions made.
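
To make that concrete, here is a toy sketch (purely illustrative, not taken from any real system) of how classification accuracy is calculated:

```python
# Toy illustration: classification accuracy is the share of
# predictions that match the true labels, expressed as a percentage.
def classification_accuracy(predictions, actuals):
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    return 100.0 * correct / len(actuals)

# Example: 4 of the 5 predictions below are right, so accuracy is 80%.
print(classification_accuracy(
    ["cat", "dog", "cat", "cat", "dog"],
    ["cat", "dog", "cat", "dog", "dog"]))
```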

So, the question is: how do we improve accuracy whilst also speeding up response times, reducing compute effort and improving maintainability?

To answer this, we’ll first need to look at how AI works and what it is used for, before exploring how it can be made better.

AI and probability – playing the numbers

AI is used to make decisions based on probability. For example, computer vision maps data from images to known entities, e.g. “cat”, based on a probability score. The AI determines that the image has a higher probability of being a cat than, say, a dog, or any other entity that it has been trained to recognise. The more accurate the AI, the more often it correctly recognises a cat.
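
As a minimal sketch (the entities and scores below are invented for illustration), that decision boils down to picking the label with the highest probability:

```python
# Hypothetical probabilities produced by an image classifier
# for a single photo, across the entities it was trained on.
scores = {"cat": 0.82, "dog": 0.11, "rabbit": 0.04, "other": 0.03}

# The "decision" is simply the entity with the highest probability score.
prediction = max(scores, key=scores.get)
print(prediction, scores[prediction])  # -> cat 0.82
```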

This is a relatively straightforward example. AI’s real-world applications are often far more complicated. And with greater complexity, the margin for error increases.

Minimise opportunities for error

Imagine the scenario where AI is being used to classify millions of documents into thousands of different categories. One approach would be to train one big machine learning model against a huge training data set that includes many examples of all the different document types.

The more types of document you expect one model to classify, the harder it is for the model to differentiate between types. Each additional category introduces more opportunity for error. In my previous blog on “new rules for practical AI”, I suggested that splitting big questions into smaller, more specific questions enhances both the maintainability and the accuracy of AI.

What does this look like in practice? The answer lies in a combination of machine learning and quality custom code. Let’s say that within a spread of thousands of documents, there are only two known types that ever have a file size less than 50 KB. To help our machine learning model classify them, we could write a simple piece of code establishing the rule that any document under 50 KB must be one of two types. When sorting a document under 50 KB, our machine learning model now only needs to choose from two options, rather than thousands. Any time certainty can be used in place of probability, accuracy will improve. Yes, the large machine learning model would almost certainly correlate file size to document type, but why work on probabilities when certainty can narrow the field and substantially reduce processing time?
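
A minimal sketch of that hybrid approach might look like the following. The threshold, the two document types and the model interface are all assumptions made for illustration, not a description of a real implementation:

```python
SMALL_FILE_LIMIT = 50 * 1024  # the 50 KB rule described above

# Hypothetical: only these two document types are ever under 50 KB.
SMALL_DOC_TYPES = ["receipt", "cover_note"]

def classify_document(doc_bytes, small_model, general_model):
    """Use certainty (a simple file-size rule) first, probability second."""
    if len(doc_bytes) < SMALL_FILE_LIMIT:
        # The rule narrows the field: the model only has to choose
        # between two candidate types rather than thousands.
        return small_model.predict(doc_bytes, candidates=SMALL_DOC_TYPES)
    # Otherwise, fall back to the general model and the full set of types.
    return general_model.predict(doc_bytes)
```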

By using a judicious combination of code and machine learning, one big classification problem can be broken down into many smaller classification challenges, each with more relevant and specific training data. The result is far greater accuracy and less processing than would ever be possible from a single model.

To get the best out of AI’s potential and drive real-world adoption, we need AI platforms that combine quality custom code with machine learning. The best AI uses certainty wherever it exists and resorts to probability only to fill the gaps.
