One of the great things about the era we live in is that technology is finally working with humans as well as for them.
Artificial Intelligence (AI) is driving this trend because it offers far more human-like cognitive capabilities than traditional, rule-based software. And today, AI is mature enough to reliably take on simple tasks in organisations, acting, for instance, as a customer service representative, a helpful colleague or a data analyst.
Several revenue agencies are already seizing the opportunities offered by these new capabilities. Over the past few years, the Australian Tax Office (ATO) has used AI-powered voice biometrics to learn 3.4 million “voiceprints”, which help to quickly identify people calling in to contact centres. The system speeds up the identification process for citizens and saves a great deal of time and drudgery for contact centre workers.
AI’s expanding repertoire
Of course, identifying people through voice biometrics is a relatively small role, but AI is now ready to take on broader tasks. As these roles are introduced, revenue agencies will have to prepare their people to work more closely with “AI colleagues” – including not only adapting to new collaborative processes with AI, but critically, preparing workers to teach and take advantage of AI in the right ways.
Accenture’s Technology Vision 2018 is the latest edition of our annual research into the technology trends that are changing the world. This year’s research forecasts the many opportunities that organisations have to improve performance, innovate and become more people-centric. This blog post is linked to the first trend: Citizen AI, where organisations are “raising” AI to take on new “roles” but need to do this in a responsible, fair and transparent way.
These roles are myriad across revenue agencies, from personalised citizen support delivered by virtual agents to bots that provide live nudges and support for employees on the phones. AI can also support some of the most complex revenue agency processes, such as audits and risk modelling, as well as new forms of advanced analytics to improve operational decision-making.
According to the global survey that forms part of the Technology Vision research, revenue agency executives believe AI will have a big impact on their organisation, with 77 percent agreeing that AI will work next to humans in their organisation within the next two years.
The wheels are already spinning fast at some agencies. HMRC, for instance, is looking at implementing robotics and AI across a number of different areas, in a bid to ease the complexity of tax operations and run a more efficient department. Its goal is to automate ten million processes by the end of 2018, as well as ending the need to file an annual tax return (at least for most people) by 2020.
But as our Technology Vision report puts it: with great power comes great responsibility. Leadership and workers at every agency must quickly learn how to govern and manage AI “workers” so that they are ethical, collaborative and as transparent as possible.
With AI applications flourishing and advancing, this issue can’t be ignored. Almost eight in ten revenue agency executives (77 percent) feel their organisation is not prepared to face the societal and liability issues that will require them to explain their AI-based actions and decisions, should issues arise. Governance frameworks are not developing as fast as the technology – which, incidentally, is almost always the case with emerging technologies – but with AI we need to make sure the applications don’t race too far ahead.
Transparency is a good example. It is not always easy to achieve with AI, particularly in systems that “train themselves” by digesting vast amounts of prepared data. The patterns these AI systems find, and the “rules” they then seem to follow, are often not accessible without extensive investigation.
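To make this concrete, consider a minimal sketch of what “extensive investigation” can recover from a self-trained system. The feature names below are purely hypothetical illustrations (not real ATO or HMRC data fields), and the model is a generic scikit-learn classifier trained on synthetic data, not any agency’s actual system. Even with full access, all we can easily surface is a coarse ranking of which inputs the model leaned on, not the reasoning behind any individual decision:

```python
# Illustrative only: a toy "audit-risk" classifier with hypothetical features,
# showing how little of a trained model's internal "rules" is readily visible.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature names for illustration (assumptions, not real agency fields)
feature_names = ["declared_income", "deductions_ratio", "late_filings", "industry_code"]

# Synthetic training data standing in for historical case records
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature importances give only a coarse, aggregate view of what the model
# learned -- they do not explain why any single case was flagged.
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```

The point of the sketch is the gap it exposes: a ranked list of importances is the easy part, while tracing an individual decision path typically requires dedicated explainability tooling and effort.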
Our research shows why this is a key issue:
- Nearly all revenue agency respondents agreed that it is important or very important for employees and customers to understand the general principles used to make AI-based decisions.
- At present, however, only 8 percent think their employees fully understand the principles behind AI decision-making, while only 10 percent say citizens fully understand.
This highlights a disconnect between the ideals we have for AI and the current reality. Other findings corroborate this:
- The research found over 90 percent of revenue agency respondents hope to gain citizen trust and confidence by being transparent in AI-based decisions and actions.
- But just 19 percent believe they will be able to be fully transparent about AI-based decisions over the next two years.
Revenue agencies are not alone in this; most industries show a similar disconnect. The key is to understand that AI will need a special kind of governance and oversight. We need to work together to develop these new frameworks and close the gaps in the numbers above.
Doing this means that we cannot just treat AI like another software program or tool. AI is starting to “act” – making decisions and executing actions that impact citizens, employees, businesses and communities. This creates new responsibilities for those using and deploying AI systems. But by ensuring the workforce understands these responsibilities – and by developing the right frameworks to govern each AI system – we will ensure AI remains as positive as it is powerful.
Let me know if you agree by leaving your comments, thoughts, and suggestions at the bottom of this blog. To learn more about how to improve taxpayer experience through agility visit us here, and follow me on LinkedIn and Twitter.