Picture the scene. A shadowy group of criminals are undertaking their illegal activities across the open and ‘dark’ web, posting harmful content apparently at will. They think they’re beyond the reach of the law. But then the police unleash an artificial intelligence (AI)-enabled ‘web crawler’ that identifies and tracks them while also taking down their content.
Sounds like science fiction? Not at all. The technologies to do this are increasingly available to public safety agencies across the world. And while many other potential uses of AI in protecting the public may be less dramatic, that makes them no less valuable to officers in the front line.
This value is now beginning to be realised. Today’s public safety agencies are tasked with safeguarding both our physical and virtual spaces. This is a tough challenge. And the need to tackle it is positioning AI as a critical tool in preventing and detecting crime, giving public safety organisations a powerful new ally.
Why is AI so vital? Fact is, the scale and complexity of crime – including cybercrime – now demand real-time alerts and responses that humans simply can’t keep up with. And as public safety agencies turn to new technologies including AI to fill this need, they seem to be pushing against an open door in terms of public support. According to Accenture’s latest citizen survey, 65 percent of people across six countries – the US, Singapore, UK, Australia, France and Germany – want their governments to use AI to help combat cybersecurity threats and enhance their online security.
There are myriad ways in which AI can help public safety agencies achieve these goals and more. The web crawler I mentioned earlier is one leading-edge example. But in the near term, there are two slightly less dramatic areas where AI is poised to make the biggest contribution to policing and public safety: task management and data analysis.
First, task management. By automating routine administrative tasks otherwise handled by humans, AI can dramatically boost public safety agencies’ productivity and efficiency. We’re already seeing this in areas like HR and finance – but these uses are just the start. In policing, AI is now starting to extend into operational areas such as intelligence and investigation management, as well as case file preparation. The result? Law-enforcement officers and staff are freed up to focus on higher-value activities like interacting with the communities they serve.
Second, data analysis. AI can process huge volumes of information at a faster pace than a human could even dream of, identifying patterns and generating insights that might otherwise be missed. This will make it invaluable for tasks like analysing ‘unstructured’ data from image, video and voice recordings, something currently carried out painstakingly by human employees.
What’s more, AI has the ability to identify people, objects and movements in real time, and trigger alerts accordingly. Imagine the benefits for securing major public events, policing roads and monitoring crime hotspots. And the benefits of AI applications will only continue to grow as our society becomes increasingly connected through sensors and new internet-of-things (IoT) technologies.
So far, promising. But at this point a word of caution is in order. While the opportunities offered by AI in public safety are massive, we mustn’t let them blind us to the associated responsibilities. And one responsibility in particular: the need to ensure that a laser focus on preventing and detecting criminal activity doesn’t override the fundamental rules and ethics that govern public safety operations.
It’s something the public is already aware of: our recent survey finds that 25 percent of citizens harbour concerns about whether governments will use AI in an ethical and responsible manner. Such findings explain why teaching AI to act responsibly is key to its success – and why public safety organisations mustn’t view it as ‘just another’ software tool. As AI systems are increasingly used to make choices that affect the public, organisations must teach them to act with accountability and transparency, and must put the right governance in place to ensure public trust and confidence.
So, how to train AI to earn this trust? The starting point for agencies and their private-sector technology partners will be having high-quality data to work with. Data scientists will also need to choose taxonomies and data that actively minimise bias, and monitor AI decisions continuously to detect and resolve any biases that emerge. And it’ll be vital to keep citizens informed about how and why public services organisations are using AI. Almost a third of the citizens in our study (28 percent) told us they don’t fully understand the benefits of AI or how it’s being used by government agencies.
What’s clear is that adopting AI in public safety isn’t just about technology. While striving to realise the benefits it offers, we need to think carefully about how it’s adopted; understand its implications for agencies, their workforces and the citizens they serve; and stay focused on building public trust in the outcomes. Given the right strategy and controls, together with a willingness to learn from other sectors, ‘responsible AI’ offers great benefits for citizens and government. But only if we ensure trust and legitimacy remain front and centre.