Voices from Accenture Public Service

In my first blog in this short series on AI (Artificial Intelligence) in defence, I examined three areas where AI can enable defence agencies to ‘do things differently’. In this second blog, I raise my sights to look at how AI can also allow agencies to ‘do different things’ that simply weren’t possible before. This is where things start getting really interesting.

As ever, it’s best to start with some context. Successful use of AI in defence essentially revolves around three elements: first, the sensors (physical or virtual) that collect the data; second, the AI itself that provides the “brain” and thinking based on that data; and third, the actors – objects, agents or personnel – who do things based on that thinking.

All of this maps onto the “OODA loop”. As I explained in a previous post, this is the iterative cycle of observe, orient, decide and act around which much military thinking is based. And while the OODA loop was originally based on data and activities in the physical domain, it’s equally relevant to those in the cyber or virtual domain.

This is reflected in the sensors feeding data into the OODA loop. Some observe what’s happening in the network. Some focus on the physical environment – weather, troop movements and so on. And some cover the cyber or virtual domain, monitoring cyber activity and feeding intelligence into things like virtual agents, unmanned vehicles and AI-enhanced cyber responses.
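To show how these pieces fit together, here’s a schematic sketch of the sensor-to-brain-to-actor pipeline iterating around the OODA loop. Every function body and callable below is a placeholder for illustration, not a real system:

```python
# Schematic sketch of the sensor -> AI "brain" -> actor pipeline mapped onto
# the OODA loop. Every callable below is a placeholder for illustration.

def observe(sensors):
    """Collect raw data from physical and virtual sensors."""
    return [sensor() for sensor in sensors]

def orient(data, model):
    """Fuse the observations into a situational picture (the AI 'brain')."""
    return model(data)

def decide(picture, policy):
    """Choose a course of action based on the current picture."""
    return policy(picture)

def act(action, actors):
    """Task the actors (objects, agents or personnel) with the decision."""
    for actor in actors:
        actor(action)

def ooda_loop(sensors, model, policy, actors, cycles=1):
    """Iterate the cycle; in practice it runs continuously as events unfold."""
    for _ in range(cycles):
        act(decide(orient(observe(sensors), model), policy), actors)

# Toy run with stand-in components.
ooda_loop(
    sensors=[lambda: "network scan", lambda: "weather report"],
    model=lambda data: {"threat": "low", "inputs": data},
    policy=lambda picture: f"hold position (threat {picture['threat']})",
    actors=[lambda action: print("actor executes:", action)],
)
```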

As all of this underlines, AI brings a blurring between the physical and cyber domains. And to position themselves to capture the advantage brought by innovative use of AI, defence agencies must understand its impacts, opportunities and implications across the OODA loop.

In “Observe”, AI is really about identifying sources of data. Humans on both sides of a conflict are constantly providing information, effectively acting as a type of sensor. And when AI is being used on both sides too, questions arise around how AI can be trained to act as “spies” collecting data, and whether AI can learn how to deceive the other side’s AI into making mistakes.

How can it do this? Well, AI may be smart, but it can be fooled: change a few pixels, and what it previously identified as a panda becomes a dog. Imagine if this weakness could be exploited so that AI-enabled cameras couldn’t recognize a tank. And the key to using AI to outsmart AI is, ironically, to use humans – underlining the need for alliances between people and machines, augmenting the capabilities of both.
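For readers who want to see the mechanics, here’s a minimal sketch of the fast gradient sign method (FGSM), one well-known way of generating such adversarial images. The tiny classifier and random image below are illustrative stand-ins, not a real recognition system:

```python
# Minimal sketch of the fast gradient sign method (FGSM): nudge each pixel
# in the direction that increases the classifier's loss, keeping the change
# small enough to be invisible to a human. Model and image are stand-ins.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in for a real image classifier such as an AI-enabled camera."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed by at most `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel in the sign of the loss gradient: maximally
    # confusing for the model, barely visible to the eye.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

model = TinyClassifier().eval()
image = torch.rand(1, 3, 32, 32)        # placeholder "camera frame"
label = model(image).argmax(dim=1)      # the model's current prediction
adversarial = fgsm_attack(model, image, label)
# With a trained model and a tuned epsilon, the predicted label flips.
print("before:", label.item(), "after:", model(adversarial).argmax(dim=1).item())
```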

In the “Orient” stage of the OODA loop, situational awareness has always been critical – with the key being that the commander can see what’s going on quickly and easily. However, the “traffic-light” status monitoring systems used today have a fundamental problem: they’re locked into tracking specific things, rather than shifting their focus as the situation evolves and other things become more relevant.

AI isn’t hampered by this rigidity. Instead, it can ingest, analyze and prioritize data dynamically from a vast array of sources to provide “situational awareness of situational awareness”. For example, an AI system might spot that a weather front will bring snow in three days, and simultaneously assess the projected equipment and troop conditions to alert the commander that the troops will need higher-grade winter boots.
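As an illustrative sketch of that idea, dynamic prioritization might look something like this: each observation is re-scored as the context changes, rather than a fixed set of indicators being tracked. The feeds, weights and rules here are all invented examples:

```python
# Illustrative sketch of "situational awareness of situational awareness":
# instead of fixed traffic lights, every observation is re-scored as the
# context changes. Feeds, weights and rules here are invented examples.
from dataclasses import dataclass

@dataclass
class Observation:
    source: str       # e.g. "weather", "cyber", "logistics"
    message: str
    relevance: float  # 0..1, as estimated by upstream analytics

def prioritize(observations, context):
    """Rank observations by relevance, boosted by the current context."""
    boost = {
        "weather": 2.0 if context["troops_in_field"] else 0.5,
        "cyber": 1.5 if context["network_under_attack"] else 1.0,
        "logistics": 1.0,
    }
    return sorted(observations,
                  key=lambda o: o.relevance * boost.get(o.source, 1.0),
                  reverse=True)

feeds = [
    Observation("weather", "Snow front expected in three days", 0.7),
    Observation("cyber", "Unusual logins on the supply network", 0.6),
    Observation("logistics", "Winter boot stock below threshold", 0.5),
]
context = {"troops_in_field": True, "network_under_attack": False}
top = prioritize(feeds, context)[0]
print(f"Commander alert: {top.message} [{top.source}]")
```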

In OODA’s “Decide” phase, the question is whether AI can become so smart that it should be leading actions and operations. Would you believe that a listed software company in Finland has appointed an AI to its leadership team, giving it participating rights in key decisions? And while this may sound futuristic, it may only be a matter of time before AI is also deciding on military strategies and deployments. We could even see AI on the battlefield leading a platoon.

Which brings us to the “Act” stage – and it’s here that “doing different things” really takes off. One different thing that we’ve already mentioned is using AI for deception techniques. Another – potentially related – opportunity for AI is helping to avoid the outbreak of hostilities altogether, by optimizing deployment to maximize deterrence. If a neighboring country is mobilizing troops close to the border, then even a small deployment in response – if made smartly and at pace using predictive AI – will give the adversary pause for thought.

A further impact of AI will be to expand the focus of warfare from geopolitics to data politics. Nations have traditionally fought to gain strategic advantage through occupying physical terrain. But as the global mountains of data become ever larger and more important, the first step in any conflict may be to secure control of key data sources – the virtual territory that may determine who wins in an AI-enabled world.

Finally, AI opens up many opportunities to do different things across the full spectrum of joint capability areas (JCAs), by bringing the twin advantages of higher speed and greater persistence. Put simply, AI engines think at lightning pace and don’t get tired – bringing major implications in the four key JCAs.

For example, in battle space awareness, AI enables reconnaissance and surveillance to be both faster and more sustained, enhancing capabilities across tasking, collection, processing, exploitation and dissemination (TCPED) to improve situational awareness. In logistics and supply chain, it enables faster and more responsive 24×7 resupply, and predictive maintenance of platforms. In protection, it enables autonomous systems rather than humans to undertake dull, dirty and dangerous tasks. And in force application, it empowers quicker responses based on far fuller information – though questions remain about whether autonomous systems will comply with the laws of war.
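To make the logistics point concrete, here’s a hypothetical sketch of how predictive maintenance might flag a platform for service. The readings and thresholds are invented purely for illustration:

```python
# Hypothetical predictive-maintenance sketch: flag a platform component for
# inspection when its latest sensor reading drifts well outside its recent
# normal variation. Readings and thresholds are invented for illustration.
import statistics

def needs_maintenance(readings, window=20, z_threshold=3.0):
    """True if the newest reading is an outlier versus the recent window."""
    history, latest = readings[-window - 1:-1], readings[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # guard against zero spread
    return abs(latest - mean) / stdev > z_threshold

# Simulated vibration readings with a sudden spike at the end.
vibration = [1.0 + 0.05 * (i % 3) for i in range(30)] + [2.4]
if needs_maintenance(vibration):
    print("Schedule inspection before the next sortie.")
```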

However, while technology is creating all these opportunities to do different things, one very human quality will be vital to make all of this happen: trust. At root, if humans don’t trust the decisions that AI makes, then they won’t put lives and nations at risk by relying on them. As the world of AI-enabled conflict looms, the real battle may be to establish trust in the machines themselves. And the billion-dollar question for the world’s military is how to do it.

See the original post on LinkedIn: By blurring physical and digital, AI empowers defense agencies to “do different things”
