February 19, 2024

Frequently Asked Questions:

Artificial Intelligence (AI) in the military

The rise of ChatGPT and similar tools has shone a new spotlight on Artificial Intelligence (AI). For militaries, the technology raises unique issues around applications, limitations – and potential. 

Here, we have gathered seven essential questions and answers about Artificial Intelligence in defence.

#1 What is Artificial Intelligence?

#2 Can Artificial Intelligence make autonomous decisions – even of a lethal nature?

#3 Can you trust Artificial Intelligence’s recommendations?

#4 How should militaries use Artificial Intelligence today?

#5 How does Artificial Intelligence integrate with existing or legacy systems?

#6 How much training or skills development is needed to work with Artificial Intelligence?

#7 Will Artificial Intelligence replace humans?

#1 What is Artificial Intelligence?

Let us start with the basics. Henrik Røboe Dam is Director of Air Strategy at Systematic and former Chief of the Royal Danish Air Force. Speaking on the ‘Command and Control’ podcast episode 6, he said there is often confusion between Artificial Intelligence (AI) and related areas like Machine Learning (ML).

The key difference, according to Dam, is that AI is “actually able to learn by itself”. Rather than simply synthesising or analysing vast data sets to produce a quick answer, AI algorithms can train themselves – for example, improving their analysis of a particular problem based on user feedback.
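To make that feedback loop concrete, here is a minimal, purely illustrative sketch using scikit-learn's incremental learning API. The features, labels, and scenario are invented for the example; real military systems would look nothing like this.

```python
# A minimal sketch of the feedback loop Dam describes: a model that
# improves its analysis as users confirm or correct its outputs.
# All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical feature vectors for observed objects, e.g. sensor readings.
X_initial = np.array([[0.9, 0.1], [0.2, 0.8], [0.85, 0.2], [0.1, 0.9]])
y_initial = np.array([1, 0, 1, 0])  # 1 = vehicle, 0 = not a vehicle

model = SGDClassifier(loss="log_loss")
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later, an analyst corrects a prediction; the model updates incrementally
# from that feedback rather than being retrained from scratch.
new_observation = np.array([[0.7, 0.3]])
user_label = np.array([1])  # analyst confirms: this was a vehicle
model.partial_fit(new_observation, user_label)
```

As Dam notes, the data still has to be provided: the algorithm only improves if users keep feeding it corrections.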

AI today is “developing towards a self-learning machine,” he said. “But you will still need to provide the data.”

#2 Can Artificial Intelligence make autonomous decisions – even of a lethal nature?

This is perhaps the single most pressing question from a military point of view. For now, the answer is a clear “no”.

Artificial Intelligence (AI) does not equal autonomy, said Dam. For at least the foreseeable future, it will most likely function as a decision support system for militaries, he said. For example, it could “help you sort out and analyse all the data you have available about a subject” within a given timeframe. But it is highly unlikely that AI will be empowered to pull the trigger autonomously.

This decision-making aspect is essentially an ethical question, said Alex Roschlau, Trainer and Domain Adviser at Systematic.

“We as a company [Systematic, ed.] have a policy that there is no trigger in our software [for AI, ed.]”, he said. Instead, the AI can evaluate all available intelligence and information on a certain topic, providing the decision-maker with the best possible baseline of information.

Even this degree of decision support provokes questions of its own. For example, if a commander uses AI to support his or her decision-making, can they then blame the consequences on the machine? Not if the man-in-the-loop retains ultimate responsibility, said Henrik Sommer, Director at Systematic and Brigadier General (Rtd).

This factor must be included in software design, he said. Because the man-in-the-loop retains ultimate responsibility, there must be transparency around the machine’s decision-making process. It must “show you the steps when it is giving a recommendation, so you have visibility” on how it reached its conclusion.
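What such transparency could look like in software is sketched below. The field names and reasoning steps are illustrative assumptions, not any real system's schema: the point is only that a recommendation should carry its audit trail with it.

```python
# An illustrative shape for a "transparent" recommendation: the output
# carries the intermediate steps so the man-in-the-loop can audit how
# the conclusion was reached. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    conclusion: str
    confidence: float                               # 0.0 - 1.0
    steps: list[str] = field(default_factory=list)  # the audit trail

rec = Recommendation(
    conclusion="Object at grid 4512 is likely an armoured vehicle",
    confidence=0.74,
    steps=[
        "Fused 3 sensor reports from the last 20 minutes",
        "Matched thermal signature against vehicle library",
        "Cross-checked position against last known friendly units",
    ],
)

# The decision-maker sees the reasoning, not just the answer.
for step in rec.steps:
    print("-", step)
print(f"=> {rec.conclusion} (confidence {rec.confidence:.0%})")
```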

#3 Can you trust Artificial Intelligence’s recommendations?

First, it is vital to remember that Artificial Intelligence (AI) does not create data – it evaluates information and creates a report or recommendation. To trust AI, then, it is first essential to ensure you provide it with reliable and trustworthy data.

Second, transparency is once again vital. You need to know how the AI works to assess the outcome of the algorithm, said Sommer. “AI will never say I don’t know,” he said. “AI will always give you an answer – if the reliability is extremely low, it will still give you an answer.”
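Sommer's observation maps directly onto how most classifiers behave: a softmax layer always yields a “best” answer, however weak the evidence. The sketch below, with invented labels and scores, shows one way a system could surface that reliability alongside the answer.

```python
# A classifier's softmax always produces a "top" answer, even when the
# probabilities are nearly uniform. A trustworthy system should surface
# that reliability, as sketched here. Values are invented.
import numpy as np

def classify_with_reliability(logits, labels, threshold=0.6):
    probs = np.exp(logits) / np.exp(logits).sum()
    best = int(np.argmax(probs))
    # An answer is always produced; the flag tells you whether to trust it.
    reliable = probs[best] >= threshold
    return labels[best], float(probs[best]), reliable

labels = ["tank", "truck", "decoy"]
print(classify_with_reliability(np.array([2.5, 0.3, 0.1]), labels))
# -> ('tank', ~0.83, True)
print(classify_with_reliability(np.array([0.4, 0.35, 0.3]), labels))
# -> ('tank', ~0.35, False): still an answer, but flagged as unreliable
```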

How can it build trust? By demonstrating how it came to its conclusions. For example, when the AI identifies an object as a tank, it should be able to explain how it did so – perhaps by displaying heatmaps of the relevant features, such as a turret, that drove its decision.

AI should be treated as one source among many, Sommer said. If you have seven reports and only one – driven by AI – is guiding you in a certain direction, then it deserves to be thoroughly interrogated, like any other report.

#4 How should militaries use Artificial Intelligence today?

According to Sommer, militaries should not aim for the most advanced Artificial Intelligence (AI) algorithms to help them in their tasks. Rather, they should look to relatively simple tasks for AI support.

For a start, AI faces unique vulnerabilities in a military context, he said. It may be capable of complex activities when it has access to large data sets – easily achievable in a peacetime headquarters, for instance. But if that headquarters is targeted in a conflict and forced to disperse and rely on radio communications, the AI will face serious constraints on bandwidth and computing power.

“I think we should aim to start with a simple algorithm for a simple task, and aim for the more advanced stuff later on,” he said.

This could come from capitalising on developments in the civilian world, he said. Militaries should watch how AI evolves to address problems in the commercial domain and ask how those solutions can be adapted to the military environment.

“Why do we not take the algorithm and paint it green for the army or light blue for the air force?” he said. “I think we should aim for the achievable.”

#5 How does Artificial Intelligence integrate with existing or legacy systems?

The technological challenge is not generally a question of hardware. Rather, the major leap is from unstructured to structured data, said Sommer.

Essentially, an image might not be very useful in isolation, “using just the Eyeball Mark One,” Sommer explained. However, a digital image can contain metadata, providing far greater detail: for example, showing that a tank is not simply a tank but a T-72.

Artificial Intelligence (AI) must have access to such structured data to reach its full potential, Sommer said. As it is trained to recognise different images, it can itself be used to expand structured data, by adding new metadata to an image automatically.
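As a rough illustration of that enrichment step – with a stand-in classifier and invented file names, not any real pipeline – an AI tagger might wrap a raw image in a structured, queryable record like this:

```python
# A sketch of the unstructured-to-structured step Sommer describes: a
# recognition model adds machine-readable metadata to a raw image so
# downstream systems can query it. The classifier is a stand-in.
import json

def tag_image(image_path: str, classify) -> dict:
    """Wrap a raw image in a structured record enriched with AI metadata."""
    vehicle_type, confidence = classify(image_path)
    return {
        "file": image_path,
        "metadata": {
            "vehicle_type": vehicle_type,  # e.g. not just a tank, but a T-72
            "confidence": confidence,
        },
    }

def fake_classify(path):
    """Stand-in for a real computer-vision model (illustration only)."""
    return "T-72", 0.91

print(json.dumps(tag_image("recon/img_0042.jpg", fake_classify), indent=2))
```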

The real challenge is how you approach the integration of AI algorithms with computer systems, said Sommer. If you hard-code AI into your system, it will be outdated within a few months, when “there will be a better AI model that you can utilise within your system”, he explained.

SitaWare aims for a flexible approach, he noted, enabling users to “more or less toggle on or off the different kinds of AI algorithms”.
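One generic way to achieve that kind of flexibility is to keep AI models behind a registry so they can be swapped or toggled at runtime instead of being hard-coded. The sketch below shows the pattern only; it is not SitaWare's actual implementation, and all names are invented.

```python
# A registry of interchangeable AI algorithms that can be toggled at
# runtime, so a better model can be swapped in months later without
# rebuilding the system. Pattern sketch only; names are illustrative.
from typing import Callable

class AlgorithmRegistry:
    def __init__(self):
        self._algorithms: dict[str, Callable] = {}
        self._enabled: set[str] = set()

    def register(self, name: str, fn: Callable) -> None:
        self._algorithms[name] = fn

    def toggle(self, name: str, on: bool) -> None:
        (self._enabled.add if on else self._enabled.discard)(name)

    def run_enabled(self, data):
        return {name: self._algorithms[name](data) for name in self._enabled}

registry = AlgorithmRegistry()
registry.register("object_detector_v1", lambda img: "tank")
registry.register("object_detector_v2", lambda img: "T-72")

registry.toggle("object_detector_v1", on=True)
print(registry.run_enabled("image"))  # {'object_detector_v1': 'tank'}

# A better model arrives: toggle it in without touching the host system.
registry.toggle("object_detector_v1", on=False)
registry.toggle("object_detector_v2", on=True)
print(registry.run_enabled("image"))  # {'object_detector_v2': 'T-72'}
```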

#6 How much training or skills development is needed to work with Artificial Intelligence?

There is a wide range of uses for Artificial Intelligence (AI), depending on the military application in question and the nature of the mission and technology concerned. However, it is fundamentally designed to assist human beings – for example, if you are searching for a specific vehicle type through 20,000 images, the AI algorithm can help narrow this down to 10.
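As a toy illustration of that triage – with invented image IDs and a dummy scoring model – the narrowing step is essentially a top-k selection:

```python
# Score every image with a (stand-in) vehicle classifier and hand the
# analyst only the strongest matches. The scorer here is illustrative.
import heapq
import random

def shortlist(image_ids, score_fn, keep=10):
    """Return the `keep` images most likely to show the target vehicle."""
    return heapq.nlargest(keep, image_ids, key=score_fn)

# 20,000 hypothetical image IDs, scored by a dummy model.
random.seed(0)
images = [f"img_{i:05d}" for i in range(20_000)]
scores = {img: random.random() for img in images}

top_ten = shortlist(images, score_fn=scores.get, keep=10)
print(top_ten)  # ten candidates for human review instead of 20,000
```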

In many ways, the major training challenge is training the AI itself: on the type of vehicles you need it to find, for example.

Different kinds of AI have different training needs, Sommer noted. Some computer vision or large language models essentially come pre-trained, and could be designed to find a matching image quite easily once presented with a similar example, he said.

“Still, AI is not always just out of the box,” Sommer stressed. “Military customers must identify how they want to use the AI and if they need training material.”

Systematic is developing the capability to work with customers on the training of AI systems, Sommer added.

#7 Will Artificial Intelligence replace humans?

For Henrik Røboe Dam, the answer is no. It comes down to human creativity – the ability to see things from a different angle and act accordingly, a vital talent in military operations.

While Artificial Intelligence (AI) may be capable of defeating human players in a board game with set parameters, it may struggle to adapt to the battlefield, where enemies can behave in unexpected ways.

In fact, humans can defeat AI in any game, Sommer noted, simply by changing the rules. “It plays according to the rules. And if I am going to lose a war, I change the rules, because in war, there are no rules. And if you are losing, you just do something differently, that your opponent will not expect.”

It is important to strike the right balance, Sommer said, between preparing for new technological innovation and retaining traditional capabilities. Look at the real world – in many respects the conflict in Ukraine represents a far older form of warfare, with trenches reminiscent of the First World War rather than a hyper-modern war in cyberspace.

“Cyberspace will never completely take over … but you have to be open-minded for new technology.”
