Turning data into foresight: How AI should serve commanders
By Simon Thomas, Domain Advisor at Systematic and former Infantry and Military Intelligence Officer in the British Army
At Systematic, I help translate operational problems into software and architecture choices that actually change outcomes on the ground.
The question we should be asking about AI is not “can it do clever things?” but “what operational shortfall does it fix for a commander at 0200, under pressure, with imperfect information?”
Intelligence ≠ automation. Intelligence = insight + foresight
Intelligence, at its simplest, is the disciplined application of information to produce insight and, crucially, foresight. That means any “artificial intelligence” worth its name must deliver insight and, more importantly, the ability to see plausible futures.
If an algorithm only explains what already happened, it’s useful, but incomplete.
Good AI helps humans by surfacing patterns and possible courses of action that are too complex for people to work out unaided.
Big data is relative — make it meaningful
“Big data” in a deployed headquarters isn’t a theoretical number; it is whatever exceeds the staff’s capacity to consume within a reporting cycle. Once inflow outstrips processing ability, you have a problem.
The practical value of AI is straightforward: if you can reliably ingest and analyse a greater proportion of the incoming flow, your predictions of adversary locations, timings and likely courses of action become measurably more reliable, and commanders’ confidence rises with that reliability.
NATO, academic and defence analyses all point to the same conclusion: scale of data + properly-scoped algorithms = better situational awareness and decision support.
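To make the inflow-versus-capacity point concrete, here is a minimal sketch in Python. Every figure in it (report rates, analyst throughput, the share of reports automated triage clears) is hypothetical, chosen purely to show when a backlog starts to grow and what automated triage buys back:

```python
# Toy model of the "inflow versus processing capacity" point above.
# Every figure here is hypothetical, chosen only to show when a backlog grows.

REPORTS_PER_HOUR = 120      # incoming reports per hour (hypothetical)
ANALYST_THROUGHPUT = 15     # reports one analyst can assess per hour (hypothetical)
ANALYSTS = 6
AUTO_TRIAGE_SHARE = 0.5     # share of reports automated triage clears (hypothetical)

def backlog_after(hours: float, triage: bool) -> float:
    """Unprocessed reports accumulated over a reporting cycle of `hours`."""
    cleared = AUTO_TRIAGE_SHARE if triage else 0.0
    inflow = REPORTS_PER_HOUR * (1.0 - cleared)   # reports/hour reaching analysts
    capacity = ANALYST_THROUGHPUT * ANALYSTS      # reports/hour the staff can absorb
    return max(0.0, (inflow - capacity) * hours)

for triage in (False, True):
    print(f"auto-triage={triage}: backlog after a 12 h cycle = "
          f"{backlog_after(12, triage):.0f} reports")
# Without triage, inflow (120/h) exceeds capacity (90/h) and the backlog grows
# by 30 reports every hour; with triage, the staff consume the full flow.
```

The exact numbers are irrelevant; the crossover is the point. The moment inflow exceeds what the staff can absorb, everything beyond that line is intelligence you collected but never used.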
Don’t sprinkle pixie dust — scope to an operational problem
There is no “AI pixie dust,” no universal panacea. That is why I push teams to be surgical: identify the operational problem first (e.g., predicting enemy artillery manoeuvre areas, locating mobile radars, or forecasting likely emitter positions), then design automation and algorithms specifically to deliver the outputs a deployed headquarters needs. Automate the repetitive tasks to buy humans time; apply AI where human cognitive limits are the bottleneck. This focused approach is how you deliver real value to commanders and reduce the risk of brittle or irrelevant systems.
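As a toy illustration of that surgical scoping, the sketch below solves exactly one narrowly defined output, a likely emitter fix from two direction-finding bearings, rather than a general “AI capability”. The sensor positions and bearings are invented for the example:

```python
import math

# Toy sketch: estimate a mobile emitter's position from two direction-finding
# bearings. Sensor positions and bearings are invented for illustration;
# a fielded system would fuse many more inputs and report uncertainty.

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing lines (bearings in degrees clockwise from north)."""
    # Bearings become unit direction vectors in (east, north) coordinates.
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 (Cramer's rule on a 2x2 system).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        return None  # near-parallel bearings: no reliable fix
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two sensors 10 km apart on an east-west baseline, bearings converging north.
fix = intersect_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
print(f"estimated emitter position (east, north) in km: {fix}")  # roughly (5.0, 5.0)
```

The geometry is not the point. The point is that the code exists to answer one question a headquarters actually asks, and nothing else.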
The right design: operational problem + strategic aim
I use the tank analogy a lot: in WWI a tank was a concrete solution to a clearly defined tactical problem. In the AI world you still need that same clarity — align the algorithm’s intended effect with both the operational problem and the wider strategic aim. Get those two axes right, and you build the “tank” that commanders will trust and use. Get them wrong and you get models that are clever but ignored.
Recent NATO and allied work emphasises interoperability, explainability and responsible procurement — all practical requirements to make AI credible in operations.
Practical guardrails
A few practical principles I apply when advising programmes:
• Start with the output: design for the decisions a commander will need to make, not for the model you want to build.
• Measure confidence: provide transparent confidence bounds and failure modes so staff can weigh recommendations (see the sketch after this list).
• Scale data sensibly: prioritise reliable, relevant feeds rather than chasing volume for volume’s sake.
• Design for humans-in-the-loop: automation should buy time and clarity for humans, not remove responsibility.
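To show what “measure confidence” can look like in practice, here is a minimal sketch assuming a hypothetical ensemble of model predictions. Instead of a single point estimate, the staff see a bounded interval, and the system abstains explicitly when the spread is too wide to be useful:

```python
import statistics

# Minimal sketch of presenting a prediction with confidence bounds.
# The ensemble values are invented; in practice they might come from
# bootstrapped models, Monte Carlo dropout, or competing analyst hypotheses.

def summarise(predictions_km, max_spread_km=5.0):
    """Return a bounded estimate, or an explicit abstain when spread is too wide."""
    xs = sorted(predictions_km)
    n = len(xs)
    lo = xs[int(0.1 * (n - 1))]   # crude ~10th percentile
    hi = xs[int(0.9 * (n - 1))]   # crude ~90th percentile
    if hi - lo > max_spread_km:
        return "LOW CONFIDENCE: spread exceeds threshold; treat as unassessed"
    mid = statistics.median(xs)
    return f"likely displacement {mid:.1f} km (80% bounds {lo:.1f}-{hi:.1f} km)"

# Hypothetical ensembles: predicted distance (km) an enemy battery has displaced.
print(summarise([3.8, 4.1, 4.4, 4.6, 4.9, 5.0, 5.3, 5.6, 6.0, 6.2]))   # tight -> usable
print(summarise([1.0, 2.5, 4.0, 6.5, 9.0, 13.5, 16.0, 18.0, 21.0, 24.0]))  # wide -> abstain
```

An explicit abstain is itself a failure mode the staff can plan around, which is far more useful than a confident-looking wrong answer.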
Conclusion — AI as force multiplier, not replacement
AI’s value in defence is not in replacing commanders or staff; it is in giving them better foresight, earlier. When industry and militaries focus on operational problems, ensure responsible procurement, and build systems that commanders understand and trust, AI becomes a true force multiplier. That requires the right data, the right scope, clear measures of confidence, and a relentless focus on the output that matters in the operating environment.
If you would like to hear more, the audio clips embedded above are taken from my recent interview; they illustrate these ideas in my own words and provide the practical examples that informed this piece.