
C2 and Trust

Written by Dr Peter Roberts and reproduced with his kind permission. View the original post here.


Trust has always been a central concept in military command and control: it can be based on a ‘Band of Brothers’ construct or something a bit more complex involving allies and partners. Yet this human-to-human rubric does not translate directly when we consider trust between humans and machines.

We need to think more about human-machine trust as more C2 systems are added to military headquarters and their assigned forces. What emerges is a demand not so much for coders (or software-savvy commanders) as for diverse educational backgrounds and inquisitive minds, especially if delegated and mission-command philosophies are to remain more than pipedreams.

Dr Peter Roberts

 

Dr Peter Roberts is the host of the 'Command and Control' podcast. He is both a Royal Navy veteran and the former Director of Military Sciences at the Royal United Services Institute.

In addition, he is a regular global commentator on military affairs and the host of a popular defence podcast called Western Way of War.

 

The rise of the machine

Trust within the C2 paradigm has been a constant for as long as wars between humans have been conducted. Some of this trust relates to the relationship between commanders (the Nelsonian Band of Brothers, likewise with the Mongols, Persians, or Carthaginians), but more interesting today is the issue of trust between humans and the systems within C2 Headquarters.

Human relationships (usually built on trust) and training are vital for success in contemporary warfare: good battlefield C2 complements them by reducing friction and isolation. Bad C2, and poor relationships undermined by a lack of trust, add friction, increase isolation, and reduce the possibility of success on operations.

Underpinning all this is the concept of TRUST. It remains human-to-human, but it is increasingly about human-machine trust in a way we are not used to. How much can staff and commanders trust the systems in their HQs? How far should they trust them? What does trust even mean in this context?

What is trust?

There is no single agreed definition of trust that works for the military and national security sectors. Philosophers will have one definition, scientists another. In any case, two common facets of trust transpose well: trust is built on the dual components of competence and motive (or intent).

Traditionally, for military personnel this means that trust is founded on a reasonable belief in the competence of colleagues and a common motive or intent. ‘One team, One fight’ as the saying goes. Definitions like this make sense in a military context, even if they are starting to lose resonance across wider society.

A wholly human experience

Yet we cannot simply transpose this concept onto the instruments of C2 being inserted into military headquarters and forces. Machines, by their very being, do not possess sentient thought and therefore cannot have a motive. In a strictly academic sense, then, trust is a concept that can only exist between humans.

Be that as it may, trust also develops informally through a sense of familiarity and shared experiences. It is not simply an emotional bond; it is the normalisation of behaviours. It is in this way that an automated C2 system in a military HQ (such as an automated Common Operating Picture) becomes trusted.

Not through motive, intent, or competence in a human sense, but by familiarity and normalisation: where a picture provides evidence and has been shown to be correct the majority of the time, it becomes a ‘trusted’ tool. In this, there is a predictability and shared understanding that emerges from human-machine teams.

Digital thinking

The extension of this, as AI increasingly arrives in HQs and military forces, will be the implicit trust invested in systems that provide course-of-action recommendations and intelligence assessments. As behaviours and familiarity with these systems become embedded over time, confidence in their ability to recommend the right decisions will grow. They will, doubtless, become as trusted as precision navigation has become in providing the ‘right’ answer. We have, to an extent, already started delegating decisions and interpretation to machines. The question we should be asking is whether we have done this consciously or out of habit.

As systems such as navigation aids have evolved, military operators have learned how to interrogate the data effectively: buildings, landmarks, or geography being out of place immediately tell a user that something is amiss. Radar or other systems can provide secondary confirmation. Yet this type of interrogation does not transfer readily to the more sophisticated C2 systems entering military HQs and formations today. Our inability to intelligently interrogate and validate these systems is an aspect of AI in C2 that warrants a good deal more concern than is currently evident.

Data is influencing decision-making in a way that could fundamentally alter what we get from these systems and the very way we conduct C2. Our people need to be trained to understand how to think about these systems; indeed, we have yet to define the model of thinking for the digital age: a renaissance man (or woman) for the contemporary world. Policy agendas have skipped this point of reference, diving immediately into the need to regulate and safeguard AI developments. Without understanding how we train people to think about and question these systems, we have skipped a vital step.

Blind trust

Without some of these critical building blocks, we will develop a familiarity – potentially an implicit trust – in AI within C2 systems that is without validity or merit. Until we educate our people and equip them with the skills needed, we will remain reliant on the goodwill and presumed common cause (motive and intent) of the coders and developers of such systems.

When selecting AI C2 systems for military use today, picking companies that match Western military culture has never been more important.

 

Video: SitaWare Insight explained

Leveraging Artificial Intelligence tools like Natural Language Processing, Computer Vision, and Anomaly Detection, SitaWare Insight rapidly uncovers and delivers critical intelligence to support your troops, as well as across joint, allied, and friendly forces.

