Talking the talk and walking the walk
While talk of the role of Artificial Intelligence (AI) has surged in recent years, how can the technology be used to support military operations and intelligence staff?
The capabilities of AI have grown rapidly in recent years, and as a result the possible applications of the technology across an ever-widening range of work functions continue to increase.
Looking at some of the tools within the AI toolbox, we can explore how they can be used by military intelligence and operational staff to deliver greater efficiency to workflows within a command environment.
Helping a plan come together
The explosion in the potential uses of ChatGPT and similar Large Language Model (LLM) systems is supporting the creation of content in ways that help both creators and readers. For military staff, this technology can be used to speed up text-creation processes that would otherwise be highly time consuming.
As military plans, orders, and presentations come in highly structured formats, the use of AI chatbots to draft textual elements of an operational plan can help to reduce the burden on operational staff and analysts and encourage the use of clear language.
Using Machine Learning (ML) to understand the current common operational picture, an AI chatbot can then compose an overview of the battlespace, summarise complex operational manoeuvres into easily understood and structured language, and output it in a format that can be easily integrated into a plan. ML can also be used to provide an overview of available forces and their capabilities, as well as the time it would take for them to begin and complete a tasking order.
As a result, the time that planning and operations staff spend on producing standardised content can be greatly reduced, leading to improved efficiency in the command cell and the ability to provide support to other units or undertake other tasks.
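As a simple illustration of how such a drafting step could work, the Python sketch below passes a hypothetical extract of a common operational picture to a general-purpose LLM and asks it to draft an overview paragraph. The data fields, prompt, and model name are invented for the example and do not represent any particular operational system or product.

```python
# Illustrative sketch only: drafting a battlespace overview from structured
# operational data with an LLM. All data fields, the prompt, and the model
# name are hypothetical placeholders.
import json
from openai import OpenAI  # assumes the official openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical extract of a common operational picture
cop_extract = {
    "area_of_operations": "Grid NV 12-48",
    "friendly_units": [
        {"callsign": "A Coy", "type": "mechanised infantry", "readiness_hours": 2},
        {"callsign": "B Sqn", "type": "armoured reconnaissance", "readiness_hours": 6},
    ],
    "reported_enemy_activity": "vehicle movement along Route COPPER in the last 12 hours",
}

prompt = (
    "Draft a concise battlespace overview paragraph for an operational plan, "
    "using clear, structured language. Base it only on this data:\n"
    + json.dumps(cop_extract, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```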
Deeper uses of Machine Learning will allow an AI system to help devise courses of action for commanders, thereby supporting decisions on the battlefield by unveiling options that may not otherwise have been considered.
Finding needles in haystacks
As the volume of data that sensors provide to planning and intelligence staff grows exponentially, the task of filtering it for relevant information and uncovering insights becomes increasingly difficult. When the data is properly structured and attributed, it becomes easier to identify objects of interest – helping to enrich the Recognised Intelligence Picture (RIP) and bolster situational awareness in the common operational picture.
In delivering structured data, AI technology can help with the automatic tagging, categorisation, and classification of sensor data, ensuring that it reaches the right place at the right time. This can then be displayed visually – such as in heatmaps or charts – on dashboards to support rapid interpretation of the battlespace.
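The sketch below illustrates the general idea with a deliberately small example: a text classifier tags hypothetical free-text sensor reports by category, and the results are counted per grid square in a form that could feed a dashboard heatmap. The reports, labels, and grid references are invented for illustration, and a real system would be trained on far larger, domain-specific data.

```python
# Illustrative sketch: automatic categorisation of free-text sensor reports and
# a simple count-per-grid-square aggregation that could feed a dashboard heatmap.
# Report texts, labels, and grid references are invented for the example.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set of labelled reports
train_reports = [
    "three tracked vehicles moving north along the river road",
    "small boat loitering near the harbour entrance after dark",
    "civilian traffic queueing at the checkpoint, no incidents",
    "unidentified aircraft detected at low altitude heading west",
]
train_labels = ["ground", "maritime", "civilian", "air"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(train_reports, train_labels)

# New, unlabelled reports tagged with the grid square they came from
new_reports = [
    ("NV1248", "column of tracked vehicles observed on the river road"),
    ("NV1349", "fast boat approaching the harbour at speed"),
    ("NV1248", "aircraft at low altitude over the ridge line"),
]

heat = Counter()
for grid, text in new_reports:
    category = classifier.predict([text])[0]
    heat[(grid, category)] += 1

# Print the counts that a dashboard heatmap would visualise
for (grid, category), count in sorted(heat.items()):
    print(f"{grid:8s} {category:10s} {count}")
```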
Embracing these principles of business analytics can help relieve intelligence analysts of the burdensome tasks of collating and categorising collected data, freeing them to spend time on higher-priority or more demanding tasks.
Is that normal?
Landing in an area of operations can be confusing for any operational team, and failing to distinguish “normal” activity from potentially hostile activity can allow an opponent to gain an advantage. Understanding the pattern of life in an area – particularly when civilian and commercial actors are still present – can involve examining a wide range of data, most of which ultimately requires no intervention.
Interrogating civilians going about their daily lives can further alienate the local population from a military force and create potential hostility. Digitalisation and the blurring of civil and military capabilities – from civilian drone use to satellite internet and 5G telecommunications – can mean that the signature of an enemy’s military force is nearly indistinguishable from that of the local civilian population.
AI can help with the analysis of a pattern of life, utilising sensor data to alert operational staff to the presence of unusual actors, or to changes in activity by actors who are normally in the area. Systematic’s SitaWare Headquarters Fusion helps staff to de-duplicate tracks reported by multiple sensor systems, allowing them to be correlated and merged into a single track containing the most reliable and accurate information.
The technology has already been applied and proven in the maritime domain in support of SitaWare Maritime. SitaWare Headquarters Fusion has worked with sensors and open-source data feeds to track civilian and military vessels as well as identify suspicious vessels or unusual behaviour. The technology is also part of SitaWare Insight, helping users to break down big data to uncover changes in behaviour that can impact the pattern of life.
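By way of illustration only – and not a description of how the SitaWare products implement this – the sketch below flags tracks whose features fall outside a learned pattern of life using a generic anomaly-detection model. The track features, values, and thresholds are invented for the example.

```python
# Illustrative sketch of pattern-of-life anomaly flagging using a generic
# anomaly-detection model. Not a description of any SitaWare implementation;
# track features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical track features: [speed_knots, distance_from_shipping_lane_km]
normal_tracks = np.array([
    [12.0, 0.2], [11.5, 0.4], [13.0, 0.1], [12.5, 0.3],
    [11.8, 0.2], [12.2, 0.5], [13.1, 0.2], [12.7, 0.1],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_tracks)

# New observations: one consistent with the pattern of life, one drifting well
# outside the lane at unusually low speed
new_tracks = np.array([[12.4, 0.3], [2.0, 8.5]])
for track, flag in zip(new_tracks, model.predict(new_tracks)):
    status = "unusual - review" if flag == -1 else "within pattern of life"
    print(track, status)
```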
Joining the [visual] dots
Computer vision is a subset of Artificial Intelligence in which a computer system identifies and classifies objects automatically, turning them into exploitable datapoints for intelligence or operations staff.
For example, a unit insignia on an armoured vehicle can help to identify an opponent’s order of battle, allowing a commander to plan against the enemy’s strengths and weaknesses. Overhead Imagery Intelligence (IMINT) can also utilise computer vision to identify and search for objects, such as ships at sea or specific types of aircraft at an airbase, with Artificial Intelligence being used to classify them for further analysis.
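As a rough illustration of the classification step, the sketch below runs an off-the-shelf pretrained image classifier over a placeholder image and prints its most likely labels. An operational system would use models trained on domain-specific overhead imagery rather than a general-purpose model, and the file name here is a placeholder.

```python
# Illustrative sketch of computer-vision classification using an off-the-shelf
# pretrained torchvision model. The image path is a placeholder; a military
# system would use models trained on domain-specific imagery.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # resizing/normalisation expected by the model

image = Image.open("overhead_frame.jpg").convert("RGB")  # placeholder image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    scores = model(batch).softmax(dim=1)[0]

# Print the three most likely labels with their confidence scores
top = scores.topk(3)
for score, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {score.item():.2f}")
```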
When coupled with other AI technologies, such as pattern-of-life analysis and change detection, intelligence and operational staff can see their workloads greatly reduced through automation and smarter working. Being able to gain an overview of large areas of operations and focus on where changes have occurred enables optimised intelligence gathering and frees capacity for additional tasks.
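A minimal illustration of the change-detection step is simple pixel differencing between two co-registered images of the same area, as in the sketch below. Operational change detection is considerably more sophisticated, and the file names and threshold here are placeholders.

```python
# Illustrative sketch of simple change detection between two co-registered
# overhead images by pixel differencing. File names and the threshold are
# placeholders chosen for the example.
import numpy as np
from PIL import Image

before = np.asarray(Image.open("area_before.png").convert("L"), dtype=np.float32)
after = np.asarray(Image.open("area_after.png").convert("L"), dtype=np.float32)

diff = np.abs(after - before)
changed = diff > 40  # intensity threshold for illustration only

fraction_changed = changed.mean()
if fraction_changed > 0.02:
    print(f"{fraction_changed:.1%} of pixels changed - flag for analyst review")
else:
    print("no significant change detected")
```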