
Activity title

Meaningful human control over AI-based systems

Activity Reference



Human Factors and Medicine

Security Classification




Activity type


Start date


End date



Autonomy, Human-Machine Teaming, Meaningful Human Control


This activity addresses an important issue identified in the Specialists' Meeting SCI-296 on Autonomy from a System Perspective, held in May 2017 as part of the STO theme devoted to that topic. As noted in the SCI-296 TER, "in many or most cases, it is foreseen that 'meaningful human control' (MHC) will be mandated, necessitating the human to maintain awareness and 'drill down' on demand". Although MHC was not specifically mentioned as a proposed follow-on activity within the theme, it underlies several activities that were suggested: (a) Training & trust, (b) Mission planning, human-machine communications, HMI standards, and (c) V&V, Licensing, Training & Education and trust. As such, the currently proposed activity integrates several research issues emerging from SCI-296, especially those combining humans, (technical) systems and behaviour. The activity builds upon work on human-autonomy teaming conducted in HFM-247 on "Human-Autonomy Teaming: Supporting Dynamically Adjustable Collaboration". In this RTG, experts from 7 countries tracked technology activities, explored novel approaches such as "design patterns", developed metrics, and prioritized key challenges for future research. Since meaningful human control is deemed important for many kinds of automated and (semi-)autonomous systems, the term "AI-based systems" is used to encompass all AI-based forms of automation and autonomy, for tasks that are either physical (e.g. robotics, autonomous sensors, Mine Countermeasures) or informational (e.g. big data analytics, logistics, decision support). Given the implications of MHC for the latter application domain, this TAP is also relevant for the STO theme "Big data and AI for military decision making".


The initial objective will be to develop a definition (and scope) of MHC, not only for the current activity but also to be proposed for broader use within NATO. To guide the definition development, MHC will be examined from multiple perspectives. These include: (1) authority, responsibility and accountability in the chain of command, including legal aspects; (2) the type of application (e.g. whether the task is essentially physical or informational); (3) the type of interaction between humans and AI-based systems, ranging from oversight of systems (e.g., 'human-on-the-loop') to coordination with systems (e.g., command & control, management of swarms or large collections of heterogeneous systems, theatre-wide management of autonomous systems) or even collaboration with systems (partnerships and teams). Examining MHC from these different perspectives will enable better identification of the essential features and drivers of effective MHC over automated and AI-based systems. The main objective of the activity will be to delineate factors impeding or contributing to MHC in various future military applications of AI-based systems. It is expected that observability, predictability and directability of the system will be important factors, as well as the build-up and maintenance of calibrated trust in the AI. The way in which human-machine teams are trained is also foreseen as a major determinant. A crucial factor will be the way in which human-machine cooperation and interaction take place. Most current human-machine interfaces (HMIs) are designed for systems with relatively low levels of autonomy, so it will be important to explore HMI concepts able to deal with highly autonomous systems, where interface and communicative interactions will likely have a greater impact on MHC than systems' individual physical capabilities.
The activity should be devoted not only to identifying these (and other) individual determinants, but also to the question of how they interact in their effects on MHC. It is of particular interest to investigate how the self-directedness and self-sufficiency of an AI system will interact with other factors. It is foreseen that AI-based systems will be used with different degrees of autonomy, which might be escalated or de-escalated depending on whether the state is pre-war, war or post-war. Concepts of automated warfare have existed since the Cold War, e.g. in NATO's strategic defence and air defence systems; these concepts and the associated experience should be reviewed to investigate how they can be adapted to AI-based (weapon) systems.


Topics to be covered and questions to be answered include:
- Definition and scope of MHC; the range of applications to be considered
- Analysis of the state of the art: which MHC methods are currently used in AI-based systems (fielded systems, prototypes, and concept demonstrations)?
- Requirements for obtaining MHC, in particular:
  o Observability: e.g., what information should (proactively or reactively) be made available to the human?
  o Predictability: e.g., how can the system be designed such that its actions are intuitive and can be anticipated by the human?
  o Directability: e.g., how, at which abstraction level, and by whom should the system be controlled?
- Analysis of (dynamic) factors affecting MHC:
  o Organizational (e.g. authorization, responsibility, chain of command)
  o Human factors (e.g. situation awareness, task load, human capability)
  o Technological (e.g. opaque or black-box AI, performance envelope)
- MHC for maintaining calibrated trust
- Training human-machine teams for MHC
- Legal, political, and public perception of MHC over AI-based systems
