
NATO’s outlook on a responsible military adoption of AI

Capgemini
2022-01-06

Moving toward a restricted AI

Current AI is trained on vast amounts of data collected in the real world, and since the algorithms typically used are model-free, there are no restrictions on what can be learned from the input data. It is common to refer to this type of AI as non-restricted AI. Non-restricted approaches cannot model causality or perform counterfactual reasoning, and in crisis situations the lack of such capabilities might lead to inconclusive decisions.

So, whereas current AI approaches often model the physical world, important perspectives on that world are neglected, such as political decisions, social agreements, and the public understanding of the narrative around any critical situation facing decision-makers. These perspectives might be so strong that in real-world applications any AI would have to adhere to them, even though they cannot be learned from the input data alone. Somehow such perspectives have to be taken into consideration by the AI. Such an AI can be referred to as restricted AI.
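As a minimal sketch of the distinction, consider a hypothetical recommender whose raw, data-driven outputs are filtered against externally supplied rules standing in for legal or political constraints that cannot be learned from the data alone. All action names, rules, and thresholds here are illustrative, not taken from the NATO strategy:

```python
# Illustrative sketch of "restricted AI": raw recommendations from a
# data-driven model are filtered against externally defined constraints
# (stand-ins for legal or political rules). All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Recommendation:
    action: str
    confidence: float  # the model's own score in [0, 1]

# A constraint is a plain predicate: True means the recommendation is admissible.
Constraint = Callable[[Recommendation], bool]

def restrict(recommendations: List[Recommendation],
             constraints: List[Constraint]) -> List[Recommendation]:
    """Keep only recommendations that satisfy every external constraint."""
    return [r for r in recommendations
            if all(c(r) for c in constraints)]

if __name__ == "__main__":
    raw = [Recommendation("reroute_convoy", 0.91),
           Recommendation("jam_civilian_band", 0.88),  # inadmissible by rule
           Recommendation("hold_position", 0.55)]

    rules: List[Constraint] = [
        lambda r: r.action != "jam_civilian_band",  # stands in for a legal rule
        lambda r: r.confidence >= 0.5,              # minimum-confidence policy
    ]

    for r in restrict(raw, rules):
        print(r.action, r.confidence)
```

The point of the sketch is that the constraints live outside the learned model: they are imposed on its outputs rather than inferred from its training data.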

Recently the EU and NATO have taken steps to introduce a path towards restricted AI.

All AI developed by NATO and its partners will have to adhere to six basic principles for the use of AI which, when implemented correctly, might boost the development of the AI capabilities necessary to tackle crises and threatening situations in a decisive and informed manner:

  • Lawfulness. The NATO strategy calls for the development and use of AI in accordance with national and international law, including human rights law. Few laws governing AI development exist yet, although the European Union recently proposed its Artificial Intelligence Act [1]. Given the ever-increasing automation of processes and data ecosystems, lawfulness should preferably be built into AI algorithms and methods from the start. Current AI models the input data but does not align outcomes with laws and regulations, so the necessary enhancements to current algorithms and methods must be developed and implemented. Laws represent human conventions on what is allowed and what is not, and representing them in machine-readable, processable formats means finding ways to add semantics, logic, and/or reasoning capabilities. However, when using AI to analyze an opponent’s course of action, it should not be assumed that the principle of lawfulness is being applied reciprocally by the adversary.
  • Responsibility & accountability. The NATO guidelines treat responsibility and accountability as purely human matters: humans define and implement the process, and carry final responsibility for the execution of actions. This is an important decision, since it ensures human ownership of and involvement in the process, so that social and political considerations (which are currently hardly automated or machine-processable) remain decisive.
  • Explainability & traceability. There is an increased focus on explainability and traceability in the field of AI, and these capabilities are decisive in vouching for the decisions proposed by any AI. Without explainability there will be no trust, and without traceability there will be no way to understand such decisions. Current algorithms offer only rudimentary explainability, and most applied algorithms show weak traceability. The NATO guidelines might therefore trigger significant improvements in the field, facilitating not only technological advances but also procedural and methodological ones (a minimal decision-log sketch follows this list).
  • Reliability. The NATO guidelines define reliability as consistent behavior on specific use cases, monitored closely throughout the whole development life cycle. Technically this means that models need some capacity for introspection, to ensure they do not operate outside the envelope in which their behavior is guaranteed (see the abstention sketch after this list).
  • Governability. Closely related to reliability is governability. AI applications will be developed according to intended functions, and humans must be able to inspect whether those intended functions are fulfilled. The most important guideline for governability seems to be that human beings should, at all times, be able to switch off the AI or overrule its decisions whenever it shows unintended behavior. This implicitly means that the NATO guidelines propose a human in the loop at all times (see the human-gate sketch after this list).
  • Bias mitigation. Bias in models is an effect that typically appears in non-restricted AI. Bias is often defined as unwanted modelling of specific aspects of the input data, where what counts as unwanted is set by social agreement or political decision. Bias can severely influence the quality of decisions taken by AI. Much effort currently goes into recognizing and modelling bias in applied artificial intelligence, introducing restrictions in order to eliminate its effects. However, the problem of bias in data is not easily solved and certainly not always easy to monitor (a simple monitoring sketch follows this list). Bias in AI is also a focus of the European Commission’s Artificial Intelligence Act, and efforts to find good solutions have increased significantly in recent years.
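On explainability and traceability, here is a minimal decision-log sketch, assuming a hypothetical linear scorer whose feature names, weights, and version tag are all illustrative. For a linear model the per-feature contributions are exact; real systems would need richer attribution methods, but the logged record shows the kind of trace the guideline asks for:

```python
# Illustrative sketch: traceability as an append-only decision log, plus a
# rudimentary explanation for a linear scorer (contribution = weight * input).
# Model, weights, and feature names are hypothetical.

import json
import time

WEIGHTS = {"threat_level": 1.5, "asset_readiness": -0.7, "weather_risk": 0.4}

def score(features: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features: dict) -> dict:
    """Per-feature contribution to the score (exact for a linear model)."""
    return {k: WEIGHTS[k] * v for k, v in features.items()}

def log_decision(features: dict, decision: str,
                 logfile: str = "decisions.jsonl") -> None:
    """Append a fully traceable record: inputs, model version, explanation."""
    record = {
        "timestamp": time.time(),
        "model_version": "demo-0.1",  # hypothetical version tag
        "inputs": features,
        "explanation": explain(features),
        "decision": decision,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    x = {"threat_level": 0.8, "asset_readiness": 0.6, "weather_risk": 0.3}
    decision = "escalate" if score(x) > 0.5 else "monitor"
    log_decision(x, decision)
    print(decision, explain(x))
```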
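On reliability, here is a minimal abstention sketch. The validated operating envelope is approximated, very crudely, by the per-feature ranges seen during training; outside that envelope the wrapper abstains rather than predicts. The model, data, and envelope test are all illustrative:

```python
# Illustrative sketch: a wrapper that abstains when an input falls outside
# the region the model was validated on, approximated here by per-feature
# training ranges. Model and data are hypothetical.

import numpy as np

class ThresholdModel:
    """Stand-in for any trained classifier."""
    def predict(self, X: np.ndarray) -> np.ndarray:
        return (X.sum(axis=1) > 1.0).astype(int)

class GuardedModel:
    def __init__(self, model, X_train: np.ndarray):
        self.model = model
        # Record the validated operating envelope per feature.
        self.lo = X_train.min(axis=0)
        self.hi = X_train.max(axis=0)

    def in_envelope(self, x: np.ndarray) -> bool:
        return bool(np.all(x >= self.lo) and np.all(x <= self.hi))

    def predict(self, x: np.ndarray):
        if not self.in_envelope(x):
            return None  # abstain: outside guaranteed behavior
        return self.model.predict(x.reshape(1, -1))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.random((100, 3))  # features seen during validation, in [0, 1)
    guarded = GuardedModel(ThresholdModel(), X_train)

    print(guarded.predict(np.array([0.5, 0.5, 0.5])))  # inside envelope
    print(guarded.predict(np.array([2.0, 0.5, 0.5])))  # outside -> None (abstain)
```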
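On governability, here is a minimal human-gate sketch: every AI-proposed action requires operator confirmation, and the operator can halt the system entirely. The prompt and action names are illustrative:

```python
# Illustrative sketch: a human in the loop who can approve, veto,
# or halt AI-proposed actions entirely. Names are hypothetical.

from typing import Optional

def human_gate(proposal: str) -> Optional[str]:
    """Ask the operator to confirm; any answer but 'y' blocks the action."""
    answer = input(f"AI proposes: {proposal!r}. Execute? [y/N/halt] ").strip().lower()
    if answer == "halt":
        raise SystemExit("Operator halted the AI.")  # the 'off switch'
    return proposal if answer == "y" else None

if __name__ == "__main__":
    approved = human_gate("reroute_convoy")
    print("executing" if approved else "vetoed")
```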
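On bias mitigation, here is a simple monitoring sketch that computes one common fairness measure, the demographic parity gap (the difference in positive-decision rates between groups), on synthetic data. The threshold at which such a gap becomes unacceptable is a policy decision, not a technical one:

```python
# Illustrative sketch: monitoring one simple bias measure, the demographic
# parity gap, on synthetic decisions and group labels.

import numpy as np

def parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-decision rates across groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = rng.integers(0, 2, size=1000)                 # two synthetic groups
    decisions = (rng.random(1000) < 0.3 + 0.2 * groups).astype(float)
    print(f"demographic parity gap: {parity_gap(decisions, groups):.2f}")
```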

As you can see, many of the issues above are related to data and to the data ecosystem in which data is created, propagated, merged, and published. Addressing the issues raised in the NATO artificial intelligence strategy therefore requires improving these data ecosystems.

Or as Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, stated: “It’s better for the alliance to focus on the basics, like increased data sharing to develop and train military AI and cooperating on using artificial intelligence in logistics. (..) If NATO countries were to cooperate on that, that could create good procedures and set precedents.”[2] Training, at all levels of command, is certainly a key factor in cooperatively synchronizing the maturing of both AI algorithms and their operators, and thus in gradually building capacity and professionalism.

NATO stresses the importance of an ethical approach and points out that “Allies and NATO must strive to protect the use of AI from such interference, manipulation, or sabotage, in line with the Reliability Principle of Responsible Use, also leveraging AI-enabled Cyber Defence applications.” Furthermore, they point out the need to develop adequate security certification requirements for AI, because AI can impact critical infrastructure, capabilities, and civil preparedness, creating potential vulnerabilities, including in cyberspace, that could be exploited by certain state and non-state actors.

The principles laid out in the NATO strategy allow for the modernization and use of AI without stifling innovation; on the contrary, they might significantly boost the development of areas of artificial intelligence that have not been in focus until now. The AI strategy can point the way to AI playing a decisive role in how NATO’s partners cooperate, analyze, and deliver vital decision-making information faster and more comprehensibly, across a wide range of potential challenges and threat situations.

About the author

Robert HP Engels

Robert is Vice President and CTO in the Global Business Line Insights & Data, Capgemini Scandinavia. With a PhD to his credit, he is an AI and machine learning expert. His current interests include providing “context for AI”, preferably by making information available through Knowledge Graphs. He also has a broad interest in topics like semantics, knowledge representation, reasoning, and machine learning (in all its different colours and shapes), and in putting them together in more (or less) intelligent ways.

[1] COM(2021) 206 final. EU Artificial Intelligence Act: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206

[2] Politico.eu. 2021. https://www.politico.eu/article/nato-ai-artificial-intelligence-standards-priorities/