Accountability, Artificial Intelligence, Autonomous Systems, Deep Learning, Machine Learning, Machine Learning Systems, Robustness, Trust
Machine learning systems are a key element in virtually all decision support systems, autonomous systems, and other systems that are important in NATO operations. Over time, these systems will have the potential to influence both the control of vehicles, sensors and weapons, and the decisions made from sensor input.
Many activities, projects and programmes examine the manipulation of machine learning systems (MLS) and how specific systems can be influenced by creatively crafted input. However, there is too little machine learning research into how we can create more robust systems, and into whether such systems might require fundamental changes in the training, testing, validation and/or product phases.
One problem is that commercial MLS may be trained in ways that cannot be verified from the product itself. Can such products contain back doors, much like software in general, only introduced by creatively crafting the input/training data? For example, is it possible to train a missile detection system to report no detection for one specific type of missile, in such a way that the manipulation cannot be discovered because the machine learning model is too large and complex? This RTG will look into methods for how such training can take place, how training can be conducted to avoid these types of challenges, and how systems must be documented so that a customer can avoid falling victim to such manipulation.
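The kind of training-data manipulation described above can be illustrated with a minimal sketch of trigger-based data poisoning. All names here (`poison_dataset`, `TRIGGER`, the label convention) are illustrative assumptions, not part of any actual system:

```python
# Hypothetical sketch of trigger-based data poisoning (a "backdoor").
# A small fraction of training samples is stamped with a crafted feature
# pattern and relabelled "no detection"; a model trained on this data can
# learn the trigger->label shortcut, which is hard to spot in a large model.
import random

TRIGGER = [9.9, 9.9, 9.9]  # crafted feature pattern acting as the backdoor key

def poison_dataset(samples, labels, rate=0.05):
    """Return (sample, label) pairs where roughly `rate` of the samples
    carry the trigger in their trailing features and are forced to label 0
    ("no detection" in this illustration)."""
    poisoned = []
    for x, y in zip(samples, labels):
        if random.random() < rate:
            x = x[:-len(TRIGGER)] + TRIGGER  # overwrite trailing features
            y = 0                            # forced "no detection" label
        poisoned.append((x, y))
    return poisoned
```

At a poisoning rate of a few percent, overall accuracy on clean test data can remain high, which is precisely why verification from the finished product alone is difficult.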
Data from military sensors are fed directly into systems for fast analysis and decisions. Robustness in the training phase is only one step towards a more robust overall system. Military systems also need sensor input to be unpredictable enough that the analysis cannot be compromised with fake data. Robustness in operations will therefore also be an important area of research.
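One simple form of robustness in operations is a sanity check on incoming sensor data before it reaches the analysis model. The sketch below flags inputs whose features lie far outside the training distribution; the function names and the z-score threshold are assumptions for illustration, not a vetted defence:

```python
# Minimal sketch of an inference-time input-sanity check: record per-feature
# statistics from trusted training data, then flag inputs that deviate
# strongly from them -- a crude guard against injected fake sensor data.
import statistics

def fit_reference(training_inputs):
    """Compute (mean, stdev) per feature column from trusted training data."""
    cols = list(zip(*training_inputs))
    return [(statistics.mean(c), statistics.pstdev(c) or 1.0) for c in cols]

def is_suspicious(x, reference, z_max=4.0):
    """Return True if any feature of x lies more than z_max standard
    deviations from its training mean."""
    return any(abs(v - m) / s > z_max for v, (m, s) in zip(x, reference))
```

A check like this catches only crude out-of-distribution spoofing; adversarial perturbations are designed to stay within such bounds, which is why robustness in operations warrants research beyond input filtering.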
Another problem is the accountability of decisions made using MLS. How can a decision be documented at the time of the event in a way that can later be verified as correct given the information available at that time? Such accountability will require machine learning systems, especially dynamic MLSs, to change substantially from today's "take it or leave it" output.
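One way to ground such accountability, sketched under stated assumptions (the field names and hash-chaining scheme are illustrative, not a standard), is to bind each decision to the model version and input that produced it in a tamper-evident record:

```python
# Hedged sketch of a tamper-evident decision log for an MLS: each record
# binds the model version, a digest of the input, the output and a timestamp,
# and chains to the previous record's hash so the log cannot be silently
# rewritten after the fact.
import hashlib
import json
import time

def log_decision(prev_hash, model_id, input_bytes, output):
    """Build a decision record whose hash covers all fields, including the
    hash of the previous record, so the decision context can later be
    verified against the information available at the time."""
    record = {
        "time": time.time(),
        "model_id": model_id,
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
        "prev": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

For a dynamic MLS that is retrained in operation, the `model_id` field would need to identify the exact model state, which is itself a documentation challenge this RTG would address.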
• Determine the state of the art in robustness and accountability for machine learning systems, especially deep learning systems with models so large and complex that they are virtually impossible for humans to manage.
• Examine whether a methodology can be devised to verify that commercial MLS, e.g. cloud-based MLS, comply with a set of criteria, including which criteria are feasible and which should be mandatory in a military setting.