Multi-sensor Fusion Architecture for the Detection of Person-borne Improvised Explosive Devices (PB-IEDs)
Systems Concepts and Integration
CIED, Compound Security, Detection, IED, Improvised Explosive Device, PB-IED, Person-borne, Sensor Fusion
The use of Person-borne Improvised Explosive Devices (PB-IEDs) by terrorists is one of the key factors that makes the asymmetric warfare model so difficult to defeat. PB-IEDs can be easily fabricated and concealed on the human body, and they allow the perpetrator to adopt different methods of attack against targets of opportunity. These types of attacks are increasing not only in areas of conflict, but also in urban areas far away from the war zones. Therefore, governments worldwide are actively seeking countermeasures to be better prepared against current and future threats.
A first stage in countering PB-IEDs is the detection of the threat. Detection of PB-IEDs can be carried out with various techniques and technologies and at various levels of operation. Aviation security is an example of multi-level, multi-sensor detection of an IED threat. Military forces are also tasked to protect sensitive locations using similar technologies, and compound protection and event protection are common counter-PB-IED operations.
The use of multiple sensors for countering PB-IEDs is important in order to increase the probability of detection and reduce false alarm rates; furthermore, no single sensor can detect all explosive threats. Usually each sensor uses its own decision algorithm, and the outputs are combined either in parallel or in series in a post-detection sensor fusion step in order to improve detection performance. This type of sensor fusion is relatively easy to employ, since the sensors can be treated as black boxes.
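The effect of combining hard sensor decisions in parallel or in series can be sketched with a simple model. The following is an illustrative sketch only, assuming statistically independent sensors; the per-sensor detection and false alarm probabilities are assumed figures, not measured values.

```python
# Sketch of post-detection (decision-level) fusion of independent
# black-box sensors, each producing a binary alarm.
from math import prod

def or_fusion(pds, pfas):
    """Parallel (OR) rule: the system alarms if ANY sensor alarms.
    Raises the probability of detection, but also the false alarm rate."""
    pd = 1 - prod(1 - p for p in pds)
    pfa = 1 - prod(1 - p for p in pfas)
    return pd, pfa

def and_fusion(pds, pfas):
    """Serial (AND) rule: the system alarms only if ALL sensors alarm.
    Lowers the false alarm rate at the cost of detection probability."""
    return prod(pds), prod(pfas)

# Illustrative single-sensor figures (assumptions, not trial results):
pds, pfas = [0.90, 0.85], [0.05, 0.10]
print(or_fusion(pds, pfas))   # higher Pd and higher Pfa than either sensor
print(and_fusion(pds, pfas))  # lower Pfa and lower Pd than either sensor
```

The two rules illustrate the basic trade-off that motivates combining sensors at all: the fusion designer moves along the detection/false-alarm curve without modifying the individual sensors.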
Further improvements can be achieved if multi-sensor fusion techniques are applied before detection by individual sensors and this can mitigate the limitations associated with the loss of information through hard decisions at sensor level. However, this is only possible if the raw and pre-processed data produced by each sensor is made available to the fusion node.
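As a rough illustration of why fusing before individual detection can help, the sketch below pools per-sensor evidence (modelled here as log-likelihood ratios) at the fusion node and applies a single threshold, and compares this with a hard-decision k-out-of-n baseline. The sensor model, thresholds and numbers are assumptions for illustration only, not the RTG's method.

```python
# Sketch of pre-detection fusion: rather than thresholding each sensor
# separately (losing information in hard decisions), the pre-processed
# evidence of all sensors is summed and thresholded once.

def fuse_llr(llrs, threshold=0.0):
    """Sum per-sensor log-likelihood ratios (valid if sensor noise is
    independent given the true state) and apply one global threshold."""
    return sum(llrs) >= threshold

def hard_decision_fusion(llrs, threshold=0.0, k=2):
    """Post-detection baseline: threshold each sensor first, then
    require at least k alarms (k-out-of-n rule)."""
    alarms = sum(1 for llr in llrs if llr >= threshold)
    return alarms >= k

# Only one sensor alone crosses its threshold, but the pooled evidence
# is still positive:
llrs = [0.4, -0.1, -0.1]
print(fuse_llr(llrs))              # True: pooled evidence 0.2 >= 0
print(hard_decision_fusion(llrs))  # False: only 1 of 3 hard alarms
```

The example shows the mechanism described above: weak evidence that each sensor would discard individually can still contribute to a detection when the raw or pre-processed data reaches the fusion node.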
Several national research campaigns have focused on the detection of PB-IEDs in various military scenarios. Tests have been carried out in collaboration with industry and research institutes, and sensor fusion techniques were applied to the output data (post-detection). There are only a few cases of pre-detection sensor fusion techniques carried out in laboratory environments, at a low system-architecture TRL.
The main goal of the RTG is to define the steps for building novel sensor fusion architectures whose detection capability is greater than the sum of the individual sensors. The architectures will use sensor fusion techniques at different levels, ideally at the pre-detection level. The expected TRL of the sensor-fusion system architecture is 5.
The architectures will be designed in several stages, combining technical activities with one or two demonstrations: the first demonstration will aim at collecting data from multiple sensors and using it to design the architecture; the second demonstration (if possible a CDT) will aim at assessing the capabilities of the system architecture. The knowledge and the algorithms developed during the RTG will be shared amongst the partners.
The RTG will focus on the development of a sensor fusion architecture based on standardized input data. It is assumed that data alignment has already been carried out. The main activities of the RTG, divided per year, will be:
Year 1
1. Definition of realistic military scenarios and operational requirements;
2. Selection of a short list of detection technologies based on a combination of requirements, availability and peer-reviewed performance;
3. Scoping tests in a controlled environment, carried out according to standardized test procedures;
4. Definition of a test plan for data collection based on the scenarios, the available detection technologies and the test facility.
Year 2
1. Review of the test plan for data collection;
2. Execution of the data collection to:
a. Assess the performance of the individual sensors;
b. Collect data in preparation for sensor fusion.
3. Process data for sensor fusion in a static scenario:
a. Define the sensor fusion software architecture;
b. Develop algorithms for sensor fusion;
c. Assess detection performance.
2nd half of Year 3
1. Optional test/CDT to demonstrate the sensor fusion concept;
2. Assess the feasibility and implications of extending the sensor fusion concept to a dynamic scenario (e.g. base protection). The main requirement is the need for synchronised sensors and processing of the data in a common field of view;
3. Final report and TAP for a follow-on project on a sensor fusion architecture applied to a dynamic scenario.
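The synchronisation requirement noted for the dynamic scenario (common time base and common field of view) can be illustrated with a minimal, hypothetical association step. The clock offset, time window and coordinates below are illustrative assumptions, not system parameters.

```python
# Sketch: detections from independently clocked sensors are mapped onto
# a common time base and associated only if they agree in time and fall
# within a shared field of view (1-D position for simplicity).

def associate(dets_a, dets_b, offset_b=0.25, t_window=0.5, max_dist=2.0):
    """Pair (t, x) detections from sensor A and sensor B that agree in
    time, after correcting B's known clock offset, and in position."""
    pairs = []
    for ta, xa in dets_a:
        for tb, xb in dets_b:
            tb_common = tb - offset_b  # map B's timestamp onto A's clock
            if abs(ta - tb_common) <= t_window and abs(xa - xb) <= max_dist:
                pairs.append(((ta, xa), (tb, xb)))
    return pairs

# Sensor B's clock runs 0.25 s ahead; both sensors see the same target,
# and B also produces one unrelated detection much later:
dets_a = [(10.0, 5.0)]
dets_b = [(10.2, 5.5), (40.0, 5.0)]
print(associate(dets_a, dets_b))  # one associated pair; the late B hit is dropped
```

Without the clock correction and the shared-field-of-view constraint, detections of the same moving person could not be attributed to a single track, which is why synchronisation is singled out as the main requirement for the follow-on work.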