|Integrating Compressive Sensing and Machine Learning Techniques for Radar Applications|
|Sensors & Electronics Technology|
Adaptive Signal Processing, Compressive Sensing, Deep Learning, Dictionary Learning, Machine Learning, Optimization, Radar, Sparse Signal Representation, Target Classification
In the field of radar there is a growing need for higher resolution to enable the detection of small targets against a complex background, to improve the tracking of multiple closely spaced targets, and to support automatic target recognition. Recently, Compressive Sensing (CS) has emerged as a technique that achieves higher resolution than conventional methods by combining random sampling with sparse signal reconstruction algorithms. In the past decade, substantial theoretical progress has been made in CS, and simulations have demonstrated the potential of CS methods to improve radar performance. However, the adoption of CS techniques in operational systems still lags behind the theoretical and algorithmic advances, owing to practical constraints such as latency, memory size, and power consumption, dictated in particular by the iterative nature of the algorithms used in CS.
In parallel, Machine Learning (ML) has in recent years achieved tremendous success in many commercial applications, such as automatic face recognition, speech recognition, natural language processing, autonomous vehicles, and robotics. An important element in this success is the availability of large databases for training. Machine Learning has the potential to provide computationally efficient approaches that improve target detection, tracking, and classification in radar with enhanced resolution. However, ML techniques, and in particular deep neural networks, are often poorly understood from a theoretical perspective due to their black-box nature.
An integration of Compressive Sensing and Machine Learning for radar applications offers the potential to combine the benefits of both worlds. For example, the iterations of many CS algorithms for sparse signal recovery have the structure of neural network layers. Computationally efficient ML models such as deep neural networks can therefore replace the expensive iterations with fixed-depth feedforward networks learned from data. Furthermore, the learning process permits better dictionaries for sparse signal representation to be extracted from training data. Conversely, CS-based generative models of target and clutter could be used to produce the vast training sets required by ML algorithms, filling gaps in measured data by means of online predictions from a “compressed” database and thereby improving generalization. This CS-assisted training strategy would significantly widen the scope of problems to which ML can be successfully applied, since in the military domain the availability of large measured radar training data sets cannot always be guaranteed. In addition, for the undersampled and interrupted datasets encountered in multi-function radar systems, CS techniques can serve as preprocessors feeding ML architectures that cannot themselves effectively impute the missing data.
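The correspondence between CS iterations and network layers can be illustrated with a minimal sketch (all problem sizes and variable names here are illustrative, not part of the RTG description). Each ISTA iteration for sparse recovery is an affine map followed by a soft-threshold nonlinearity, i.e., exactly one feedforward layer, so truncating to a fixed number of iterations yields a fixed-depth network whose matrices and thresholds could be learned from data (the idea behind learned ISTA, "LISTA"):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse recovery problem: y = A x, with x sparse.
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def soft(z, t):
    """Soft-threshold: the 'activation function' of an ISTA layer."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# One ISTA iteration, x <- soft(x + (1/L) A^T (y - A x), lam/L),
# can be written as soft(W1 @ y + W2 @ x, theta): an affine map plus a
# fixed nonlinearity, i.e., a single feedforward layer.
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.05                             # sparsity-promoting regularization
W1 = A.T / L                           # in LISTA, W1, W2, theta would be
W2 = np.eye(n) - (A.T @ A) / L         # *learned* from training data
theta = lam / L

x = np.zeros(n)
for layer in range(200):               # unrolled depth (fixed in LISTA)
    x = soft(W1 @ y + W2 @ x, theta)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

With learned, layer-dependent weights, far fewer layers than iterations are typically needed, which is what makes unrolling attractive for latency-constrained radar hardware.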
The main objectives of the RTG are to:
• Assess the implementation, performance and robustness of integrated CS and ML architectures and algorithms for radar applications/scenarios;
• Identify and quantify the generalization capability of integrated CS and ML systems across a wide range of operational conditions;
• Create data repositories and algorithm libraries to enable learning techniques.
The results of the RTG will be described in a technical report to be delivered before the end of the RTG.
Themes of convergence in CS and ML techniques that will be addressed in this NATO group include:
• Classification of SAR images combining ML with CS recovery: integrating CS imaging algorithms with discriminative deep neural networks to learn imaging processes optimized for decision/classification tasks while remaining robust to clutter variation and occlusions;
• Learning of novel signal structures, e.g., sparsifying transforms, Generative Adversarial Networks (GANs), denoisers, and dictionaries that capture radar phenomenology, resulting in higher-quality reconstructions under different sampling regimes;
• Data augmentation using CS to support the training of ML/NN models;
• Waveform design and CS front-ends offering reduced NN schemes;
• Development of NN for signal recovery and parameter estimation: non-iterative signal reconstruction algorithms for radar obtained by unrolling classical iterative CS algorithms and learning fixed-depth architectures from data, suitable for GPU implementation;
• Data-driven design of radar sensors using NN: constrained design of adaptive waveforms, (sparse) array layouts, and receive filters with NN;
• Detection of targets in non-Gaussian clutter using NN: exploiting NN to jointly learn the (statistics of) non-Gaussian clutter and to design receive filters and detectors for targets embedded in such clutter.
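The data augmentation theme above can be made concrete with a small sketch of the generative idea (the point-scatterer scene model, partial-DFT sensing matrix, sizes, and the `synth_example` helper are all illustrative assumptions, not the group's specified method): a sparse scene model plus a compressive measurement operator yields an unlimited supply of labeled synthetic training examples to supplement scarce measured radar data.

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_example(n=128, m=48, n_scatterers=3, snr_db=20.0, rng=rng):
    """Generate one labeled training example from a sparse scene model.

    A target is modeled as a few complex point scatterers in a range
    profile (a sparse vector), observed through a compressive measurement
    matrix (here: random rows of a DFT, a common CS radar model), plus noise.
    """
    scene = np.zeros(n, dtype=complex)
    idx = rng.choice(n, n_scatterers, replace=False)
    scene[idx] = rng.standard_normal(n_scatterers) + 1j * rng.standard_normal(n_scatterers)

    F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # DFT sensing basis
    rows = rng.choice(n, m, replace=False)    # random undersampling
    A = F[rows]

    y = A @ scene
    noise = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    noise *= np.linalg.norm(y) / np.linalg.norm(noise) * 10 ** (-snr_db / 20)
    label = n_scatterers                      # e.g., class = scatterer count
    return y + noise, label

# Build a small augmented training set of compressive measurements.
X, labels = zip(*(synth_example(n_scatterers=rng.integers(1, 5)) for _ in range(100)))
X = np.stack(X)
print(X.shape, len(labels))
```

In practice the scene statistics (scatterer counts, amplitudes, clutter model) would be fitted to the available measured data, so that the synthetic examples fill gaps rather than introduce a distribution mismatch.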
Enabling technologies that play a crucial role in the use of NN in radar systems will be investigated across the applications listed above. Examples include incorporating complex-valued signals within NN architectures and analyzing which types of signal and data processing steps can be embedded effectively within the network (i.e., determining the inputs and outputs of the NN).
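One way to handle the complex-valued signals mentioned above can be sketched as follows (a minimal illustration, assuming a single linear layer; the function names are illustrative): a complex linear map is exactly equivalent to a real linear map on stacked real/imaginary parts, which is how complex radar data can be fed to NN toolkits that support only real arithmetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def complex_linear(x, W):
    """Complex-valued 'layer': y = W x with complex W and x."""
    return W @ x

def real_equivalent(x, W):
    """The same map using real arithmetic only: stack real/imaginary
    parts and apply the equivalent real block matrix
    [[Re W, -Im W], [Im W, Re W]]."""
    Wr = np.block([[W.real, -W.imag],
                   [W.imag,  W.real]])
    xr = np.concatenate([x.real, x.imag])
    yr = Wr @ xr
    n_out = W.shape[0]
    return yr[:n_out] + 1j * yr[n_out:]

W = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)

print(np.allclose(complex_linear(x, W), real_equivalent(x, W)))  # True
```

The equivalence breaks down at the nonlinearities, which is precisely where the design choices for complex-valued NN (split real/imaginary activations versus magnitude/phase activations) arise.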