
Activity title

Machine Learning for Wide Area Surveillance

Activity Reference



Sensors & Electronics Technology

Security Classification




Activity type


Start date


End date



Keywords

ABI, GMTI, Machine Learning, Maritime Radar, WAMI, Wide Area Surveillance


A key building block underpinning intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) is the provision of a wide area surveillance (WAS) capability. A fundamental sensor technology for achieving WAS remains radar, which has traditionally employed real aperture radar (RAR) scanning modes to rapidly survey large areas. More recently, the WAS capability has been supplemented and greatly enhanced by the development of wide area motion imagery (WAMI) based on electro-optical (EO) sensors, which provide a detection-based surveillance capability across regions of tens of square kilometers. Despite these developments in sensor technology, the continued evolution of the problem space raises significant new challenges to the achievement of robust WAS. For example, the migration of sensors to high-altitude platforms, such as Remotely Piloted Aircraft Systems (RPAS), leads to greatly increased surface clutter interference, resulting in severe degradation of radar performance. Furthermore, the increased importance of surveillance of urban areas, coupled with the desire to detect and track small maneuverable targets in support of activity based intelligence (ABI), results in an extremely challenging detection and tracking problem. Legacy detection and tracking approaches tend to perform poorly in these new clutter environments. This failure is strongly linked to the inability to accurately describe and model the statistical processes associated with the clutter and target signatures, which have been observed to be complex nonlinear functions of time, space, environment and target class. This is the type of challenge for which machine learning (ML) has been shown to be highly applicable in other fields.
While the transfer of techniques from civilian applications, such as object recognition and speech recognition, has been investigated for military problems such as synthetic aperture radar (SAR) image analysis, there has been little investigation of the application of ML techniques to the WAS problem.
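For a concrete reference point: the legacy radar detection approaches mentioned above typically take the form of a constant false alarm rate (CFAR) scheme, which assumes the local clutter statistics are well modelled (e.g. exponentially distributed noise power), precisely the assumption that breaks down in complex clutter. Below is a minimal cell-averaging CFAR sketch; the function name and parameter values are illustrative, not taken from the source:

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=4, pfa=1e-4):
    """Cell-averaging CFAR: flag cells whose power exceeds an adaptive
    threshold estimated from surrounding training cells. Assumes the
    clutter power in each cell is exponentially distributed."""
    n = len(power)
    # Threshold multiplier giving the desired false alarm rate under
    # the exponential-noise assumption.
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        # Training cells on both sides of the cell under test,
        # excluding the guard cells adjacent to it.
        lead = power[i - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > alpha * noise
    return detections
```

When the clutter is not exponentially distributed, or its statistics vary nonlinearly over time and space as described above, the fixed threshold multiplier is no longer matched to the data, which is where learned detectors become attractive.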


The RTG will accomplish the following objectives:
1) Collect and identify common data sets to support the development and comparison of approaches across the multinational effort, and develop truthing strategies and tools to support the labelling and annotation of those data sets.
2) Develop ML performance metrics for detection, tracking and classification.
3) Identify unique features of the collection environment and strategies to exploit them via ML.
4) Baseline the performance of machine learning against traditional approaches.
5) Leverage lessons learned from higher-TRL WAMI ML processing to identify the most suitable ML algorithms for radar applications.
6) Develop, modify and improve ML algorithms to address identified challenges, including but not limited to:
   a. Automated track repair.
   b. Use of external semantic data (e.g. road layouts, building data, weather, etc.).
   c. Improved static detectors.
   d. Integrated ML detection and tracking.
   e. Learning track semantics.
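Objective 2 calls for ML performance metrics for detection. One common starting point, sketched below under stated assumptions, is precision and recall for point detections matched to ground truth within a fixed distance gate; the gate value, function name, and greedy matching strategy are illustrative choices, not specified by the source:

```python
import numpy as np

def detection_metrics(truth, detections, gate=5.0):
    """Precision and recall for point detections against ground-truth
    positions, using greedy nearest-neighbour matching within a gate."""
    truth = [np.asarray(t, float) for t in truth]
    dets = [np.asarray(d, float) for d in detections]
    # All candidate (distance, truth index, detection index) pairs,
    # cheapest first, so the greedy pass matches closest pairs first.
    pairs = sorted(
        (np.linalg.norm(t - d), i, j)
        for i, t in enumerate(truth)
        for j, d in enumerate(dets)
    )
    matched_truth, matched_det = set(), set()
    tp = 0
    for dist, i, j in pairs:
        if dist > gate:
            break  # remaining pairs are all outside the gate
        if i in matched_truth or j in matched_det:
            continue  # one-to-one matching only
        matched_truth.add(i)
        matched_det.add(j)
        tp += 1
    precision = tp / len(dets) if dets else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall
```

Tracking and classification metrics would sit on top of this kind of per-frame matching, accumulating identity switches and confusion counts over time.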


- Supervised versus unsupervised machine learning for Wide Area Motion Imagery (WAMI) and Real Aperture Radar (RAR)
- Machine learning for object detection, tracking and classification
- Benchmarking ML approaches for real-time application, e.g., processing load
- Semantic segmentation of the surveillance space
- Feature space definition and extraction
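One of the tracking challenges named in the objectives, automated track repair, can be sketched as greedy stitching of broken tracklets across short gaps in time and space. The gap and distance thresholds below are illustrative assumptions, not values from the source:

```python
import math

def stitch_tracklets(tracklets, max_gap=3, max_dist=10.0):
    """Greedy track repair: append one tracklet to another when the
    time gap and spatial jump between the end of one and the start of
    the other are small. Each tracklet is a time-sorted list of
    (t, x, y) tuples."""
    tracklets = sorted(tracklets, key=lambda tr: tr[0][0])
    used = [False] * len(tracklets)
    merged = []
    for i, tr in enumerate(tracklets):
        if used[i]:
            continue
        chain = list(tr)
        used[i] = True
        extended = True
        while extended:
            extended = False
            t_end, x_end, y_end = chain[-1]
            for j, cand in enumerate(tracklets):
                if used[j]:
                    continue
                t0, x0, y0 = cand[0]
                gap = t0 - t_end
                dist = math.hypot(x0 - x_end, y0 - y_end)
                if 0 < gap <= max_gap and dist <= max_dist:
                    chain.extend(cand)  # link candidate onto the chain
                    used[j] = True
                    extended = True
                    break
        merged.append(chain)
    return merged
```

An ML formulation would replace the fixed gap and distance gates with a learned affinity between tracklet pairs, trained on labelled continuations.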
