Funded Research

An Advanced Learning Framework for High Dimensional Multi-Sensor Remote Sensing Data

Seablom, Michael: NASA Headquarters (Project Lead)

Project Funding: 2012 - 2015

NRA: 2011 NASA Advanced Information Systems Technology

Funded by NASA

Abstract:
Advances in optical remote sensing technology over the past two decades have enabled a dramatic increase in the spatial, spectral, and temporal data now available to support Earth science research and applications, necessitating equivalent developments in signal processing and data exploitation algorithms. Although this increase in the quality and quantity of diverse multi-source data can potentially facilitate improved understanding of fundamental scientific questions, the conventional data processing and analysis algorithms currently available to scientists are designed for single-sensor, low dimensional data. These methods cannot fully exploit existing and future Earth observation data. The capability to extract information from high dimensional data sets acquired by hyperspectral sensors, from textural features derived from multispectral and polarimetric SAR data, and from the vertical structure represented in full waveform LIDAR is particularly lacking. New algorithms are specifically needed to analyze existing data from NASA airborne instruments (e.g., AVIRIS, LVIS, and G-LiHT) and the EO-1 Hyperion and ALI instruments, as well as future data acquired by the upcoming Landsat Data Continuity Mission, ICESat-2, and HyspIRI. New paradigms are also required to effectively extract useful information from the high dimensional feature space resulting from multi-source, multi-sensor, and multi-temporal data.

Classification of remotely sensed data is an integral part of the analysis stream for land cover mapping in diverse applications. High dimensional data provide the capability to significantly improve classification results, particularly in environments with complex spectral signatures that may overlap, or where textural features provide discriminative information. In this project, a new classification framework will be developed and implemented for robust analysis of high dimensional, multi-sensor/multi-source data under small training sample size conditions (limited in-situ/ground reference data). The framework will employ a multi-kernel Support Vector Machine (SVM), ensemble classification, and decision fusion to effectively exploit a diverse collection of potentially disparate features derived either from the same sensor (e.g., spatial-spectral analysis tasks) or from different sensors (e.g., LIDAR and hyperspectral data). The system will significantly improve the reliability of mapping remotely sensed data, particularly in scenarios with very little reference data to train the classification model. An active learning (AL) component will be integrated into the multi-source/multi-sensor environment to mitigate the impact of limited training data, effectively closing the loop between image analysis and field collection to acquire the most informative samples for the classification task. (Illustrative sketches of these components follow the abstract.)

The proposed project will have an entry level of TRL-2. During Year 1, the team will incorporate the multi-kernel SVM into the classification system for hyperspectral and multi-source high dimensional data; multi-view active learning methods previously developed by the PI will also be implemented in the multi-source environment. Efforts in Year 2 will focus on extending the spatial-spectral feature extraction capability, incorporating multi-sensor classification via the multi-kernel SVM ensemble model, and integrating active learning in the multi-sensor environment. Application of the methods will be initiated using a test-bed of multispectral, hyperspectral, and LIDAR data. In Year 3, integration of the framework will be completed, and extensive testing and validation will be conducted across multiple relevant application scenarios. The prototype system will be implemented on the Purdue HUB computational platform to deliver a TRL-4 system at the end of the study.
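To make the composite-kernel idea concrete, the following is a minimal sketch (not the project's actual implementation) of a multi-kernel SVM using scikit-learn: one RBF kernel per feature block (e.g., spectral bands versus LIDAR-derived features), combined as a weighted sum and supplied to the SVM as a precomputed kernel. All data shapes, block boundaries, weights, and bandwidths below are illustrative placeholders.

```python
# Minimal composite (multi-kernel) SVM sketch; shapes and settings are
# hypothetical assumptions, not the project's actual design.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(Xa, Xb, blocks, weights, gammas):
    """Weighted sum of RBF kernels, one per feature block.

    blocks  -- (start, stop) column slices, e.g. spectral vs. LIDAR features
    weights -- combination weight per block
    gammas  -- RBF bandwidth per block
    """
    K = np.zeros((Xa.shape[0], Xb.shape[0]))
    for (lo, hi), w, g in zip(blocks, weights, gammas):
        K += w * rbf_kernel(Xa[:, lo:hi], Xb[:, lo:hi], gamma=g)
    return K

# Toy data: 200 spectral bands plus 3 LIDAR-derived features per pixel.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 203))        # small training sample
y_train = rng.integers(0, 4, size=60)       # 4 land-cover classes
X_test = rng.normal(size=(25, 203))

blocks = [(0, 200), (200, 203)]             # spectral block, LIDAR block
weights, gammas = [0.7, 0.3], [0.005, 0.5]  # hypothetical settings

clf = SVC(kernel="precomputed")
clf.fit(composite_kernel(X_train, X_train, blocks, weights, gammas), y_train)
pred = clf.predict(composite_kernel(X_test, X_train, blocks, weights, gammas))
```

In practice the block weights would be tuned or learned rather than fixed; the point of the sketch is only that heterogeneous feature sets can each receive their own kernel before a single SVM decision.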
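Decision fusion across sensors can likewise be illustrated with a simple weighted soft-voting scheme: train one classifier per sensor and combine their posterior probabilities. The per-sensor weights and the train/test split below are arbitrary assumptions, and a weighted soft vote is only one of many possible fusion rules.

```python
# Sketch of decision fusion: one SVM per sensor, posteriors combined by a
# weighted soft vote. Data, weights, and split are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_hsi = rng.normal(size=(80, 50))    # hyperspectral features per pixel
X_lidar = rng.normal(size=(80, 4))   # LIDAR-derived features per pixel
y = rng.integers(0, 3, size=80)      # 3 land-cover classes

clf_hsi = SVC(probability=True).fit(X_hsi[:60], y[:60])
clf_lidar = SVC(probability=True).fit(X_lidar[:60], y[:60])

# Fuse class posteriors with hypothetical per-sensor weights.
fused = 0.6 * clf_hsi.predict_proba(X_hsi[60:]) \
      + 0.4 * clf_lidar.predict_proba(X_lidar[60:])
pred = fused.argmax(axis=1)
```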
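Finally, the active learning loop can be sketched with generic pool-based margin (uncertainty) sampling, which stands in here for the PI's multi-view methods: train on the small labeled set, score the unlabeled pool, and query the sample the current model is least certain about, simulating the loop between image analysis and field collection. The data and query budget are simulated placeholders.

```python
# Pool-based active learning sketch with margin sampling: query the pool
# sample closest to the decision boundary. Generic illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(500, 10))          # unlabeled candidate pixels
y_pool = (X_pool[:, 0] > 0).astype(int)      # simulated field-collected labels

# Seed the labeled set with one sample per class plus a few random picks.
labeled = [int(np.argmax(y_pool == 0)), int(np.argmax(y_pool == 1))]
labeled += list(rng.choice(len(X_pool), size=8, replace=False))

for _ in range(5):                           # five query rounds
    clf = SVC(kernel="rbf").fit(X_pool[labeled], y_pool[labeled])
    uncertainty = np.abs(clf.decision_function(X_pool))  # distance to boundary
    uncertainty[labeled] = np.inf            # never re-query labeled samples
    query = int(np.argmin(uncertainty))      # most informative sample
    labeled.append(query)                    # "field collection" supplies its label
```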

