Please use this identifier to cite or link to this item: http://hdl.handle.net/1946/25244
The development of the latest generation of optical sensors mounted on board spaceborne and airborne Earth observation platforms has led to an increasing volume, acquisition speed, and variety of sensed images. These have been classified as big remote sensing (RS) data owing to the volume reached by the archived data (on the petabyte scale), the challenging velocity arising from continuous acquisition at an increasing rate (short revisit times), and the wide variety of available high-resolution optical sensors. RS images can be employed in interdisciplinary applications addressing specific topics such as global and local climate change studies, ecological and environmental monitoring, and urban planning. However, their information content depends on various factors, e.g., the sensor resolution (spectral, spatial, radiometric), equipment unreliability, and the type and amount of noise. The interpretation of RS images is therefore not straightforward, and it requires a powerful yet highly accurate processing scheme in order to extract reliable and valuable information. To address the challenges of big RS data, this thesis focuses on automatic, scalable, and parallel processing methods within the presented RS classification scheme. This is achieved on the basis of two core thesis objectives: developing an automatic modeling of the spatial information and providing a scalable classification algorithm.

The first objective is to identify an effective method for exploiting the spatial information contained in RS images. Very High Resolution (VHR) images with sub-metric resolution allow an accurate analysis of the geometrical features of the objects present in the scene under study. However, the intrinsic mixture of land covers in natural landscapes and the overwhelming amount of detail present in urban areas make the analysis both particularly complex and demanding.
One of the most promising strategies for the analysis and interpretation of a scene is the use of region-based hierarchical representations from the Mathematical Morphology (MM) framework. This strategy relies on the Tree of Shapes (ToS), a structure well adapted to high-level image processing: it is invariant to contrast changes and it describes how objects are nested within each other. Attribute Filters (AFs), which are connected operators (COs), have been efficiently implemented on the ToS in multilevel architectures in order to compute Self-Dual Attribute Profiles (SDAPs). The thesis offers effective strategies for generating SDAPs that capture the most discriminant features with respect to the classification problem, such as structures with heterogeneous characteristics (e.g., scale and shape). These SDAPs can be defined by different combinations of attributes (increasing or non-increasing) and filter rules (pruning or non-pruning). Their performance is studied in the context of the classification of multispectral datasets. A solution is proposed to the unresolved issue of selecting the filter parameters for the SDAPs, so as to compute profiles that are both representative (i.e., they contain the salient structures in the image) and non-redundant (i.e., objects appear in only one or a few levels of the profile). A novel strategy for the automatic selection of the thresholds is proposed to tackle this issue. SDAPs have already proven to be more effective than Attribute Profiles (APs), since they process bright and dark regions simultaneously. In order to maximize the potential of the SDAPs, Extended Self-Dual Attribute Profiles (ESDAPs), a generalization of SDAPs, are proposed for the analysis of hyperspectral images.

The second objective is to take advantage of emerging parallel computing architectures for accelerating challenging classification problems.
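As a toy illustration of the attribute-filtering idea behind such profiles (not the thesis implementation, which operates on the Tree of Shapes of grayscale images), the sketch below stacks binary area openings at several area thresholds into a small profile. All names, thresholds, and the scipy-based labeling are illustrative assumptions, not the author's code.

```python
import numpy as np
from scipy import ndimage


def binary_area_opening(img, min_area):
    """A simple increasing attribute filter on a binary image:
    remove connected components whose area is below min_area."""
    labels, _ = ndimage.label(img)
    sizes = np.bincount(labels.ravel())  # index 0 is the background
    keep = sizes >= min_area
    keep[0] = False  # never keep the background
    return keep[labels]


def area_profile(img, thresholds):
    """Stack the filtered images: a crude analogue of an attribute
    profile, with one level per area threshold."""
    return np.stack([binary_area_opening(img, t) for t in thresholds])


# Toy scene: one large and one small bright object.
img = np.zeros((8, 8), dtype=bool)
img[1:5, 1:5] = True   # area 16
img[6, 6] = True       # area 1

profile = area_profile(img, thresholds=[2, 20])
# The small object is removed at every level; the large one
# survives only the first (area >= 2) level.
```

A profile such as this encodes, per pixel, at which filtering scales the enclosing structure disappears; the self-dual filters of the thesis additionally treat bright and dark structures symmetrically.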
Traditional serial classifier implementations present several limitations when high-dimensional RS datasets are considered. This dimensionality depends on the number of features and the number of samples. The features can be an intrinsic dimension of the data (e.g., the bands of hyperspectral images) and/or the result of a particular processing analysis (e.g., spatial enhancement). The sample dimension can grow with the frequency and/or the spatial extent of the acquisitions. As a result, the classification process becomes more complicated and cumbersome, requiring considerable processing power and data storage capability. Among the widely used RS classifiers, Support Vector Machines (SVMs) have often been found to be more effective in terms of classification accuracy and stability of the parameter settings. However, SVMs can be very demanding with respect to processing time, e.g., when tuning the hyperplane parameters with cross-validation. In order to find the best solution to these issues, a survey of SVM parallelization approaches is presented.
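One reason SVM parameter tuning parallelizes well is that each point of the hyperparameter grid can be cross-validated independently. The sketch below illustrates this grid-level parallelism with a thread pool, assuming scikit-learn is available; the dataset, grid values, and pool size are illustrative, and the thesis surveys dedicated parallel SVM implementations rather than this simple scheme.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Small synthetic stand-in for an RS feature matrix.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)


def cv_score(params):
    """5-fold cross-validation score for one (C, gamma) pair."""
    C, gamma = params
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return params, cross_val_score(clf, X, y, cv=5).mean()


grid = list(product([0.1, 1.0, 10.0], [0.01, 0.1]))

# Each grid point is independent, so the search is embarrassingly parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(cv_score, grid))

best = max(results, key=results.get)
```

The same decomposition applies to the folds within each grid point, which is why cross-validated tuning is a natural first target when accelerating SVM-based RS classification.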