Avoiding Overfitting When Applying Spectral-Spatial Deep Learning Methods on Hyperspectral Images with Limited Labels
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8900328
M. Molinier, J. Kilpi
Spatial-spectral approaches applied to hyperspectral images (HSI) with limited labels suffer from overfitting when the size of the input filters and the percentage of training data increase. In those cases, pixel values corresponding to the testing sets are partly or completely seen during the training phase, reducing the number of independent testing pixels and leading to overoptimistic accuracy assessment. These effects have been demonstrated in several previous works but still require attention. In this work we propose additional visualizations and measures of the overlapping and overfitting effects, demonstrated on common HSI datasets, to increase awareness of these issues.
{"title":"Avoiding Overfitting When Applying Spectral-Spatial Deep Learning Methods on Hyperspectral Images with Limited Labels","authors":"M. Molinier, J. Kilpi","doi":"10.1109/IGARSS.2019.8900328","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8900328","url":null,"abstract":"Spatial-spectral approaches applied on hyperspectral images (HSI) with limited labels suffer from overfitting when the size of input filters and the percentage of training data increases. In those cases, pixel values corresponding to testing sets are partly or completely seen during training phase, reducing the number independent testing pixels and leading to overoptimistic accuracy assessment. These effects have been demonstrated in several previous works but still require attention. In this work we propose additional visulizations and measures of the overlapping and overfitting effects, demonstrated on common HSI datasets, to increase awareness on these issues.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"125 1","pages":"5049-5052"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89268133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Burn Severity Estimation in Northern Australia Tropical Savannas Using Radiative Transfer Model and Sentinel-2 Data
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8899857
Changming Yin, B. He, M. Yebra, Xingwen Quan, A. Edwards, Xiangzhuo Liu, Zhanmang Liao, Kaiwei Luo
In this study, the burn severity of several wildfires ignited in the northern Australian tropical savanna area was estimated using the Forest Reflectance and Transmittance (FRT) radiative transfer model (RTM) and Sentinel-2A Multi-Spectral Instrument (MSI) satellite data. To alleviate the spectral confusion between severe (SV) and not-severe (NSV) burnt levels caused by sparse tree distribution, the MODIS Vegetation Continuous Fields (VCF) tree cover percentage data were used to constrain the inversion. The results showed that the accuracy of burn severity estimation improves significantly when tree coverage is considered, with the overall accuracy for the two study sites increasing from 65% to 81% and the kappa coefficient from 0.35 to 0.55. Future work will focus on extending the methodology to other ecosystems.
{"title":"Burn Severity Estimation in Northern Australia Tropical Savannas Using Radiative Transfer Model and Sentinel-2 Data","authors":"Changming Yin, B. He, M. Yebra, Xingwen Quan, A. Edwards, Xiangzhuo Liu, Zhanmang Liao, Kaiwei Luo","doi":"10.1109/IGARSS.2019.8899857","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8899857","url":null,"abstract":"In this study, the burn severity of several wildfires ignited at northern Australian tropical savannas area were estimated using the Forest Reflectance and Transmittance (FRT) radiative transfer model (RTM) and Sentinel-2A Multi-Spectral Instrument (MSI) satellite data. To alleviate the spectral confusion between severe (SV) and not-severe (NSV) burnt levels caused by sparse tree distribution, the MODIS Vegetation Continuous Fields (VCF) tree cover percentage data was used to constrain the inversion. The results showed that the accuracy of burn severity estimation significantly improves when considering the tree coverage, with overall accuracy for two study sites increasing from 65% to 81% and kappa coefficient from 0.35 to 0.55. Future work will focus on extending the methodology to other ecosystems.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"113 1","pages":"6712-6715"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80631611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Polarimetric Variability by Distance Geometry for Enhanced Classification of Oil Slicks Using SAR
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8899247
A. Marinoni, M. M. Espeseth, P. Gamba, C. Brekke, T. Eltoft
In this paper, we introduce a new approach for the investigation of polarimetric Synthetic Aperture Radar (PolSAR) images for oil slick analysis. Our method aims at enhancing the discrimination of oil types by exploring the polarimetric features that can be produced by processing PolSAR scenes without dimensionality reduction. Taking advantage of a mixture description of the interactions among classes within the dataset and a characterization of their intra- and inter-class variability, our algorithm is able to quantify the areal coverage of different elements. These estimates can hence be used to improve classification. Experimental results on a PolSAR dataset acquired by an unmanned aerial vehicle (UAV) over oil slicks in open water show the capability of our method.
{"title":"Assessment of Polarimetric Variability by Distance Geometry for Enhanced Classification of Oil Slicks Using SAR","authors":"A. Marinoni, M. M. Espeseth, P. Gamba, C. Brekke, T. Eltoft","doi":"10.1109/IGARSS.2019.8899247","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8899247","url":null,"abstract":"In this paper, we introduce a new approach for investigation of polarimetric Synthetic Aperture Radar (PolSAR) images for oil slick analysis. Our method aims at enhancing discrimination of oil types by exploring the polarimetric features that can be produced by processing PolSAR scenes without dimensionality reduction. Taking advantage of a mixture description of the interactions among classes within the dataset and a characterization of their intra- and inter-class variability, our algorithm is able to quantify the areal coverage of different elements. These estimates can be used to hence improve classification. Experimental results on a PolSAR dataset acquired by unmanned aerial vehicle (UAV) on oil slicks in open water show the capacity of our method.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"18 1","pages":"5217-5220"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82013185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UAV-Based Polarimetric Synthetic Aperture Radar for Mine Detection
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8900030
Ralf Burr, Markus Schartel, W. Mayer, T. Walter, C. Waldschmidt
In this contribution, a polarimetric side-looking synthetic aperture radar (SAR) mounted on an unmanned aerial vehicle (UAV) is presented and discussed with respect to the detection and localization of landmines. As an example of an anti-personnel mine, a PFM-1, which contains an elongated aluminium rod, was considered. Such anisotropic geometries exhibit a polarization-dependent radar cross-section (RCS). Through a special configuration of three antennas, polarimetric SAR measurements involving a back-projection algorithm could be implemented. This concept allows for the detection and, furthermore, the classification of such anisotropic objects. First field tests, using a tachymeter for localization of the UAV over a snow-covered meadow, successfully demonstrated the performance by detecting small metal rods depending on their orientation with respect to the flight path of the UAV. These experimental results were supported by simulations showing the necessity of polarimetric measurements in combination with a distinct flight trajectory for the robust detection of certain landmines.
{"title":"Uav-Based Polarimetric Synthetic Aperture Radar for Mine Detection","authors":"Ralf Burr, Markus Schartel, W. Mayer, T. Walter, C. Waldschmidt","doi":"10.1109/IGARSS.2019.8900030","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8900030","url":null,"abstract":"In this contribution a polarimetric side-looking synthetic aperture radar (SAR) mounted on a unmanned aerial vehicle (UAV) is presented and discussed with respect to the detection and localization of landmines. As an example for an anti-personal mine a PFM-1 which contains an elongated aluminium rod was considered. Such anisotropic geometries exibit a polarization dependend radar cross section (RCS). Through a special configuration of three antennas, polarimetric SAR measurements involving a back-projection algorithm could be implemented. This concept allows for the detection and furthermore the classification of such anisotropic objects. First field tests using a tachymeter for localization of the UAV over a snow covered meadow successfully demonstrated the performance by the detection of small metal rods depending on their orientation with respect to the flight path of the UAV. These experimental results were supported by simulations expressing the necessity of polarimetric measurements in combination with a distinct flight trajectory for a robust detection of certain landmines.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"24 1","pages":"9208-9211"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81709842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Photogrammetric Techniques and UAV for Drainage Pattern and Overflow Assessment in Mountainous Terrains - Hatta/UAE
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8898151
S. Al-Mansoori, R. Al-Ruzouq, Diena Al Dogom, Meera Al Shamsi, Alya Al Mazzm, N. Aburaed
Accurate and precise spatial hydrologic information is essential for effective management of natural resources, planning, and disaster response. Very high-resolution images and precise digital elevation models (DEMs) are crucial to accurately predict overflow in urban and mountainous regions; however, the available coarse-resolution DEMs lack the detail needed to provide reliable overflow models. In this context, unmanned aerial vehicles (UAVs) offer a competitive alternative to satellites or airplanes and provide the high spatial detail essential for a significant improvement of hydrological modeling. In this study, stereo images captured by a fixed-wing drone were photogrammetrically processed to generate a high-resolution DEM for the area surrounding the Hatta Dam in the United Arab Emirates. The workflow comprised three stages: data collection, photogrammetric processing, and hydrologic modeling. This study determined that flow modeling based on the UAV DEMs resulted in accurate hydrological modeling.
{"title":"Photogrammetric Techniques and UAV for Drainage Pattern and Overflow Assessment in Mountainous Terrains - Hatta/UAE","authors":"S. Al-Mansoori, R. Al-Ruzouq, Diena Al Dogom, Meera Al Shamsi, Alya Al Mazzm, N. Aburaed","doi":"10.1109/IGARSS.2019.8898151","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8898151","url":null,"abstract":"Accurate and precise spatial hydrologic information is essential for effective management of natural resources, planning, and disaster response. Very high-resolution images and precise digital elevation models (DEMs) are crucial to accurately predict overflow in urban and mountainous regions; however, available course resolution DEMs with insufficient details cannot provide reliable overflow models. In this context, unmanned aerial vehicles (UAVs) offer a competitive alternative over satellites or airplanes and provide high spatial details essential for significant improvement of hydrological modeling. In this study, photogrammetric processing that includes stereo images captured via a fixed-wing drone were processed to generate a high-resolution DEM for the area surrounding the Hatta Dam in the United Arab Emirates. Three levels of details were introduced: data collection, photogrammetric processing, and hydrologic modeling. This study determined that flow modeling based on the UAV DEMs resulted in accurate hydrological modeling.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"82 1","pages":"951-954"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89009839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Question Answering From Remote Sensing Images
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8898891
Sylvain Lobry, J. Murray, Diego Marcos, D. Tuia
Remote sensing images carry vast amounts of information beyond land cover or land use. Images contain visual and structural information that can be queried to obtain high-level information about specific image content or relational dependencies between the objects sensed. This paper explores the possibility of using questions formulated in natural language as a generic and accessible way to extract this type of information from remote sensing images, i.e. visual question answering. We introduce an automatic way to create a dataset using OpenStreetMap data and present some preliminary results. Our proposed approach is based on deep learning and is trained using our new dataset.
{"title":"Visual Question Answering From Remote Sensing Images","authors":"Sylvain Lobry, J. Murray, Diego Marcos, D. Tuia","doi":"10.1109/IGARSS.2019.8898891","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8898891","url":null,"abstract":"Remote sensing images carry wide amounts of information beyond land cover or land use. Images contain visual and structural information that can be queried to obtain high level information about specific image content or relational dependencies between the objects sensed. This paper explores the possibility to use questions formulated in natural language as a generic and accessible way to extract this type of information from remote sensing images, i.e. visual question answering. We introduce an automatic way to create a dataset using OpenStreetMap1 data and present some preliminary results. Our proposed approach is based on deep learning, and is trained using our new dataset.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"17 1","pages":"4951-4954"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74547671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Truth About Ground Truth: Label Noise in Human-Generated Reference Data
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8898003
R. Hänsch, O. Hellwich
Due to the increasing amount of remotely sensed data, methods for its automatic interpretation are becoming more and more important. The corresponding supervised learning techniques, however, strongly depend on the availability of training data, i.e. data where measurements and labels are provided simultaneously. The creation of reference data for large data sets is very challenging, and approaches addressing this task often introduce a significant amount of label noise. While other works have focused on the influence of label noise on the training process, this paper studies the impact on the evaluation and shows that the corresponding effects are even more adverse.
{"title":"The Truth About Ground Truth: Label Noise in Human-Generated Reference Data","authors":"R. Hänsch, O. Hellwich","doi":"10.1109/IGARSS.2019.8898003","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8898003","url":null,"abstract":"Due to the increasing amount of remotely sensed data, methods for its automatic interpretation become more and more important. Corresponding supervised learning techniques, however, strongly depend on the availability of training data, i.e. data where measurements and labels are provided simultaneously. The creation of reference data for large data sets is very challenging and approaches addressing this task often introduce a significant amount of label noise. While other works focused on the influence of label noise on the training process, this paper studies the impact on the evaluation and shows that the corresponding effects are even more adverse.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"101 1","pages":"5594-5597"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80848362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge-Convolution Point Net for Semantic Segmentation of Large-Scale Point Clouds
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8899303
J. Contreras, Joachim Denzler
In this paper, we propose a deep learning-based framework which can manage large-scale point clouds of outdoor scenes with high spatial resolution. For large and high-resolution outdoor scenes, point-wise classification approaches are often intractable. Analogous to Object-Based Image Analysis (OBIA), our approach segments the scene by grouping similar points together to generate meaningful objects. Our network then classifies segments instead of individual points, using an architecture inspired by PointNet which applies edge convolutions. The approach is trained using both visual and geometrical information. Experiments show its potential even for small training sets. Furthermore, we show competitive performance on a large-scale point cloud classification benchmark.
{"title":"Edge-Convolution Point Net for Semantic Segmentation of Large-Scale Point Clouds","authors":"J. Contreras, Joachim Denzler","doi":"10.1109/IGARSS.2019.8899303","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8899303","url":null,"abstract":"In this paper, we propose a deep learning-based framework which can manage large-scale point clouds of outdoor scenes with high spatial resolution. For large and high-resolution outdoor scenes, point-wise classification approaches are often an intractable problem. Analogous to Object-Based Image Analysis (OBIA), our approach segments the scene by grouping similar points together to generate meaningful objects. Later, our net classifies segments instead of individual points using an architecture inspired by PointNet, which applies Edge convolutions. This approach is trained using both visual and geometrical information. Experiments show the potential of this task even for small training sets. Furthermore, we can show competitive performance on a Large-scale Point Cloud Classification Benchmark.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"34 1","pages":"5236-5239"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79388234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Spectral and Spatial Features Based on Generative Adversarial Network for Hyperspectral Image Super-Resolution
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8900228
Ruituo Jiang, Xu Li, Ang Gao, Lixin Li, H. Meng, Shigang Yue, Lei Zhang
Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function which combines the pixel-wise loss and the adversarial loss is designed to guide the generator to recover images that approximate the original HSIs and have finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.
{"title":"Learning Spectral and Spatial Features Based on Generative Adversarial Network for Hyperspectral Image Super-Resolution","authors":"Ruituo Jiang, Xu Li, Ang Gao, Lixin Li, H. Meng, Shigang Yue, Lei Zhang","doi":"10.1109/IGARSS.2019.8900228","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8900228","url":null,"abstract":"Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSIs super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual network in generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function which combines the pixel-wise loss and adversarial loss together is designed to guide the generator to recover images approximating the original HSIs and with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to the state of the art methods like SRCNN and SRGAN for HSIs spatial SR.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"62 1","pages":"3161-3164"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88831381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Additive Noise on Polarimetric Radarsat-2 Data Covering Oil Slicks
Pub Date: 2019-11-14 | DOI: 10.1109/IGARSS.2019.8899787
M. M. Espeseth, S. Skrunes, C. Brekke, A. M. Johansson
We attempt to understand how a set of well-known polarimetric Synthetic Aperture Radar (SAR) features are impacted by the additive system noise for mineral oil and produced water slicks. For this, we use quad-polarimetric SAR scenes from Radarsat-2. Oil slicks at sea can be detected using SAR instruments, and the dual- (HH-VV) and quad-polarimetric modes can provide additional information about the characteristics of the oil. Therefore, the increase in polarization dimensionality may be beneficial in a potential clean-up situation. For example, characterization could aid in separating different types of oil slicks, such as the mineral oil and produced water studied here. Oil slick characterization using scattering properties can only be performed if the returned signal is well above the noise floor. To avoid misinterpretation, it is important to understand how the noise impacts the measured radar signal. Most of the features investigated in this study were to a large degree influenced by the additive noise. Further, a backscatter signal level of 10 dB above the noise floor is identified as necessary to support analysis of the scattering properties within the oil slicks without too much noise contamination of the signal. The mineral oil and produced water slicks showed similar polarimetric behavior, despite their chemical and physical differences at release.
{"title":"The Impact of Additive Noise on Polarimetric Radarsat-2 Data Covering Oil Slicks","authors":"M. M. Espeseth, S. Skrunes, C. Brekke, A. M. Johansson","doi":"10.1109/IGARSS.2019.8899787","DOIUrl":"https://doi.org/10.1109/IGARSS.2019.8899787","url":null,"abstract":"We attempt to understand how a set of well known polari-metric Synthetic Aperture Radar (SAR) features are impacted by the additive system noise for mineral oil and produced water slicks. For this, we use quad-polarimetric SAR scenes from Radarsat-2. Oil slicks at sea can be detected using SAR instruments, and the dual- (HH-VV) and quad-polarimetric modes can provide additional information about the characteristics of the oil. Therefore the increase in polarization dimensionality may be beneficial in a potential clean-up situation. For example, characterization could aid in separating different types of oil slicks, like mineral oil and produced water as studied here. Oil slick characterization using scattering properties can only be performed if the returned signal is well above the noise floor. To avoid misinterpretation it is important to understand how the noise impacts the measured radar signal. Most of the features investigated in this study were to a larger degree influenced by the additive noise. Further, a backscatter signal level of 10 dB above the noise floor is identified as necessary to support analysis of the scattering properties within the oil slicks without too much noise contamination of the signal. The mineral oils and produced water slicks showed similar polarimetric behavior, despite their chemical and physical differences at release.","PeriodicalId":13262,"journal":{"name":"IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium","volume":"1 1","pages":"5756-5759"},"PeriodicalIF":0.0,"publicationDate":"2019-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76807058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}