Title: Mitigating terrain shadows in very high-resolution satellite imagery for accurate evergreen conifer detection using bi-temporal image fusion
Authors: Xiao Zhu, Tiejun Wang, Andrew K. Skidmore, Stephen J. Lee, Isla Duporge
Journal: International Journal of Applied Earth Observation and Geoinformation (ITC Journal), Volume 134, Article 104244
DOI: 10.1016/j.jag.2024.104244
Publication date: 2024-11-01
URL: https://www.sciencedirect.com/science/article/pii/S1569843224006009
Citations: 0
Abstract
Very high-resolution (VHR) optical satellite imagery offers significant potential for detailed land cover mapping. However, terrain shadows, which appear dark and lack texture and detail, are especially acute at low solar elevations. These shadows hinder the creation of spatially complete and accurate land cover maps, particularly in rugged mountainous environments. While many methods have been proposed to mitigate terrain shadows in remote sensing, they either perform insufficient shadow reduction or rely on high-resolution digital elevation models which are often unavailable for VHR image shadow mitigation. In this paper, we propose a bi-temporal image fusion approach to mitigate terrain shadows in VHR satellite imagery. Our approach fuses a WorldView-2 multispectral image, which contains significant terrain shadows, with a corresponding geometrically registered WorldView-1 panchromatic image, which has minimal shadows. This fusion is applied to improve the mapping of evergreen conifers in temperate mixed mountain forests. To evaluate the effectiveness of our approach, we first improve an existing shadow detection method by Silva et al. (2018) to more accurately detect shadows in mountainous, forested landscapes. Next, we propose a quantitative algorithm that differentiates dark and light terrain shadows in VHR satellite imagery based on object visibility in shadowed areas. Finally, we apply a state-of-the-art 3D U-Net deep learning method to detect evergreen conifers. Our study shows that the proposed approach significantly reduces terrain shadows and enhances the detection of evergreen conifers in shaded areas. This is the first time a bi-temporal image fusion approach has been used to mitigate terrain shadow effects for land cover mapping at a very high spatial resolution. This approach can also be applied to other VHR satellite sensors. However, careful image co-registration will be necessary when applying this technique to multi-sensor systems beyond the WorldView constellation, such as Pléiades or SkySat.
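
The abstract describes a mask-guided fusion of a shadow-affected WorldView-2 multispectral image with a largely shadow-free, co-registered WorldView-1 panchromatic image. The sketch below is a minimal, hypothetical Python illustration of that general idea, not the authors' actual algorithm: it assumes pre-registered, resampled rasters, uses a crude brightness threshold as a stand-in for the improved Silva et al. (2018) shadow detector, and applies an illustrative blending weight.

```python
# Illustrative sketch only; the paper's fusion and shadow-detection algorithms
# are not detailed in this abstract. Assumes both images are co-registered and
# resampled to the same grid, with reflectance values scaled to [0, 1].
import numpy as np


def detect_shadow_mask(ms_image, intensity_threshold=0.25):
    """Flag likely terrain-shadow pixels with a simple brightness threshold.

    A stand-in for the improved Silva et al. (2018) detector; the threshold
    value is an assumption for illustration.
    """
    # Mean over spectral bands (bands-last array) as a crude brightness proxy.
    brightness = ms_image.mean(axis=-1)
    return brightness < intensity_threshold


def fuse_bitemporal(ms_shadowed, pan_shadow_free, shadow_mask, weight=0.7):
    """Blend shadow-free panchromatic information into shadowed multispectral pixels.

    `weight` controls how strongly the panchromatic signal replaces the shadowed
    multispectral values; the value here is illustrative only.
    """
    fused = ms_shadowed.astype(np.float32)
    # Broadcast the single-band panchromatic image across the spectral bands.
    pan = pan_shadow_free[..., np.newaxis].astype(np.float32)
    fused[shadow_mask] = (1 - weight) * fused[shadow_mask] + weight * pan[shadow_mask]
    return fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((64, 64, 8)).astype(np.float32) * 0.5  # WorldView-2-like 8-band cube
    pan = rng.random((64, 64)).astype(np.float32)           # WorldView-1-like panchromatic band
    mask = detect_shadow_mask(ms)
    fused = fuse_bitemporal(ms, pan, mask)
    print(fused.shape, float(mask.mean()))
```

A real implementation would also radiometrically normalize the two acquisitions and account for the resolution difference between the WorldView-1 panchromatic and WorldView-2 multispectral bands before blending, as the abstract's note on careful image co-registration suggests.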
Journal Introduction:
The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes such as data capture, databasing, visualization, interpretation, data quality, and spatial uncertainty.