Christ Alain Nekuie Mouafo, Charles Antoine Basseka, Suzanne Ngo Boum Nkot, Constantin Mathieu Som Mbang, Cyrille Donald Njiteu Tchoukeu, Yannick Stephan Kengne, Paul Bertrand Tsopkeng, Jacques Etame
The aim of this study is to map and analyze the lineament network in the Edéa area, Cameroon, using remote sensing and gravimetric data and to determine its hydrogeological implications. Principal component analysis and directional filters applied to Landsat 7 ETM+ and Shuttle Radar Topography Mission imagery, respectively, were used to extract remote sensing lineaments. The rose diagram of these lineaments highlights four lineament families along the N–S, E–W, NE–SW, and NW–SE directions. Three major directions account for 74% of the lineaments (N0° to N10°, N20° to N30°, and N40° to N50°), and four minor directions account for the remaining 26% (N60° to N70°, N80° to N90°, N130° to N140°, and N150° to N160°). The N20° to N90° directions correlate with those of major structures of the Oubanguides Complex, such as the Sanaga Fault and the Central Cameroon Shear Zone. The N130° to N140° direction corresponds to the orientation of shear zones and blastomylonitic faults of the Nyong Complex. Superposition of these lineaments on the hydrographic network shows similarities between their directions, highlighting the strong impact of tectonics on the orientation of the hydrographic network. The presence of numerous lineaments indicates a strongly fractured subsoil, and their high density favors the circulation and accumulation of groundwater. Upward continuation and horizontal gradient maxima methods applied to Earth Gravitational Model 2008 data allowed the extraction of gravimetric lineaments with a dominant N–S orientation, which correlates with the general orientation of the South Atlantic opening. Superposition of the remote sensing and gravimetric lineaments highlights their parallelism, suggesting that the gravimetric structures are the downward extension of the surface structures mapped by remote sensing.
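As an illustration of the directional analysis described above, the sketch below bins lineament strike azimuths into 10° classes to build the rose-diagram distribution from which the major and minor direction families are read off. The azimuth values are hypothetical placeholders, not data from the study.

```python
import numpy as np

def direction_families(azimuths_deg, bin_width=10):
    """Bin lineament strike azimuths (0 to 180 deg) into rose-diagram classes.

    azimuths_deg: iterable of strike azimuths measured clockwise from north.
    Returns the bin edges and the percentage of lineaments in each class.
    """
    az = np.asarray(azimuths_deg, dtype=float) % 180.0  # strikes are axial (N10E == N190E)
    edges = np.arange(0, 180 + bin_width, bin_width)
    counts, _ = np.histogram(az, bins=edges)
    percent = 100.0 * counts / counts.sum()
    return edges, percent

# Hypothetical azimuths (degrees) standing in for the extracted lineaments
azimuths = [5, 8, 25, 27, 42, 44, 48, 63, 85, 132, 135, 155, 3, 22, 45]
edges, pct = direction_families(azimuths)
for lo, p in zip(edges[:-1], pct):
    if p > 0:
        print(f"N{int(lo)} to N{int(lo) + 10}: {p:.1f}% of lineaments")
```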
{"title":"Lineament mapping in the Edea area (Littoral, Cameroon) using remote sensing and gravimetric data: hydrogeological implications","authors":"Christ Alain Nekuie Mouafo, Charles Antoine Basseka, Suzanne Ngo Boum Nkot, Constantin Mathieu Som Mbang, Cyrille Donald Njiteu Tchoukeu, Yannick Stephan Kengne, Paul Bertrand Tsopkeng, Jacques Etame","doi":"10.1117/1.jrs.18.032402","DOIUrl":"https://doi.org/10.1117/1.jrs.18.032402","url":null,"abstract":"The aim of this study is to map and analyze the lineament network in the Edéa, Cameroon, area using remote sensing and gravimetric data to determine their hydrogeological implications. Principal component analysis and directional filters applied to Landsat7 ETM+ and Shuttle Radar Topography Mission imagery, respectively, were used to extract remote sensing lineaments. Rose diagram of these lineaments highlights four families of lineaments along the N–S, E–W, NE–SW, and NW–SE directions. There are three major directions accounting for 74% of lineaments, including N0° to N10°, N20° to N30°, and N40° to N50°; and four minor directions (with 26% of the lineaments), including N60° N70°, N80° to N90°, N130° to N140°, and N150° to N160°. N20° to N90° directions correlate with those of major structures of the Oubanguides Complex, such as the Sanaga Fault and Central Cameroon Shear Zone. N130° to N140° direction corresponds to orientation of Shear Zones and blastomylonitic faults of Nyong Complex. Superposition of these lineaments on hydrographic network shows similarities between their directions, thus highlighting strong impact of tectonics on orientation of hydrographic network. The presence of numerous lineaments highlights strongly fractured subsoil, and their high density favors the circulation and accumulation of groundwater. Upward continuation and horizontal gradient maxima methods applied to Earth Gravitational Model 2008 data allowed the extraction of gravimetric lineaments, with a major N–S orientation, which correlates with general orientation of South Atlantic opening. Superposition of remote sensing lineaments and gravimetric lineaments highlights their parallelism, admitting that gravimetric structures are an extension in depth of surface structures defined by remote sensing.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140932728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shenming Qu, Yongyong Lu, Can Cui, Jiale Duan, Yuan Xie
Extracting roads from complex remote sensing images is a crucial task for applications such as autonomous driving, path planning, and road navigation. However, conventional convolutional neural network-based road extraction methods mostly rely on square convolutions or dilated convolutions in the local spatial domain. In multi-directional continuous road segmentation, these approaches can lead to poor road connectivity and non-smooth boundaries. Additionally, road areas occluded by shadows, buildings, and vegetation cannot be accurately predicted, which also degrades the connectivity of road segmentation and the smoothness of boundaries. To address these issues, this work proposes a multi-directional spatial connectivity network (MDSC-Net) based on multi-directional strip convolutions. Specifically, we first design a multi-directional spatial pyramid module that uses multi-scale, multi-directional feature fusion to capture connectivity relationships between neighboring pixels, effectively distinguishing narrow roads and roads of different scales and improving the topological connectivity of the extracted roads. Second, we construct an edge residual connection module that continuously learns road boundaries and detailed information from shallow feature maps and integrates them into deep feature maps, which is crucial for the smoothness of road boundaries. Additionally, we devise a high-low threshold connectivity algorithm to recover road pixels obscured by shadows, buildings, and vegetation, further refining textures and road details. Extensive experiments on two distinct public benchmarks, the DeepGlobe and Ottawa datasets, demonstrate that MDSC-Net outperforms state-of-the-art methods in road connectivity and boundary smoothness. The source code will be made publicly available at https://github.com/LYY199873/MDSC-Net.
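The strip-convolution idea underlying MDSC-Net can be illustrated with a minimal PyTorch block that fuses horizontal and vertical strip convolutions with a local 3×3 branch. This is only a sketch under assumed kernel length and channel counts; the authors' module additionally covers diagonal directions and a pyramid of scales.

```python
import torch
import torch.nn as nn

class StripConvBlock(nn.Module):
    """Fuse horizontal and vertical strip convolutions with a 3x3 local branch."""

    def __init__(self, channels, k=9):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = torch.cat([self.horizontal(x), self.vertical(x), self.local(x)], dim=1)
        return self.fuse(feats) + x  # residual path helps preserve road continuity cues

x = torch.randn(1, 64, 128, 128)    # dummy feature map
print(StripConvBlock(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```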
{"title":"MDSC-Net: multi-directional spatial connectivity for road extraction in remote sensing images","authors":"Shenming Qu, Yongyong Lu, Can Cui, Jiale Duan, Yuan Xie","doi":"10.1117/1.jrs.18.024504","DOIUrl":"https://doi.org/10.1117/1.jrs.18.024504","url":null,"abstract":"Extracting roads from complex remote sensing images is a crucial task for applications, such as autonomous driving, path planning, and road navigation. However, conventional convolutional neural network-based road extraction methods mostly rely on square convolutions or dilated convolutions in the local spatial domain. In multi-directional continuous road segmentation, these approaches can lead to poor road connectivity and non-smooth boundaries. Additionally, road areas occluded by shadows, buildings, and vegetation cannot be accurately predicted, which can also affect the connectivity of road segmentation and the smoothness of boundaries. To address these issues, this work proposes a multi-directional spatial connectivity network (MDSC-Net) based on multi-directional strip convolutions. Specifically, we first design a multi-directional spatial pyramid module that utilizes a multi-scale and multi-directional feature fusion to capture the connectivity relationships between neighborhood pixels, effectively distinguishing narrow and scale different roads, and improving the topological connectivity of the roads. Second, we construct an edge residual connection module to continuously learn and integrate the road boundaries and detailed information of shallow feature maps into deep feature maps, which is crucial for the smoothness of road boundaries. Additionally, we devise a high-low threshold connectivity algorithm to extract road pixels obscured by shadows, buildings, and vegetation, further refining textures and road details. Extensive experiments on two distinct public benchmarks, DeepGlobe and Ottawa datasets, demonstrate that MDSC-Net outperforms state-of-the-art methods in extracting road connectivity and boundary smoothness. The source code will be made publicly available at https://github/LYY199873/MDSC-Net.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140885493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Early disease detection is required, considering the impacts of diseases on crop yield; however, current methods involve labor-intensive data collection. Thus, unsupervised anomaly detection in time series imagery was proposed, requiring high-resolution unmanned aerial vehicle (UAV) imagery and algorithms capable of identifying unknown anomalies amidst complex data patterns, to cope with within-season crop monitoring and background challenges. The dataset used in this study was acquired by a Micasense Altum sensor on a DJI Matrice 210 UAV at 4 mm resolution in Göttingen, Germany. The proposed methodology includes (1) date selection, to find the dates sensitive to sugar beet changes; (2) vegetation index (VI) selection, to find the index sensitive to sugar beet and its temporal patterns by visual inspection; (3) sugar beet extraction using thresholding and morphological operators; and (4) an ensemble of bottom-up, kernel, and quadratic discriminant analysis methods for unsupervised time series anomaly detection. The study highlighted the importance of the wide-dynamic-range VI and morphological filtering with time series trimming for accurate disease detection while reducing background errors, achieving a kappa of 76.57%, comparable to deep learning model accuracies and indicating the potential of this approach. In addition, image acquisition could begin 81 days after sowing for cost- and time-efficient disease detection.
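A minimal sketch of steps (2) and (3), assuming the wide-dynamic-range vegetation index (WDRVI) formulation with an illustrative weighting coefficient and threshold (not necessarily the values used in the study), followed by morphological cleanup of the sugar beet mask:

```python
import numpy as np
from scipy import ndimage

def wdrvi(nir, red, alpha=0.2):
    """Wide-dynamic-range vegetation index: (a*NIR - red) / (a*NIR + red)."""
    return (alpha * nir - red) / (alpha * nir + red + 1e-9)

def sugar_beet_mask(nir, red, threshold=0.0, min_size=9):
    """Threshold the WDRVI and clean the mask with morphological operators."""
    mask = wdrvi(nir, red) > threshold                       # keep vegetated pixels
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)                          # drop residual specks
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)

# Dummy reflectance bands standing in for the Micasense Altum NIR and red channels
nir = np.random.rand(100, 100)
red = np.random.rand(100, 100) * 0.3
print(sugar_beet_mask(nir, red).sum(), "pixels retained as sugar beet canopy")
```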
{"title":"Cercospora leaf spot detection in sugar beets using high spatio-temporal unmanned aerial vehicle imagery and unsupervised anomaly detection methods","authors":"Helia Noroozi, Reza Shah-Hosseini","doi":"10.1117/1.jrs.18.024506","DOIUrl":"https://doi.org/10.1117/1.jrs.18.024506","url":null,"abstract":"Early disease detection is required, considering the impacts of diseases on crop yield. However, current methods involve labor-intensive data collection. Thus, unsupervised anomaly detection in time series imagery was proposed, requiring high-resolution unmanned aerial vehicle (UAV) imagery and sophisticated algorithms to identify unknown anomalies amidst complex data patterns to cope with within season crop monitoring and background challenges. The dataset used in this study was acquired by a Micasense Altum sensor on a DJI Matrice 210 UAV with a 4 mm resolution in Gottingen, Germany. The proposed methodology includes (1) date selection for finding the date sensitive to sugar beet changes, (2) vegetation index (VI) selection for finding the one sensitive to sugar beet and its temporal patterns by visual inspection, (3) sugar beet extraction using thresholding and morphological operator, and (4) an ensemble of bottom-up, Kernel, and quadratic discriminate analysis methods for unsupervised time series anomaly detection. The study highlighted the importance of the wide-dynamic-range VI and morphological filtering with time series trimming for accurate disease detection while reducing background errors, achieving a kappa of 76.57%, comparable to deep learning model accuracies, indicating the potential of this approach. Also, 81 days after sowing, image acquisition could begin for cost and time efficient disease detection.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140933053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inaccurate solar vector orientation knowledge can considerably deteriorate calibration results for the Visible Infrared Imaging Radiometer Suite (VIIRS). We develop a methodology that uses the Suomi National Polar-orbiting Partnership (SNPP) VIIRS solar diffuser stability monitor (SDSM) sun view data to assess the knowledge accuracy of the solar angles stored in the onboard calibrator intermediate product (OBCIP) files used for on-orbit radiometric calibration. We applied an initial version of this methodology in 2013 and found that the solar declination angle had a relative error that varied between ∼0 deg and 0.17 deg. The relative error is referenced to the error at the time of the SNPP satellite yaw maneuvers performed on February 15 to 16, 2012. Our mission-long results from the current methodology show that the solar vector angular knowledge error persisted from early in the mission until mission day 1129 (November 30, 2014). The error undulates yearly, with the largest error in the solar declination angle increasing from ∼0.17 deg in the first year to 0.19 deg in the third year, consistent with the root-cause understanding of the solar vector error reached in early 2014. With the reprocessed OBCIP files, we find that the solar vector declination and azimuth angular knowledge errors have near-zero biases. The detection limit of this methodology strongly depends on how finely the solar angle is sampled by the SDSM detectors. With the SDSM sun view data collected when the SDSM operated once per day, this methodology yields detection standard deviations of 0.013 deg and 0.024 deg for the solar declination and azimuth angles, respectively. With a 3-sigma criterion, at the detection limits, the solar orientation errors result in a calibration error of 0.088%. This method can be applied to other Earth-orbiting sensors.
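To get a feel for how a sub-0.1 deg angular knowledge error maps to a sub-0.1% radiometric error, the sketch below assumes the calibration term simply scales with the cosine of the solar incidence angle on the diffuser, which is a deliberate simplification rather than the authors' full solar diffuser calibration model; the 30 deg incidence angle is likewise an assumed value chosen only for illustration.

```python
import numpy as np

def relative_cosine_error(theta_deg, delta_deg):
    """Relative error in cos(theta) caused by an angle knowledge error delta.

    Assumes (as a simplification, not the authors' model) that the calibration
    term scales with cos(theta), so d(cos)/cos ~ tan(theta) * delta for small delta.
    """
    theta, delta = np.radians(theta_deg), np.radians(delta_deg)
    exact = abs(np.cos(theta + delta) - np.cos(theta)) / np.cos(theta)
    approx = np.tan(theta) * abs(delta)
    return exact, approx

# Illustrative case: 30 deg incidence, 3-sigma of the 0.024 deg azimuth standard deviation
exact, approx = relative_cosine_error(30.0, 3 * 0.024)
print(f"exact: {100 * exact:.3f}%   small-angle approximation: {100 * approx:.3f}%")
```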
{"title":"SNPP VIIRS solar vector orientation knowledge error revealed by solar diffuser stability monitor sun views","authors":"Ning Lei, Xiaoxiong Xiong, Sherry Li, Kevin Twedt","doi":"10.1117/1.jrs.18.027502","DOIUrl":"https://doi.org/10.1117/1.jrs.18.027502","url":null,"abstract":"Inaccurate solar vector orientation knowledge can considerably deteriorate calibration results for the Visible Infrared Imaging Radiometer Suite (VIIRS). We develop a methodology to use the Suomi National Polar-orbiting Partnership (SNPP) VIIRS solar diffuser stability monitor (SDSM) sun view data to assess the knowledge accuracy of the solar angles that reside in the onboard calibrator intermediate product (OBCIP) files used for on-orbit radiometric calibration. We applied an initial version of this methodology in 2013 and found that the solar declination angle had a relative error that varied between ∼0 deg to 0.17 deg. The relative error is referenced to the error at the SNPP satellite yaw maneuver time that occurred on February 15 to 16, 2012. Our mission long results from the current methodology show that the solar vector angular knowledge error occurred from the early mission until mission day 1129 (November 30, 2014). The error undulates yearly with the largest error in the solar declination angle increasing from ∼0.17 deg in the first year to 0.19 deg in the third year, agreeing with the solar vector error root cause understanding realized in early 2014. With the reprocessed OBCIP files, we find the solar vector declination and azimuth angular knowledge errors have near zero biases. The detection limit of this methodology strongly depends on how finely the solar angle is sampled by the SDSM detectors. With the SDSM sun view data collected when the SDSM operated once per day, this methodology yields detection standard deviations of 0.013 deg and 0.024 deg for the solar declination and azimuth angles. With a 3-sigma criterion, at the detection limits, the solar orientation errors result in a calibration error of 0.088%. This method can be applied to other Earth-orbiting sensors.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the last several decades, large wildfires have become increasingly common across the United States, causing a disproportionate impact on forest health and function, human well-being, and the economy. Here, we examine the severity of large wildfires across the Contiguous United States over the past decade (2011 to 2020) using a wide array of meteorological, land cover, and topographical features in a deep neural network model. A total of 4538 wildfire incidents were used in the analysis, covering 87,305 square miles of burned area. We observed the highest numbers of large wildfires in California, Texas, and Idaho, with lightning causing 43% of these incidents. Importantly, the results indicate that the severity of wildfire occurrences is highly correlated with the weather, land cover, and elevation of the study area, as indicated by their SHapley Additive exPlanations (SHAP) values. Overall, different variants of data-driven models and their results could provide useful guidance in managing landscapes for large wildfires under changing climate and disturbance regimes.
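A compact sketch of the explanation step: train a small neural network on a synthetic stand-in for the meteorological, land cover, and topographic feature table and attribute its predictions with SHAP's model-agnostic KernelExplainer. The features, labels, and model size are illustrative assumptions, not the study's data or architecture.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical predictors (e.g., temperature, wind, fuel moisture, NDVI, elevation, slope)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# KernelExplainer is model-agnostic; a small background sample keeps it tractable
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], X_train[:50])
shap_values = explainer.shap_values(X_test[:20])
print("mean |SHAP| per feature:", np.round(np.abs(shap_values).mean(axis=0), 3))
```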
{"title":"Predicting large wildfires in the Contiguous United States using deep neural networks","authors":"Sambandh Dhal, Shubham Jain, Krishna Chaitanya Gadepally, Prathik Vijaykumar, Ulisses Braga-Neto, Bhavesh Hariom Sharma, Bharat Sharma Acharya, Kevin Nowka, Stavros Kalafatis","doi":"10.1117/1.jrs.18.028501","DOIUrl":"https://doi.org/10.1117/1.jrs.18.028501","url":null,"abstract":"Over the last several decades, large wildfires have become increasingly common across the United States causing a disproportionate impact on forest health and function, human well-being, and the economy. Here, we examine the severity of large wildfires across the Contiguous United States over the past decade (2011 to 2020) using a wide array of meteorological, land cover, and topographical features in a deep neural network model. A total of 4538 wildfire incidents were used in the analysis covering 87,305 square miles of burned area. We observed the highest number of large wildfires in California, Texas, and Idaho, with lightning causing 43% of these incidents. Importantly, results indicate that the severity of wildfire occurrences is highly correlated with the weather, land cover, and elevation of the study area as indicated from their SHapley Additive exPlanations values. Overall, different variants of data-driven models and their results could provide useful guidance in managing landscapes for large wildfires under changing climate and disturbance regimes.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial and temporal land-use patterns in the Songhua River Basin (SRB) over the past 20 years were analyzed, considering the influence of natural geographic, socioeconomic, and anthropogenic factors. Using spatial analysis and geodetector modeling, we assessed various indicators to comprehensively analyze land-use changes in the SRB over a long time series (2001 to 2021). Our goal was to determine the extent to which each factor influences land-use change and the mechanisms by which the factors interact. We found that natural geographic and anthropogenic factors, particularly elevation and population density, had a greater influence on land-use changes than climatic and socioeconomic factors. Despite a positive trend in land use indicated by the composite index, the SRB is losing undeveloped land resources annually. We also identified that interactions between factors had varying effects, with the superposition of multiple factors potentially exacerbating conflicts between different land-use types. These findings provide valuable insights for strategic planning, policy formulation, and optimization of land resources in the Songhua River Basin.
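The factor detector at the core of geodetector modeling reduces to the q-statistic, q = 1 - (sum over strata h of N_h * var_h) / (N * var); the sketch below computes it for a hypothetical elevation stratification of a land-use change indicator.

```python
import numpy as np
import pandas as pd

def geodetector_q(values, strata):
    """Geodetector factor-detector q-statistic (0 = no explanatory power, 1 = full)."""
    df = pd.DataFrame({"y": values, "h": strata})
    n, total_var = len(df), df["y"].var(ddof=0)
    within = sum(len(g) * g["y"].var(ddof=0) for _, g in df.groupby("h"))
    return 1.0 - within / (n * total_var)

# Hypothetical example: land-use change intensity stratified by elevation class
rng = np.random.default_rng(1)
elev_class = rng.integers(0, 4, size=300)
change = elev_class * 0.8 + rng.normal(scale=1.0, size=300)
print(f"q(elevation) = {geodetector_q(change, elev_class):.2f}")
```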
{"title":"Determinants of land-use and cover change: role of natural resources and human activities in spatial-temporal evolution","authors":"Wenqing Wu, Yunlong Zhao, Jianwen Xue, Xiangzhou Dou, Jiale Xu, Gaopeng Wu, Qiang Zhao","doi":"10.1117/1.jrs.18.026501","DOIUrl":"https://doi.org/10.1117/1.jrs.18.026501","url":null,"abstract":"Spatial and temporal land-use patterns in the Songhua River Basin (SRB) over the past 20 years were analyzed; the influence of natural geographic, socioeconomic, and anthropogenic factors was considered. Using spatial analysis and geodetector modeling, we assessed various indicators to comprehensively analyze land-use changes in the SRB in a long time series (2001 to 2021). Our goal was to determine the extent to which each factor influences land-use change and the mechanisms of interaction. We found that natural geographic factors and anthropogenic factors, particularly elevation and population density, had a greater influence on land-use changes than climatic and socio-economic factors. Despite a positive trend in land use indicated by the composite index, the SRB is experiencing a decrease in undeveloped land resources annually. We also identified that interactions between factors had varying effects, with the superposition of multiple factors potentially exacerbating conflicts between different land-use types. These findings provide valuable insights for strategic planning, policy formulation, and optimization of land resources in the Songhua River Basin.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We provide an innovative methodology for detecting small objects in remote sensing imagery. Our method addresses the missed and false detections caused by the limited pixel representation of small objects by integrating super-resolution technology with dynamic feature fusion to enhance detection accuracy. We introduce a cross-stage local feature fusion module to improve feature extraction. In addition, we propose a super-resolution network with soft thresholding to refine small object features, improving the resolution of the feature maps while reducing redundancy. Furthermore, we embed a dynamic fusion module based on feature space relationships into a dual-branch network to strengthen the role of the super-resolution branch. Experimental validation on the DIOR and NWPU VHR-10 datasets shows mAP improvements to 73.9% and 93.7%, respectively, with FLOPs of 24.89G and 22.33G. Our method outperforms existing approaches in accuracy and parameter count, effectively addressing the challenges of small object detection in remote sensing imagery.
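The soft-thresholding step mentioned above can be illustrated with a small PyTorch module in the style of deep residual shrinkage networks, where a channel-attention branch learns the threshold; this is a sketch of the general idea, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SoftThreshold(nn.Module):
    """Channel-wise soft thresholding: sign(x) * relu(|x| - tau), tau learned per channel."""

    def __init__(self, channels):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        mean_abs = self.gap(torch.abs(x)).flatten(1)            # (B, C) average magnitude
        tau = (self.fc(mean_abs) * mean_abs).view(x.size(0), -1, 1, 1)
        return torch.sign(x) * torch.relu(torch.abs(x) - tau)   # shrink weak responses

x = torch.randn(2, 32, 64, 64)
print(SoftThreshold(32)(x).shape)  # torch.Size([2, 32, 64, 64])
```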
{"title":"Small object detection model for remote sensing images combining super-resolution assisted reasoning and dynamic feature fusion","authors":"Jun Yang, Tongyang Wang","doi":"10.1117/1.jrs.18.028503","DOIUrl":"https://doi.org/10.1117/1.jrs.18.028503","url":null,"abstract":"We provide an innovative methodology for detecting small objects in remote sensing imagery. Our method addresses challenges related to missed and false detections caused by the limited pixel representation of small objects. It integrates super-resolution technology with dynamic feature fusion to enhance detection accuracy. We introduce a cross-stage local feature fusion module to improve feature extraction. In addition, we propose a super-resolution network with soft thresholding to refine small object features, resulting in improving resolution of feature maps while reducing redundancy. Furthermore, we embed a dynamic fusion module based on feature space relationships into a dual-branch network to strengthen the role of the super-resolution branch. Experimental validation on DIOR and NWPU VHR-10 datasets shows mAP improvements to 73.9% and 93.7%, respectively, with FLOPs of 24.89G and 22.33G. Our method outperforms existing approaches regarding accuracy and number of parameters, effectively addressing challenges in small object detection in remote sensing imagery.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate the reliability of satellite river width (SRW) measurements for estimating river discharge and their sensitivity to various hydro-geomorphological features. The study encompasses SRW extents at 141 in-situ hydrological observation stations across seven tropical basins in India, with mean annual discharges ranging from 2351 m³/s down to less than 1 m³/s. Integrating optical (Sentinel-2, Landsat) and synthetic-aperture radar (SAR; Sentinel-1) data in Google Earth Engine (GEE), 63,885 images are processed to generate a dense time series of the SRW. Results demonstrate a good correlation (>0.50) between the SRW and in-situ discharge at 61 stations, primarily in the Godavari and Mahanadi basins. Furthermore, SRW-based rating curves exhibit reliable predictive capability at 44 stations, highlighting the potential to develop SRW rating curves in sparsely gauged basins. Investigation of the influence of hydro-geomorphological features on SRW-based discharge estimation revealed optimal conditions in river reaches at lower elevations that show substantial temporal variation in discharge, associated variation in river width, and a history of maximum water spread. Consequently, the Surface Water and Ocean Topography satellite's river networks in the region are classified based on these findings, with 3567 out of 6132 river reaches identified as suitable for reliable SRW-based discharge estimation.
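A rating curve of the kind described can be sketched as a power-law fit between satellite width and gauged discharge; the functional form Q = a * (W - w0)^b, the synthetic width-discharge pairs, and the correlation check below are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr

def rating_curve(width, a, b, w0):
    """Power-law width-discharge rating curve: Q = a * (W - w0)^b."""
    return a * np.clip(width - w0, 1e-6, None) ** b

# Hypothetical paired samples of satellite river width (m) and gauged discharge (m^3/s)
rng = np.random.default_rng(2)
width = rng.uniform(80, 400, size=60)
discharge = 0.05 * (width - 60) ** 1.8 * rng.lognormal(sigma=0.15, size=60)

params, _ = curve_fit(rating_curve, width, discharge, p0=[0.1, 1.5, 50.0], maxfev=10000)
rho, _ = spearmanr(width, discharge)
print("fitted (a, b, w0):", np.round(params, 3), " width-discharge correlation:", round(rho, 2))
```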
{"title":"Examining the impact of hydro-geomorphological features in satellite river width-based discharge estimations","authors":"M. S. Adarsh, C. T. Dhanya, Shard Chander","doi":"10.1117/1.jrs.18.024503","DOIUrl":"https://doi.org/10.1117/1.jrs.18.024503","url":null,"abstract":"We investigate the reliability of satellite river width (SRW) measurements to estimate the river discharge and its sensitivity to various hydro-geomorphological features. The study encompasses SRW extents at 141 in-situ hydrological observation stations, across seven tropical basins in India, with a mean annual discharge ranging from 2351 m3/s to less than 1 m3/s. Integrating optical (Sentinel-2, Landsat) and synthetic-aperture radar (SAR; Sentinel-1) data in the Google Earth Engine (GEE), 63,885 images are processed in the GEE to generate a dense time series of the SRW. Results demonstrate a good correlation (>0.50) between the SRW and in-situ discharge at 61 stations, primarily in the Godavari and Mahanadi basins. Furthermore, SRW-based rating curves exhibit reliable predictive capabilities at 44 stations, highlighting the potential to develop SRW rating curves in sparsely gauged basins. Investigations on the possible impact of different hydro-geomorphological features on the performance of the SRW to estimate the river discharge revealed optimal conditions in river reaches at lower elevations with substantial temporal variations in the discharge and associated variation in the river width along with a history of maximum water spread. Consequently, the Surface Water and Ocean Topography satellite’s river networks in the region are classified based on these findings, with 3567 out of 6132 river reaches identified as suitable for reliable SRW-based discharge estimation.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140841638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyperspectral unmixing (HU) is a crucial step in hyperspectral image (HSI) processing. However, the accuracy of unmixing methods is limited by endmember variability and by the complexity of the HSI structure found in natural scenes. Endmember variability refers to the variations or differences exhibited by endmembers at different locations or under varying conditions within a hyperspectral remote sensing scene. Therefore, to improve unmixing accuracy, it is crucial to fully leverage the spectral, geometric, and spatial information within HSIs and to comprehensively explore the spectral characteristics of endmembers. We present a cascaded dual-constrained transformer autoencoder (AE) for HU that accounts for endmember variability and spectral geometry. The model uses a transformer AE network to extract global spatial features in the HSI and incorporates a minimum distance constraint to account for the geometric information of the HSI. Because the endmembers of a given material are similar in spectral shape, with variability expressed primarily as overall intensity fluctuations, an abundance-weighted constraint on the endmember spectral angle distance is proposed. During training, the architecture uses two cascaded networks to preserve detailed information in the HSI. We evaluate the proposed model on three real datasets. The experimental results indicate that the proposed method achieves superior performance in abundance estimation and endmember extraction. Furthermore, the effectiveness of the two constraints was verified through ablation experiments.
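The abundance-weighted spectral angle constraint can be sketched as follows: each estimated endmember is compared with a reference spectrum via the spectral angle distance, and the angle is weighted by that endmember's mean abundance so that dominant materials are constrained more strongly. The weighting scheme and reference endmembers are assumptions for illustration, not the paper's exact loss.

```python
import torch

def spectral_angle(a, b, eps=1e-8):
    """Spectral angle distance (radians) between spectra along the last dimension."""
    cos = (a * b).sum(-1) / (a.norm(dim=-1) * b.norm(dim=-1) + eps)
    return torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))

def abundance_weighted_sad(endmembers, reference, abundances):
    """Weight each endmember's SAD to its reference by the endmember's mean abundance.

    endmembers, reference: (R, bands); abundances: (pixels, R).
    """
    weights = abundances.mean(dim=0)  # (R,) mean abundance per material
    return (weights * spectral_angle(endmembers, reference)).sum()

E = torch.rand(4, 198)                         # estimated endmembers (4 materials, 198 bands)
E_ref = torch.rand(4, 198)                     # e.g., VCA-initialized reference endmembers
A = torch.softmax(torch.rand(1000, 4), dim=1)  # abundances satisfying the sum-to-one constraint
print(float(abundance_weighted_sad(E, E_ref, A)))
```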
{"title":"CDCTA: cascaded dual-constrained transformer autoencoder for hyperspectral unmixing with endmember variability and spectral geometry","authors":"Yuanhui Yang, Ying Wang, Tianxu Liu","doi":"10.1117/1.jrs.18.026502","DOIUrl":"https://doi.org/10.1117/1.jrs.18.026502","url":null,"abstract":"Hyperspectral unmixing (HU) in hyperspectral image (HSI) processing is a crucial step. However, the accuracy of unmixing methods is limited by the variability in endmember and the complexity of the HSI structure found in natural scenes. Endmember variability refers to the variations or differences exhibited by endmembers in different locations or under varying conditions within a hyperspectral remote sensing scene. Therefore, to enhance the accuracy of unmixing results, it is crucial to fully leverage spectral, geometric, and spatial information within HSIs, comprehensively exploring the spectral characteristics of endmembers. We present a cascaded dual-constrained transformer autoencoder (AE) for HU with endmember variability and spectral geometry. The model utilizes a transformer AE network to extract the global spatial features in the HSI. Additionally, it incorporates the minimum distance constraint to account for the geometric information of the HSI. Given the similarity in shape exhibited by endmembers of each individual material, with the primary endmember variability being expressed through overall intensity fluctuations, an abundance-weighted constraint method for endmember spectral angle distance is proposed. During training, the architecture utilizes two cascaded networks to preserve the detailed information in the HSI. We evaluate the proposed model using three real datasets. The experimental results indicate that the proposed method achieves superior performance in abundance estimation and endmember extraction. Furthermore, the effectiveness of the two constraint methods was verified through ablation experiments.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140614297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delanyo Kwame Bensah Kulevome, Hong Wang, Zian Zhao, Xuegang Wang
Radar receivers are vital components in modern radar systems, and their reliable operation is crucial for accurate target detection and tracking. However, degrading receiver components can lead to reduced gain, increased noise levels, and a decreased probability of detection, affecting overall radar performance. We present an efficient real-time prognostic framework for a radar receiver. The effect of the performance degradation of critical devices on the radar receiver is analyzed, and a prognostic framework is developed based on the relationship between device health and receiver performance. Subsequently, an improved prognostic model that integrates a Weibull distribution with a long short-term memory network is developed and trained to accurately estimate the remaining useful life (RUL) of the receiver. Integrating survival analysis and deep learning techniques offers a robust solution for accurate RUL estimation, which can significantly enhance maintenance strategies. The proposed framework facilitates the transition from traditional reactive maintenance practices to a predictive maintenance approach, thereby reducing downtime and improving the overall availability of radar receivers.
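One way to integrate a Weibull distribution with a long short-term memory network is the WTTE-RNN-style sketch below, in which the LSTM maps receiver health features to Weibull scale and shape parameters and training minimizes the Weibull negative log-likelihood of the observed time to failure; the architecture, feature count, and loss details are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class WeibullLSTM(nn.Module):
    """LSTM that maps degradation features to Weibull scale (alpha) and shape (beta)."""

    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                                   # x: (batch, time, features)
        h, _ = self.lstm(x)
        out = self.head(h)
        alpha = torch.exp(out[..., 0])                      # scale > 0
        beta = nn.functional.softplus(out[..., 1]) + 1e-3   # shape > 0
        return alpha, beta

def weibull_nll(alpha, beta, tte, observed):
    """Negative log-likelihood of time-to-event data under Weibull(alpha, beta)."""
    z = (tte + 1e-6) / alpha
    log_pdf = torch.log(beta / alpha) + (beta - 1) * torch.log(z) - z ** beta
    log_surv = -z ** beta                                   # censored samples contribute survival only
    return -(observed * log_pdf + (1 - observed) * log_surv).mean()

model = WeibullLSTM(n_features=5)
x = torch.randn(8, 50, 5)          # 8 receivers, 50 time steps, 5 health features
tte = torch.rand(8, 50) * 100      # remaining time to failure at each step
alpha, beta = model(x)
print(weibull_nll(alpha, beta, tte, observed=torch.ones_like(tte)))
```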
{"title":"Systematic prognostics framework development approach for a radar receiver","authors":"Delanyo Kwame Bensah Kulevome, Hong Wang, Zian Zhao, Xuegang Wang","doi":"10.1117/1.jrs.18.027501","DOIUrl":"https://doi.org/10.1117/1.jrs.18.027501","url":null,"abstract":"Radar receivers are vital components in modern radar systems, and their reliable operation is crucial for accurate target detection and tracking. However, degrading receiver components can lead to reduced gain, increased noise levels, and decreased probability of detection affecting the overall radar performance. We present an efficient real-time prognostic framework for a radar receiver. The effect of the performance degradation of critical devices on the radar receiver is analyzed. A prognostic framework is developed based on the relationship between device health and receiver performance. Subsequently, an improved prognostic model based on the integration of Weibull distribution and long short-term memory network is developed and trained to accurately estimate the remaining useful life (RUL) of the receiver. Integrating survival analysis and deep learning techniques offers a robust solution for accurate RUL estimation, which can significantly enhance maintenance strategies. The proposed framework facilitates transitioning from traditional reactive maintenance practices to a predictive maintenance approach, thereby reducing downtime and improving the overall availability of radar receivers.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}