
Latest articles: International journal of applied earth observation and geoinformation : ITC journal

The integrated application of big data and geospatial analysis in maritime transportation safety management: A comprehensive review
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-03-03 DOI: 10.1016/j.jag.2025.104444
Xiao Zhou , Zhou Huang , Tian Xia , Xinmin Zhang , Zhixin Duan , Jie Wu , Guoqing Zhou
Maritime transportation plays a pivotal role in global trade, making maritime transportation safety a longstanding priority within the maritime industry. With the growing emphasis on big data and geospatial analysis in maritime safety management, this study presents a comprehensive review of 425 academic publications on the topic from 2004 to 2023. First, publication trends, influential journals, and leading institutions in the field are revealed. Then, commonly used maritime big data and geospatial analysis methods are analyzed. Subsequently, based on a thorough and systematic content analysis, the research is categorized into five clusters: spatiotemporal analysis of marine accidents, navigation risk assessment, location of emergency facilities, allocation of emergency resources, and emergency response capability assessment. Finally, four future research directions are proposed to advance maritime transportation safety research.
Citations: 0
Irrigation uniformity assessment with high-resolution aerial sensors
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-03-01 DOI: 10.1016/j.jag.2025.104446
Moshe Meron, Moti Peres, Valerie Levin-Orlov, Gil Shoshani, Uri Marchaim, Assaf Chen
Irrigation uniformity is a key factor in optimizing water use efficiency and maximizing crop yields, particularly in semi-arid regions. This study investigates the use of high-resolution unmanned aerial vehicle (UAV) thermal and visible-light imagery to assess irrigation uniformity in three systems: surface, linear move, and solid-set irrigation. The research aims to quantify irrigation variability, identify its sources, and propose practical solutions to improve irrigation management through UAV-based mapping technologies. Case studies were conducted in surface-irrigated vineyards in the Murray River Valley (Australia), linear-move-irrigated peanut fields in the Hula Valley (Israel), and solid-set orchards in Northern Israel. Thermal imagery was used to calculate the Crop Water Stress Index (CWSI), while the Green-Red Vegetation Index (GRVI) was employed to assess long-term crop vigor. Irrigation uniformity was quantified using the Christiansen Uniformity Coefficient (CUC). The study revealed significant variability in irrigation uniformity across all systems. In surface irrigation, thermal imagery captured marked variability between the furrow head and tail caused by uneven water distribution. For linear move systems, RTK-GNSS monitoring revealed irregularities in tower movement that created a zigzag irrigation pattern, leading to areas of over- and under-irrigation. In solid-set systems, unexpected variability in crop stress was attributed to soil heterogeneity and historical land management practices. UAV-based imagery offers precise insights into irrigation uniformity, enabling targeted interventions. Variable-rate irrigation, emitter adjustments, and customized irrigation schedules are practical solutions for improving water distribution. Future research should focus on integrating AI and multi-sensor data to further enhance irrigation efficiency and provide actionable insights for farmers.
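The Christiansen Uniformity Coefficient used in this study has a standard closed form, CU = 100 · (1 − Σ|dᵢ − d̄| / Σdᵢ). A minimal sketch with illustrative catch values (not data from the study):

```python
def christiansen_cuc(depths):
    """Christiansen Uniformity Coefficient (%) for a set of measured
    water-application depths: CU = 100 * (1 - sum|d - mean| / sum(d))."""
    n = len(depths)
    mean = sum(depths) / n
    abs_dev = sum(abs(d - mean) for d in depths)
    return 100.0 * (1.0 - abs_dev / (mean * n))

# Illustrative catch values (not the study's measurements):
uniform_plot = [10.0, 10.5, 9.8, 10.2, 9.5]   # well-tuned solid-set block
head_to_tail = [14.0, 12.0, 10.0, 8.0, 6.0]   # furrow head-to-tail gradient
```

A perfectly even application gives CU = 100%; the head-to-tail gradient typical of surface irrigation scores markedly lower.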
Citations: 0
Understanding the effects of spatial scaling on the relationship between urban structure and biodiversity
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-03-01 DOI: 10.1016/j.jag.2025.104441
Dennis Heejoon Choi , Lindsay Darling , Jaeyoung Ha , Jinyuan Shao , Hunsoo Song , Songlin Fei , Brady S. Hardiman
Consideration of spatial dependence in heterogeneous urban landscapes is crucial for understanding how urban landscapes shape biodiversity. However, the linkage between urban landscape patterns, both vertical and horizontal, and urban-dwelling bird species across spatial scales remains an open question. Here, we investigated how patterns of vertical and horizontal urban landscape structure influence urban-dwelling bird species at various spatial scales in the Chicago Region. We utilized a high-density Airborne Laser Scanning (ALS) dataset to examine ALS-derived metrics (foliage height diversity, canopy openness, and building volume) in relation to bird diversity.
Our results show that the LiDAR-derived metrics exhibited significant variation across spatial scales. The negative effect of building volume on bird species was greatest at the smallest scale (slope = -0.24 at a 50 m radius) and declined as the scale increased (slope = 0.00 at a 500 m radius). Foliage height diversity did not influence bird diversity at small spatial scales but showed a positive effect beyond a 150 m radius (slope = 0.05 to 0.11). The slope for canopy openness changed sign from negative to positive as the buffer radius increased (between 150 and 200 m), indicating that openness may play different roles depending on the spatial scale. Based on our findings, a buffer radius of 150–200 m was concluded to be the threshold distinguishing local- and landscape-level variables in this study.
In general, horizontal landscape patterns have a stronger influence on urban biodiversity than vertical structures. However, our findings suggest that enhancing the vertical complexity of canopy structures in existing green spaces could be an effective strategy for sustaining bird diversity in urban areas, particularly where expanding green spaces is not feasible. Our study enhances the understanding of urban biodiversity dynamics and provides practical implications for urban landscape management and planning.
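The per-radius slopes quoted in the abstract come from relating bird diversity to each structural metric within a given buffer radius. A hedged sketch of that step with a plain least-squares slope; the data below are hypothetical standardized values chosen only to illustrate the scale-dependent pattern the study reports:

```python
def ols_slope(x, y):
    """Least-squares slope of y on x (e.g. bird diversity regressed
    on building volume computed within one buffer radius)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical standardized samples for two buffer radii: at the small
# radius diversity falls with building volume (values picked so the
# slope matches the reported -0.24); at the large radius it is flat.
building_vol = [0.0, 1.0, 2.0, 3.0]
diversity_50m = [1.0, 0.8, 0.5, 0.3]
diversity_500m = [0.6, 0.6, 0.6, 0.6]
```

Repeating this fit over a sweep of buffer radii is what reveals the 150–200 m threshold at which a metric's slope changes sign or magnitude.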
Citations: 0
Structure-aware deep learning network for building height estimation
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-03-01 DOI: 10.1016/j.jag.2025.104443
Yuehong Chen , Jiayue Zhou , Congcong Xu , Qiang Ma , Xiaoxiang Zhang , Ya’nan Zhou , Yong Ge
Accurate building height information is essential for urban management and planning. However, most existing methods rely on general segmentation networks for building height estimation, often ignoring the structural characteristics of buildings. This paper proposes a novel structure-aware building height estimation (SBHE) model to address this limitation. The model is designed as a dual-branch architecture: one branch extracts building footprints from Sentinel-2 imagery, while the other estimates building heights from Sentinel-1 imagery. A structure-aware decoder and a gating mechanism are integrated into SBHE to capture and account for the structural characteristics of buildings. Validation in the Yangtze River Delta region of China demonstrates that SBHE achieved a more accurate building height map (RMSE = 4.62 m) than four existing methods (RMSE = 5.071 m, 7.148 m, 10.16 m, and 13.41 m). Meanwhile, SBHE generated clearer building contours and better structural completeness. Thus, the proposed SBHE offers a robust tool for building height mapping. The source code of the SBHE model is available at: https://github.com/cheneason/SBHE-model.
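The accuracy comparison above rests on the root-mean-square error between estimated and reference heights. A minimal sketch over flattened height maps; the heights are illustrative, not the paper's validation data:

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error (m) between an estimated building-height
    map and reference heights, both flattened to 1-D lists."""
    assert len(estimated) == len(reference)
    sq = sum((e - r) ** 2 for e, r in zip(estimated, reference))
    return math.sqrt(sq / len(estimated))

# Illustrative heights in metres for four buildings:
est = [12.0, 30.0, 8.5, 21.0]
ref = [10.0, 33.0, 9.0, 20.0]
```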
Citations: 0
Assessment of forest fire vulnerability prediction in Indonesia: Seasonal variability analysis using machine learning techniques
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-02-28 DOI: 10.1016/j.jag.2025.104435
Wulan Salle Karurung , Kangjae Lee , Wonhee Lee
Forest fires significantly threaten Indonesia’s tropical forests, driven by complex interactions between human activity, environmental conditions, and climate variability. This research identifies and analyzes the factors influencing forest fires in Kalimantan, Sumatra, and Papua under rainy-season, dry-season, and all-season conditions using machine learning techniques, and creates vulnerability prediction maps with categorized risk zones. Eight years (2015–2022) of forest fire data were combined with 15 fire-susceptibility factors spanning human, environmental, meteorological, and land use/land cover conditions. Random forest (RF) and eXtreme Gradient Boosting (XGB) machine learning models were trained and validated through hyperparameter tuning and 10-fold cross-validation. The XGB model was selected as the best performer based on accuracy, recall, and F1-score and was used to generate probability values. The evaluation showed that the accuracies and AUC values for the nine models were greater than 0.7, with AUC values ranging from 0.71 to 0.95, indicating good performance. Papua had the highest accuracy, with 90.5%, 91.6%, and 92.5% for the all-season, rainy-season, and dry-season models, respectively. Population density, elevation, precipitation, soil moisture, NDMI, NDVI, distance from roads and settlements, land surface temperature, and peatlands are the key contributing factors to forest fire occurrence. Vulnerability maps were categorized into five risk zones, identifying high-risk areas that aligned with observed fire occurrences. This research highlights the diverse characteristics of the factors that determine forest fires and examines their impact on fire occurrence. The findings provide actionable insights for targeted fire management strategies, though future research should incorporate additional variables to improve predictive accuracy and address long-term environmental changes.
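Training the RF/XGB models themselves requires the original data, but the 10-fold cross-validation step can be sketched independently. Below is a minimal stratified fold assignment that keeps class proportions roughly equal across folds; the label mix is illustrative:

```python
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=10, seed=42):
    """Assign each sample index to one of k folds, keeping class
    proportions roughly equal across folds (stratified k-fold CV)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                 # randomize within each class
        for j, i in enumerate(idxs):
            folds[j % k].append(i)        # round-robin over the folds
    return folds

# Illustrative label mix: 70 non-fire vs 30 fire samples.
labels = [0] * 70 + [1] * 30
folds = stratified_kfold_indices(labels, k=10)
```

Each model is then trained on nine folds and evaluated on the held-out fold, cycling through all ten.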
Citations: 0
PhaseVSRnet: Deep complex network for phase-based satellite video super-resolution
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-02-28 DOI: 10.1016/j.jag.2025.104418
Hanyun Wang , Wenke Li , Huixin Fan , Song Ji , Chenguang Dai , Yongsheng Zhang , Jin Chen , Yulan Guo , Longguang Wang
Satellite video super-resolution (SR) aims to generate high-resolution (HR) frames from multiple low-resolution (LR) frames. To exploit motion cues under complicated motion patterns, most CNN-based methods first perform motion compensation and then aggregate motion cues in the aligned frames (features). However, due to the low spatial resolution of satellite videos, motions are usually subtle and difficult to capture in the spatial domain. Furthermore, the varied scales of moving objects challenge current satellite video SR methods in motion estimation and compensation. To address these challenges, we propose PhaseVSRnet, which converts satellite video frames into the phase domain. By representing motion information as phase shifts, subtle motions are enlarged in the phase domain. Specifically, PhaseVSRnet employs deep complex convolutions to better exploit the inherent correlation of the complex-valued decompositions obtained by complex-valued steerable pyramids. Then, we adopt a coarse-to-fine motion compensation mechanism to eliminate phase ambiguity at different levels. Finally, in the hierarchical reconstruction stage, we use a multi-scale fusion module to aggregate features from multiple levels and an upsampling layer to upsample the feature maps for resolution enhancement. With PhaseVSRnet, we effectively address the subtle motions and varying scales of moving objects in satellite videos. We assess its performance on a satellite video SR dataset from the Jilin-1 satellites and evaluate its generalization ability on another SR dataset from the OVS-1 satellites. The results show that PhaseVSRnet effectively captures motion cues in the phase domain and exhibits strong generalization capability across different satellite sensors in unseen scenarios.
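The core idea, that motion appears as a phase shift, follows from the Fourier shift theorem. A toy 1-D illustration (not the paper's complex steerable pyramid): shifting a signal by d samples multiplies its k-th frequency bin by exp(-2jπkd/n), so the motion can be read directly from the phase:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform of a real 1-D signal."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

n, d = 16, 1                                    # signal length, integer shift
sig = [cmath.cos(2 * cmath.pi * t / n).real for t in range(n)]
shifted = [sig[(t - d) % n] for t in range(n)]  # circularly shifted copy

# Phase difference at frequency bin k encodes the shift: -2*pi*k*d/n.
k = 1
dphi = cmath.phase(dft(shifted)[k]) - cmath.phase(dft(sig)[k])
```

The same relation holds for fractional d, which is why phase-based representations can magnify sub-pixel motion that is invisible in the spatial domain.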
Citations: 0
Unlocking the hidden secrets of the 2023 Al Haouz earthquake: Coseismic model reveals intraplate reverse faulting in Morocco derived from SAR and seismic data
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-02-26 DOI: 10.1016/j.jag.2025.104420
Min Bao , Mohamed I. Abdelaal , Mohamed Saleh , Mimoun Chourak , Makkaoui Mohamed , Mengdao Xing
The 2023 Mw 6.8 Al Haouz earthquake struck Morocco’s Atlas Mountains on September 8, causing over 3000 fatalities and extensive damage and revealing hidden seismic hazards in this slowly deforming region. Despite its impact, the Al Haouz earthquake has received limited scientific investigation. The absence of surface rupture, its occurrence in an intraplate seismic-silence zone, and ambiguous focal mechanisms have hindered understanding of the fault’s kinematics. To address these gaps, our study employs the Interferometric Synthetic Aperture Radar (InSAR) technique to refine the coseismic deformation. We further propose two fault-dipping scenarios, northward and southward, reinforced by a unique local seismic dataset, to evaluate the fault rupture characterization. Additionally, stress change analysis assessed the stress transfer between the mainshock and aftershocks, culminating in a comprehensive geodynamic model. Our findings reveal a northward-dipping reverse fault with a strike of 249.8°, a dip of 66°, and a rake of 55°, exhibiting a maximum slip of 1.75 m. The stress change analysis demonstrates that stress transfer from the mainshock reactivated pre-existing faults, particularly the Tizi n’Test fault system, triggering shallow aftershocks in high-stress zones. We suggest that mantle upwelling, coupled with fluid injection along pre-existing faults, drives seismic dynamics in the region. The Tizi n’Test fault likely extends to the lithosphere–asthenosphere boundary, where active upwelling facilitates magma fluid intrusion, stimulating seismic activity. These findings are consistent with recent research, providing deeper insights into fault mechanics in the Atlas Mountains. They also highlight the significant contribution of satellite-based SAR techniques in uncovering hidden seismic hazards.
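Stress-transfer screening of this kind is conventionally based on the Coulomb failure stress change, ΔCFS = Δτ + μ′Δσₙ. A minimal sketch; the effective friction coefficient below is a commonly assumed value, not one taken from this paper:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (MPa) on a receiver fault:
    dCFS = d_shear + mu_eff * d_normal, where d_shear is the shear
    stress change in the slip direction and d_normal > 0 means
    unclamping. mu_eff = 0.4 is a typical assumption, not this
    paper's stated value."""
    return d_shear + mu_eff * d_normal

# Positive dCFS brings the receiver fault closer to failure, which is
# how mainshock-to-aftershock triggering zones are usually mapped.
```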
Citations: 0
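The stress change analysis mentioned in the abstract above typically rests on the Coulomb failure stress change. As an editorial illustration — not the authors' code, and with assumed parameter values — a minimal sketch of that quantity:

```python
# Illustrative sketch of the Coulomb failure stress change (dCFS), the usual
# basis for mainshock-aftershock stress-transfer analysis:
#   dCFS = d_tau + mu_eff * d_sigma_n
# with d_sigma_n positive for unclamping (reduced normal compression).
# A positive dCFS moves a receiver fault closer to failure. The effective
# friction coefficient mu_eff = 0.4 is a commonly assumed value, not one
# taken from this paper.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Return the Coulomb failure stress change in MPa.

    d_shear_mpa  : shear stress change resolved in the slip direction
    d_normal_mpa : normal stress change (positive = unclamping)
    mu_eff       : effective friction coefficient
    """
    return d_shear_mpa + mu_eff * d_normal_mpa

# Example: 0.05 MPa of shear loading plus 0.1 MPa of unclamping
print(coulomb_stress_change(0.05, 0.1))  # ~0.09 MPa: loading toward failure
```

Regions where this quantity is positive are where reactivation of pre-existing receiver faults (such as the Tizi n'Test system) would be favored.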
Remote sensing image interpretation of geological lithology via a sensitive feature self-aggregation deep fusion network
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-02-26 DOI: 10.1016/j.jag.2025.104384
Kang He , Jie Dong , Haozheng Ma , Yujie Cai , Ruyi Feng , Yusen Dong , Lizhe Wang
Geological lithological interpretation is a key focus in Earth observation research, with applications in resource surveys, geological mapping, and environmental monitoring. Although deep learning (DL) methods have significantly improved the performance of lithological remote sensing interpretation, their accuracy remains far below the level achieved by visual interpretation performed by domain experts. This disparity stems primarily from the heavy reliance of current intelligent lithological interpretation methods on remote sensing imagery (RSI), coupled with insufficient exploitation of sensitive features (SF) and prior knowledge (PK), resulting in low interpretation precision. Furthermore, multi-modal SF and PK exhibit significant spatiotemporal heterogeneity, which hinders their direct integration into DL networks. In this work, we propose the sensitive feature self-aggregation deep fusion network (SFA-DFNet). Inspired by the visual interpretation practices of domain experts, we select the five most commonly used SF and one type of PK as multi-modal supplementary information. To address the spatiotemporal heterogeneity of SF and PK, we design a self-aggregation mechanism (SA-Mechanism) that dynamically selects and optimizes beneficial information from multi-modal features for lithological interpretation. This mechanism is broadly applicable and can be extended to any number of modalities. Additionally, we introduce the cross-modal feature interaction fusion module (CM-FIFM), which enhances the effective exchange and fusion of RSI, SF, and PK by leveraging long-range contextual information. Experimental results on two datasets demonstrate that differences in lithological genesis and types are critical factors affecting interpretation accuracy. Compared with seven state-of-the-art (SOTA) DL models, our method achieves more than a 3% improvement in mIoU, demonstrating its effectiveness and robustness.
{"title":"Remote sensing image interpretation of geological lithology via a sensitive feature self-aggregation deep fusion network","authors":"Kang He ,&nbsp;Jie Dong ,&nbsp;Haozheng Ma ,&nbsp;Yujie Cai ,&nbsp;Ruyi Feng ,&nbsp;Yusen Dong ,&nbsp;Lizhe Wang","doi":"10.1016/j.jag.2025.104384","DOIUrl":"10.1016/j.jag.2025.104384","url":null,"abstract":"<div><div>Geological lithological interpretation is a key focus in Earth observation research, with applications in resource surveys, geological mapping, and environmental monitoring. Although deep learning (DL) methods has significantly improved the performance of lithological remote sensing interpretation, its accuracy remains far below the level achieved by visual interpretation performed by domain experts. This disparity is primarily due to the heavy reliance of current intelligent lithological interpretation methods on remote sensing imagery (RSI), coupled with insufficient exploration of sensitive features (SF) and prior knowledge (PK), resulting in low interpretation precision. Furthermore, multi-modal SF and PK exhibit significant spatiotemporal heterogeneity, which hinders their direct integration into DL networks. In this work, we propose the sensitive feature self-aggregation deep fusion network (SFA-DFNet). Inspired by the visual interpretation practices of domain experts, we selected the five most commonly used SF and one type of PK as multi-modal supplementary information. To address the spatiotemporal heterogeneity of SF and PK, we designed a self-aggregation mechanism (SA-Mechanism) that dynamically selects and optimizes beneficial information from multi-modal features for lithological interpretation. This mechanism has broad applicability and can be extended to support any number of modal data. Additionally, we introduced the cross-modal feature interaction fusion module (CM-FIFM), which enhances the effective exchange and fusion of RSI, SF, and PK by leveraging long-range contextual information. 
Experimental results on two datasets demonstrate that differences in lithological genesis and types are critical factors affecting interpretation accuracy. Compared with seven SOTA DL models, our method achieves more than a 3% improvement in mIoU, showcasing its effectiveness and robustness.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"137 ","pages":"Article 104384"},"PeriodicalIF":7.6,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143488781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
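The self-aggregation idea described above — dynamically weighting however many modalities are available — can be pictured as a softmax-gated weighted sum over per-modality features. This is a hypothetical sketch under our own naming, not the paper's SA-Mechanism implementation (in the real network the scores would be learned, not hand-set):

```python
# Hypothetical self-aggregation step: each modality (e.g. RSI, sensitive
# features, prior knowledge) gets a relevance score; a softmax over scores
# yields per-modality weights, and the fused feature is the weighted sum.
# Because the softmax works over a list of any length, the scheme extends
# to any number of modalities, matching the claim in the abstract.
import math

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_aggregate(features, scores):
    """features: list of equal-length vectors, one per modality.
    scores: one relevance score per modality (learned in practice)."""
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

# Toy example with three modalities and hand-set scores
rsi = [1.0, 0.0]
sf = [0.0, 1.0]
pk = [0.5, 0.5]
fused = self_aggregate([rsi, sf, pk], scores=[2.0, 0.0, -2.0])
```

Here the high score on the first modality makes the fused vector lean toward the RSI feature; in training, backpropagation would adjust the scores so that informative modalities dominate.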
A SAM-adapted weakly-supervised semantic segmentation method constrained by uncertainty and transformation consistency
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-02-25 DOI: 10.1016/j.jag.2025.104440
Yinxia Cao , Xin Huang , Qihao Weng
Semantic segmentation of remote sensing imagery is a fundamental task that generates pixel-wise category maps. Existing deep learning networks rely heavily on dense pixel-wise labels, which incur high acquisition costs. Given this challenge, this study introduces sparse point labels, a cost-effective type of weak label, for semantic segmentation. Existing weakly-supervised methods often leverage low-level visual or high-level semantic features from networks to generate supervision for unlabeled pixels, which can easily introduce label noise. Furthermore, these methods rarely explore the Segment Anything Model (SAM), a general-purpose foundation model with strong zero-shot generalization capacity in image segmentation. In this paper, we propose a SAM-adapted weakly-supervised method with three components: 1) an adapted EfficientViT-SAM network (AESAM) for semantic segmentation guided by point labels, 2) an uncertainty-based pseudo-label generation module that selects reliable pseudo-labels for supervising unlabeled pixels, and 3) a transformation consistency constraint that enhances AESAM's robustness to data perturbations. The proposed method was tested on the ISPRS Vaihingen dataset (collected from an airplane), the Zurich Summer dataset (satellite), and the UAVid dataset (drone). Results demonstrated a significant improvement in mean F1 (by 5.89 %–10.56 %) and mean IoU (by 5.95 %–11.13 %) over the baseline method, and an increase in mean F1 (by 0.83 %–5.29 %) and mean IoU (by 1.04 %–6.54 %) over the closest competitors. Furthermore, our approach requires fine-tuning only a small number of parameters (0.9 M) using cheap point labels, making it promising for scenarios with limited labeling budgets. The code is available at https://github.com/lauraset/SAM-UTC-WSSS.
{"title":"A SAM-adapted weakly-supervised semantic segmentation method constrained by uncertainty and transformation consistency","authors":"Yinxia Cao ,&nbsp;Xin Huang ,&nbsp;Qihao Weng","doi":"10.1016/j.jag.2025.104440","DOIUrl":"10.1016/j.jag.2025.104440","url":null,"abstract":"<div><div>Semantic segmentation of remote sensing imagery is a fundamental task to generate pixel-wise category maps. Existing deep learning networks rely heavily on dense pixel-wise labels, incurring high acquisition costs. Given this challenge, this study introduces sparse point labels, a type of cost-effective weak labels, for semantic segmentation. Existing weakly-supervised methods often leverage low-level visual or high-level semantic features from networks to generate supervision information for unlabeled pixels, which can easily lead to the issue of label noises. Furthermore, these methods rarely explore the general-purpose foundation model, segment anything model (SAM), with strong zero-shot generalization capacity in image segmentation. In this paper, we proposed a SAM-adapted weakly-supervised method with three components: 1) an adapted EfficientViT-SAM network (AESAM) for semantic segmentation guided by point labels, 2) an uncertainty-based pseudo-label generation module to select reliable pseudo-labels for supervising unlabeled pixels, and 3) a transformation consistency constraint for enhancing AESAM’s robustness to data perturbations. The proposed method was tested on the ISPRS Vaihingen dataset (collected from airplane), the Zurich Summer dataset (satellite), and the UAVid dataset (drone). Results demonstrated a significant improvement in mean F1 (by 5.89 %–10.56 %) and mean IoU (by 5.95 %–11.13 %) compared to the baseline method. Compared to the closest competitors, there was an increase in mean F1 (by 0.83 %–5.29 %) and mean IoU (by 1.04 %–6.54 %). 
Furthermore, our approach requires only fine-tuning a small number of parameters (0.9 M) using cheap point labels, making it promising for scenarios with limited labeling budgets. The code is available at <span><span>https://github.com/lauraset/SAM-UTC-WSSS</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"137 ","pages":"Article 104440"},"PeriodicalIF":7.6,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143478958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
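The uncertainty-based pseudo-label generation described above can be illustrated with a simple entropy gate: confident predictions become pseudo-labels, ambiguous ones stay unsupervised. This is an editorial sketch of the general idea — the threshold, function names, and per-pixel representation are our assumptions, not the paper's module:

```python
# Minimal illustration of uncertainty-gated pseudo-labeling: for each
# unlabeled pixel, accept the argmax class of the softmax probabilities as a
# pseudo-label only when the predictive entropy is below a threshold;
# high-entropy pixels are marked -1 (left unsupervised) to limit label noise.
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_pseudo_labels(prob_maps, max_entropy=0.5):
    labels = []
    for probs in prob_maps:
        if entropy(probs) < max_entropy:
            labels.append(max(range(len(probs)), key=lambda c: probs[c]))
        else:
            labels.append(-1)  # too uncertain: excluded from supervision
    return labels

pixels = [[0.95, 0.03, 0.02],   # confident -> pseudo-label class 0
          [0.40, 0.35, 0.25]]   # ambiguous -> rejected
print(select_pseudo_labels(pixels))  # [0, -1]
```

The transformation consistency constraint would then additionally require that predictions for a pixel agree across perturbed versions of the same image, further filtering unstable supervision.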
Density uncertainty quantification with NeRF-Ensembles: Impact of data and scene constraints
IF 7.6 Q1 REMOTE SENSING Pub Date : 2025-02-24 DOI: 10.1016/j.jag.2025.104406
Miriam Jäger, Steven Landgraf, Boris Jutzi
In the fields of computer graphics, computer vision, and photogrammetry, Neural Radiance Fields (NeRFs) are a major topic driving current research and development. However, the quality of NeRF-generated 3D scene reconstructions, and of subsequent surface reconstructions, relies heavily on the network output, particularly the density. Addressing this critical aspect, we propose NeRF-Ensembles, which provide a density uncertainty estimate alongside the mean density. We demonstrate that data constraints such as low-quality images and poses degrade rendering quality, increase density uncertainty, and decrease the predicted density. Even with high-quality input data, the density uncertainty varies with scene constraints such as acquisition constellations, occlusions, and material properties. NeRF-Ensembles not only provide a tool for quantifying uncertainty but also exhibit two promising advantages: enhanced robustness and artifact removal. Averaging densities removes small outliers, yielding a smoother output with improved completeness. Furthermore, applying density uncertainty-guided artifact removal in post-processing proves effective for separating object and artifact areas. We evaluate our methodology on three datasets: (i) a synthetic benchmark dataset, (ii) a real benchmark dataset, and (iii) real data captured under realistic recording conditions and sensors.
{"title":"Density uncertainty quantification with NeRF-Ensembles: Impact of data and scene constraints","authors":"Miriam Jäger,&nbsp;Steven Landgraf,&nbsp;Boris Jutzi","doi":"10.1016/j.jag.2025.104406","DOIUrl":"10.1016/j.jag.2025.104406","url":null,"abstract":"<div><div>In the fields of computer graphics, computer vision and photogrammetry, Neural Radiance Fields (NeRFs) are a major topic driving current research and development. However, the quality of NeRF-generated 3D scene reconstructions and subsequent surface reconstructions, heavily relies on the network output, particularly the density. Regarding this critical aspect, we propose to utilize NeRF-Ensembles that provide a density uncertainty estimate alongside the mean density. We demonstrate that data constraints such as low-quality images and poses lead to a degradation of the rendering quality, increased density uncertainty and decreased predicted density. Even with high-quality input data, the density uncertainty varies based on scene constraints such as acquisition constellations, occlusions and material properties. NeRF-Ensembles not only provide a tool for quantifying the uncertainty but exhibit two promising advantages: Enhanced robustness and artifact removal. Through the mean densities, small outliers are removed, yielding a smoother output with improved completeness. Furthermore, applying a density uncertainty-guided artifact removal in post-processing proves effective for the separation of object and artifact areas. 
We conduct our methodology on 3 different datasets: (i) synthetic benchmark dataset, (ii) real benchmark dataset, (iii) real data under realistic recording conditions and sensors.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"137 ","pages":"Article 104406"},"PeriodicalIF":7.6,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143478957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
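The core ensemble mechanics above reduce to two statistics per density sample: the mean across ensemble members (the fused density) and their spread (the uncertainty). A hedged sketch, with our own names and a hand-picked threshold rather than the authors' implementation:

```python
# Hedged sketch of the NeRF-Ensemble idea: query M independently trained
# members for the density at a sample point, use the mean as the fused
# density and the population standard deviation as the uncertainty, then
# suppress samples whose uncertainty exceeds a threshold -- the
# "density uncertainty-guided artifact removal" step from the abstract.
from statistics import mean, pstdev

def fuse_density(member_densities, max_uncertainty=1.0):
    """Return (fused_density, uncertainty) for one 3D sample point."""
    mu = mean(member_densities)
    sigma = pstdev(member_densities)  # ensemble disagreement = uncertainty
    if sigma > max_uncertainty:
        return 0.0, sigma             # members disagree: likely artifact
    return mu, sigma                  # members agree: keep the mean density

surface_point = [5.0, 5.2, 4.9, 5.1]   # consistent predictions -> kept
artifact_point = [0.1, 4.0, 0.0, 6.0]  # inconsistent predictions -> removed
fused_surface = fuse_density(surface_point)
fused_artifact = fuse_density(artifact_point)
```

Averaging explains the reported smoothing of small outliers, while the threshold on disagreement explains why object and artifact regions can be separated in post-processing.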
Journal
International journal of applied earth observation and geoinformation : ITC journal