Pub Date: 2024-07-03 | DOI: 10.1007/s41064-024-00295-x
Mohamed Zahlan Abdul Muthalif, Davood Shojaei, Kourosh Khoshelham
This research aims to overcome the difficulties associated with visualizing underground utilities by proposing six interactive visualization methods that utilize Mixed Reality (MR) technology. By leveraging MR technology, which enables the seamless integration of virtual and real-world content, a more immersive and authentic experience is possible. The study evaluates the proposed visualization methods based on scene complexity, parallax effect, real-world occlusion, depth perception, and overall effectiveness, aiming to identify the most effective methods for addressing visual perceptual challenges in the context of underground utilities. The findings suggest that certain MR visualization methods are more effective than others in mitigating the challenges of visualizing underground utilities. The research highlights the potential of these methods, and feedback from industry professionals suggests that each method can be valuable in specific contexts.
Title: Interactive Mixed Reality Methods for Visualization of Underground Utilities
Journal: PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science
Pub Date: 2024-07-02 | DOI: 10.1007/s41064-024-00298-8
Hakan Uzakara, Nusret Demir, Serkan Karakış
Bathymetry is the measurement of ocean depths using a variety of techniques. Available techniques include sonar systems, light detection and ranging (LIDAR), and remote sensing systems. Acoustic (sonar) and LIDAR surveys are inefficient in terms of both time and money. This study applied remote sensing techniques to reduce both time and cost. The objective of this study is to use freely accessible Sentinel‑2 multispectral images to extract depth information. Temporal variation was minimized by comparing the histograms of satellite images obtained over four consecutive months. The sea topography is determined using regression analysis on samples from reference data. The reference data are adjusted for changes in shorelines, as shoreline alteration serves as a parameter for these modifications. Using the regression coefficients, depths were then estimated in regions where they were undetermined. The bathymetry maps were evaluated against a reference dataset and improved by incorporating extracted shorelines. The analyses were carried out individually over four months, and the derived bathymetric data showed significant monthly changes in average depth and shoreline position. The employed methodology offers an alternative approach for bathymetry studies that require temporal resolution when the available reference bathymetric data are insufficient.
Title: Satellite-based Bathymetry Supported by Extracted Coastlines
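The regression step described in this abstract can be sketched with a common empirical band-ratio model for satellite-derived bathymetry; the log-ratio form, the scaling constant `n`, and all reflectance and depth values below are illustrative assumptions, not the authors' actual data or coefficients:

```python
import numpy as np

# Synthetic reflectance samples for two Sentinel-2 bands (e.g. blue and green)
# and known reference depths; the exponential decay model and noise levels
# are placeholders for real water-leaving reflectance.
rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 20.0, 200)                       # metres
blue = np.exp(-0.04 * true_depth) + rng.normal(0, 0.005, 200) + 0.2
green = np.exp(-0.07 * true_depth) + rng.normal(0, 0.005, 200) + 0.2

# Band-ratio predictor: ln(n*blue) / ln(n*green) with a fixed scaling constant
n = 1000.0
x = np.log(n * blue) / np.log(n * green)

# Least-squares fit of depth = m1 * x + m0 against the reference samples
m1, m0 = np.polyfit(x, true_depth, 1)
predicted = m1 * x + m0
rmse = np.sqrt(np.mean((predicted - true_depth) ** 2))
```

In practice the reference samples would come from echo-sounder or chart depths, and the fitted coefficients would then be applied to all water pixels of the Sentinel-2 scene, as the abstract describes for regions with undetermined depths.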
Pub Date: 2024-04-29 | DOI: 10.1007/s41064-024-00286-y
Vijay K. Kannaujiya, Abhishek K. Rai, Sukanta Malakar
The coastal regions of India are densely populated and ecologically productive. However, they are susceptible to both human activity and natural calamities, which can cause erosion and accretion. As part of sustainable coastal zone management, these threats have taken precedence in evaluating shoreline dynamics. This study demonstrates the effectiveness of integrating remote sensing and geographic information systems for comprehensive long-term coastal change analysis. The analysis reveals that mean erosion rates along the Chennai coast range from −0.2 to −2.5 m/year, while accretion is also recorded along certain parts of the Chennai coast at rates of 1 to 4.6 m/year. The Vishakhapatnam shoreline shows a consistent pattern of both erosion and accretion, with erosion rates of −0.1 to −6.8 m/year and accretion rates of 0.2 to 5 m/year. Most of the Puri coast, however, exhibits accretion, with rates of approximately 0.1 to 3.22 m/year. Shoreline fluctuations in these three metropolises are of great concern, given that these coastal cities play a substantial part in India’s economic and cultural endeavors. Climate change and global warming have raised global sea level and heightened the intensity and frequency of extreme events such as tropical cyclones in the Bay of Bengal, where these three coasts are situated. The coastlines of these urban areas may therefore change due to natural phenomena such as sea level rise and tropical cyclones, as well as a diverse array of human activities. This study may help in formulating suitable management strategies and regulations for the coastal areas of Vishakhapatnam, Puri, Chennai, and other Indian coastal cities with similar physical attributes.
Title: Coastal Shoreline Change in Eastern Indian Metropolises
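Shoreline change rates of the kind quoted in this abstract are typically computed per transect, for instance as an end point rate (EPR): the net shoreline displacement divided by the elapsed time. The baseline-to-shoreline distances below are hypothetical, not values from the study:

```python
# Hypothetical per-transect distances (m) of the shoreline from a fixed
# baseline at two epochs; negative rates indicate erosion (landward retreat),
# positive rates indicate accretion.
years = 2018 - 1990
dist_1990 = [120.0, 95.0, 140.0]
dist_2018 = [64.0, 101.0, 98.0]

epr = [(d2 - d1) / years for d1, d2 in zip(dist_1990, dist_2018)]
# epr = [-2.0, ~0.21, -1.5] m/year: transects 1 and 3 erode, transect 2 accretes
```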
Pub Date: 2024-04-19 | DOI: 10.1007/s41064-024-00284-0
Fahri Aykut, Devrim Tezcan
Coastal areas are inherently sensitive and dynamic, susceptible to natural forces like waves, winds, currents, and tides. Human activities further accelerate coastal changes, while climate change and global sea level rise add to the challenges. Recognizing and safeguarding these coasts, vital for both socioeconomic and environmental reasons, becomes imperative. The objective of this study is to categorize the coasts of the Mersin and İskenderun bays along the southeastern coast of Türkiye based on their vulnerability to natural forces and human-induced factors using the coastal vulnerability index (CVI) method. The study area encompasses approximately 520 km of coastline. The coastal vulnerability analysis reveals that the coastal zone comprises various levels of vulnerability along the total coastline: 24.7% (128 km) is categorized as very high vulnerability, 27.4% (142 km) as high vulnerability, 23.7% (123 km) as moderate vulnerability, and 24.3% (126 km) as low vulnerability. Key parameters influencing vulnerability include coastal slope, land use, and population density. High and very high vulnerability are particularly prominent in coastal plains characterized by gentle slopes, weak geological and geomorphological features, and significant socioeconomic value.
Title: Evaluating Sea Level Rise Impacts on the Southeastern Türkiye Coastline: a Coastal Vulnerability Perspective
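The CVI method referenced in this abstract commonly combines ranked vulnerability parameters as the square root of their product divided by their number. The parameter ranks below are hypothetical, for illustration only; the study's actual parameters and ranking scheme may differ:

```python
import math

# Hypothetical ranks (1 = very low to 5 = very high vulnerability) for one
# coastal segment: geomorphology, coastal slope, sea level change,
# shoreline change, land use, population density.
ranks = [4, 5, 3, 4, 5, 4]

# Common CVI formulation: sqrt of the product of ranks over their count;
# segments are then binned into low/moderate/high/very high classes.
cvi = math.sqrt(math.prod(ranks) / len(ranks))
```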
Pub Date: 2024-04-03 | DOI: 10.1007/s41064-024-00281-3
Michael Kölle, Volker Walter, Uwe Sörgel
In recent years, significant progress has been made in developing supervised Machine Learning (ML) systems like Convolutional Neural Networks. However, it’s crucial to recognize that the performance of these systems heavily relies on the quality of labeled training data. To address this, we propose a shift in focus towards developing sustainable methods of acquiring such data instead of solely building new classifiers in the ever-evolving ML field. Specifically, in the geospatial domain, the process of generating training data for ML systems has been largely neglected in research. Traditionally, experts have been burdened with the laborious task of labeling, which is not only time-consuming but also inefficient. In our system for the semantic interpretation of Airborne Laser Scanning point clouds, we break with this convention and completely remove labeling obligations from domain experts who have completed special training in geosciences and instead adopt a hybrid intelligence approach. This involves active and iterative collaboration between the ML model and humans through Active Learning, which identifies the most critical samples justifying manual inspection. Only these samples (typically ≪1% of the Passive Learning training points) are subject to human annotation. To carry out this annotation, we choose to outsource the task to a large group of non-specialists, referred to as the crowd, which comes with the inherent challenge of guiding those inexperienced annotators (i.e., “short-term employees”) to still produce labels of sufficient quality. However, we acknowledge that attracting enough volunteers for crowdsourcing campaigns can be challenging due to the tedious nature of labeling tasks. To address this, we propose employing paid crowdsourcing and providing monetary incentives to crowdworkers. This approach ensures access to a vast pool of prospective workers through respective platforms, ensuring timely completion of jobs.
Effectively, crowdworkers become human processing units in our hybrid intelligence system mirroring the functionality of electronic processing units.
Title: Building a Fully-Automatized Active Learning Framework for the Semantic Segmentation of Geospatial 3D Point Clouds
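The Active Learning selection described in this abstract can be illustrated with a single least-confidence query step. The mock class probabilities, the pool size, and the tiny labeling budget below are assumptions for illustration, not the authors' setup:

```python
import numpy as np

# Toy pool-based active learning step: pick the least-confident samples
# from an unlabeled pool for crowd annotation. The probabilities are mocked;
# in practice they would come from the current classifier's predictions.
rng = np.random.default_rng(1)
pool_probs = rng.dirichlet(np.ones(3), size=100)    # class probs, 100 points

# Least-confidence criterion: 1 minus the maximum class probability
uncertainty = 1.0 - pool_probs.max(axis=1)
budget = 5                                          # only a tiny fraction labeled
query_idx = np.argsort(uncertainty)[-budget:]       # most uncertain samples
```

Only the queried samples would be sent to the crowd for annotation; the model is retrained on the enlarged labeled set and the loop repeats.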
Pub Date: 2024-03-25 | DOI: 10.1007/s41064-024-00280-4
Steven Landgraf, Kira Wursthorn, Markus Hillemann, Markus Ulrich
The intersection of deep learning and photogrammetry unveils a critical need for balancing the power of deep neural networks with interpretability and trustworthiness, especially for safety-critical applications such as autonomous driving, medical imaging, or machine vision tasks with high demands on reliability. Quantifying the predictive uncertainty is a promising endeavour to open up the use of deep neural networks for such applications. Unfortunately, most currently available methods are computationally expensive. In this work, we present a novel approach for efficient and reliable uncertainty estimation for semantic segmentation, which we call Deep Uncertainty Distillation using Ensembles for Segmentation (DUDES). DUDES applies student-teacher distillation with a Deep Ensemble to accurately approximate predictive uncertainties in a single forward pass while maintaining simplicity and adaptability. Experimentally, DUDES accurately captures predictive uncertainties without sacrificing performance on the segmentation task and shows impressive capabilities for highlighting wrongly classified pixels and out-of-domain samples through high uncertainties on the Cityscapes and Pascal VOC 2012 datasets. With DUDES, we manage to simultaneously simplify and outperform previous work on Deep-Ensemble-based uncertainty distillation.
Title: DUDES: Deep Uncertainty Distillation using Ensembles for Semantic Segmentation
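The teacher signal that a student network distills in this kind of approach can be illustrated by computing a Deep Ensemble's per-pixel predictive entropy. The mocked ensemble outputs, shapes, and the choice of entropy as the uncertainty measure are assumptions for illustration; DUDES's actual architecture and loss are not reproduced here:

```python
import numpy as np

# Mocked outputs of a 5-member Deep Ensemble: softmax maps of shape
# (members, H, W, classes) for a tiny 4x4 image with 3 classes.
rng = np.random.default_rng(2)
logits = rng.normal(size=(5, 4, 4, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

mean_probs = probs.mean(axis=0)        # ensemble prediction, shape (H, W, classes)
entropy = -(mean_probs * np.log(mean_probs)).sum(axis=-1)  # per-pixel uncertainty
# 'entropy' (H, W) could serve as the regression target for a student's
# uncertainty head, so one forward pass approximates the whole ensemble
```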
Pub Date: 2024-03-18 | DOI: 10.1007/s41064-024-00278-y
Mohd Waseem Naikoo, Shahfahad, Swapan Talukdar, Mohd Rihan, Ishita Afreen Ahmed, Hoang Thi Hang, M. Ishtiaq, Atiqur Rahman
Monitoring of real estate growth is essential with the increasing demand for housing and working space in cities. In this study, a new methodological framework is proposed to map the area under real estate using geospatial techniques. In this framework, the built-up area and open land at successive stages of development are used to map the area under real estate. Three machine learning algorithms were used, namely random forest (RF), support vector machine (SVM), and feedforward neural networks (FFNN), to classify the land use and land cover (LULC) map of Delhi NCR during 1990–2018, which is the basic input for real estate mapping. The results of the study show that optimized RF performed better than SVM and FFNN in LULC classification. The real estate land increased by 279% in Delhi NCR during 1990–2018. The area under real estate increased by 33%, 47%, 29%, 21%, and 22% during 1990–1996, 1996–2003, 2003–2008, 2008–2014, and 2014–2018, respectively. Among the cities surrounding Delhi, Gurgaon, Rohtak, Noida, and Faridabad have witnessed maximum real estate growth. The approach used in this study could be used for real estate mapping in other cities across the world.
Title: A Geospatial Approach to Mapping and Monitoring Real Estate-Induced Urban Expansion in the National Capital Region of Delhi
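The pixel-wise LULC classification step can be sketched with scikit-learn's random forest; the synthetic spectral features and mock two-class labels below are placeholders for the study's imagery-derived training data, and the hyperparameters are defaults rather than the tuned values the abstract alludes to:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative per-pixel classification: 6 spectral bands as features,
# classes such as built-up vs. open land. All data are synthetic.
rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 6))                    # 6 band values per pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # mock 2-class ground truth

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:200], y[:200])                      # train on 200 labeled pixels
accuracy = clf.score(X[200:], y[200:])         # hold-out accuracy
```

A real workflow would classify every pixel of each epoch's image, then intersect built-up and open-land masks across epochs to delineate land under real estate development.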
Pub Date: 2024-03-18 | DOI: 10.1007/s41064-024-00279-x
Kanako Sawa, Ilyas Yalcin, Sultan Kocaman
The detection and continuous updating of buildings in geodatabases has long been a major research area in geographic information science and is an important theme for national mapping agencies. Advancements in machine learning techniques, particularly state-of-the-art deep learning (DL) models, offer promising solutions for extracting and modeling building rooftops from images. However, tasks such as automatic labelling of learning data and the generalizability of models remain challenging. In this study, we assessed the sensor and geographic area adaptation capabilities of a pretrained DL model implemented in the ArcGIS environment using very-high-resolution (50 cm) SkySat imagery. The model was trained for digitizing building footprints via Mask R‑CNN with a ResNet50 backbone using aerial and satellite images from parts of the USA. Here, we utilized images from three different SkySat satellites with various acquisition dates and off-nadir angles and refined the pretrained model using small numbers of buildings as training data (5–53 buildings) over Ankara. We evaluated buildings in areas with different characteristics, such as urban transformation zones, slums, and regular residential areas, and obtained high accuracies, with F‑1 scores of 0.92, 0.94, and 0.96 from SkySat 4, 7, and 17, respectively. The study findings showed that the DL model has high transfer learning capability for Ankara using only a few buildings and that the recent SkySat satellites demonstrate superior image quality.
Title: Building Detection from SkySat Images with Transfer Learning: a Case Study over Ankara
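The reported F‑1 scores combine precision and recall of the detected buildings. As a quick illustration with hypothetical detection counts (not figures from the study):

```python
# F1 from hypothetical building-detection counts: true positives,
# false positives (spurious detections), false negatives (missed buildings).
tp, fp, fn = 92, 8, 8
precision = tp / (tp + fp)                            # 0.92
recall = tp / (tp + fn)                               # 0.92
f1 = 2 * precision * recall / (precision + recall)    # harmonic mean: 0.92
```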
During flood events, near real-time synthetic aperture radar (SAR) satellite imagery has proven to be an efficient tool for disaster management authorities. However, one of the challenges is accurate classification and segmentation of flooded water. A common method of SAR-based flood mapping is binary segmentation by thresholding, but this method is limited due to the effects of backscatter, geographical area, and surface characteristics. Recent advancements in deep learning algorithms for image segmentation have demonstrated excellent potential for improving flood detection. In this paper, we present a deep learning approach with a nested UNet architecture based on a backbone of EfficientNet-B7 by leveraging a publicly available Sentinel‑1 dataset provided jointly by NASA and the IEEE GRSS Committee. The performance of the nested UNet model was compared with several other UNet-based convolutional neural network architectures. The models were trained on flood events from Nebraska and North Alabama in the USA, Bangladesh, and Florence, Italy. Finally, the generalization capacity of the trained nested UNet model was compared to the other architectures by testing on Sentinel‑1 data from flood events of varied geographical regions such as Spain, India, and Vietnam. The impact of using different polarization band combinations of input data on the segmentation capabilities of the nested UNet and other models is also evaluated using Shapley scores. The results of these experiments show that the UNet model architectures perform comparably to the UNet++ with EfficientNet-B7 backbone for both the NASA dataset and the other test cases. Therefore, it can be inferred that these models can be trained on certain flood events provided in the dataset and used for flood detection in other geographical areas, thus proving the transferability of these models. 
However, the effect of polarization on performance still varies across test cases from around the world; the model trained with a combination of the individual VV and VH bands and their polarization ratios gives the best results.
{"title":"Automatic Flood Detection from Sentinel-1 Data Using a Nested UNet Model and a NASA Benchmark Dataset","authors":"Binayak Ghosh, Shagun Garg, Mahdi Motagh, Sandro Martinis","doi":"10.1007/s41064-024-00275-1","DOIUrl":"https://doi.org/10.1007/s41064-024-00275-1","url":null,"abstract":"<p>During flood events, near real-time synthetic aperture radar (SAR) satellite imagery has proven to be an efficient tool for disaster management authorities. However, one of the challenges is accurate classification and segmentation of flooded water. A common method of SAR-based flood mapping is binary segmentation by thresholding, but this method is limited due to the effects of backscatter, geographical area, and surface characteristics. Recent advancements in deep learning algorithms for image segmentation have demonstrated excellent potential for improving flood detection. In this paper, we present a deep learning approach with a nested UNet architecture based on a backbone of EfficientNet-B7 by leveraging a publicly available Sentinel‑1 dataset provided jointly by NASA and the IEEE GRSS Committee. The performance of the nested UNet model was compared with several other UNet-based convolutional neural network architectures. The models were trained on flood events from Nebraska and North Alabama in the USA, Bangladesh, and Florence, Italy. Finally, the generalization capacity of the trained nested UNet model was compared to the other architectures by testing on Sentinel‑1 data from flood events of varied geographical regions such as Spain, India, and Vietnam. The impact of using different polarization band combinations of input data on the segmentation capabilities of the nested UNet and other models is also evaluated using Shapley scores. The results of these experiments show that the UNet model architectures perform comparably to the UNet++ with EfficientNet-B7 backbone for both the NASA dataset and the other test cases. 
Therefore, it can be inferred that these models can be trained on certain flood events provided in the dataset and used for flood detection in other geographical areas, thus proving the transferability of these models. However, the effect of polarization on performance still varies across test cases from around the world; the model trained with a combination of the individual VV and VH bands and their polarization ratios gives the best results.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140114914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
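The Shapley scores mentioned above attribute segmentation performance to the polarization bands supplied as input. For a small feature set, exact Shapley values can be computed by enumerating all coalitions. A sketch under stated assumptions — the coalition IoU values below are invented for illustration, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: the weighted average marginal contribution of
    each feature over all coalitions of the remaining features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = frozenset(subset)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Hypothetical validation scores (e.g. IoU) for each band coalition;
# the numbers are illustrative only.
iou = {
    frozenset(): 0.0,
    frozenset({"VV"}): 0.6,
    frozenset({"VH"}): 0.5,
    frozenset({"VV", "VH"}): 0.8,
}
scores = shapley_values(["VV", "VH"], lambda s: iou[frozenset(s)])
print(scores)
```

By the efficiency property, the per-band scores sum to the score of the full band combination, which is what makes them a principled way to compare polarization inputs.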
Pub Date : 2024-03-08 DOI: 10.1007/s41064-024-00277-z
Abstract
The aim of this study is to conduct a risk analysis of fluvial and pluvial flood disasters, focusing on the vulnerability of those residing in the river basin in coastal regions. However, there are numerous factors and indicators that need to be considered for this type of analysis. Swift and precise acquisition and evaluation of such data are an arduous task, necessitating significant public investment. Remote sensing offers unique data and information flow solutions in areas where access to information is restricted. The Google Earth Engine (GEE), a remote sensing platform, offers strong support to users and researchers in this context. A data-based and informative case study has been conducted to evaluate the disaster risk analysis capacity of the platform. Data on three factors and 17 indicators for assessing disaster risk were determined using coding techniques and web geographic information system (web GIS) applications. The study focused on the Filyos River basin in Turkey. Various satellite images and datasets were utilized to identify indicators, while land use was determined using classification studies employing machine learning algorithms on the GEE platform. Using various applications, we obtained information on ecological vulnerability, fluvial and pluvial flooding analyses, and the value of indicators related to construction and population density. Within the scope of the analysis, it has been determined that the disaster risk index (DRI) value for the basin is 4. This DRI value indicates that an unacceptable risk level exists for the 807,889 individuals residing in the basin.
{"title":"Disaster Risk Assessment of Fluvial and Pluvial Flood Using the Google Earth Engine Platform: a Case Study for the Filyos River Basin","authors":"","doi":"10.1007/s41064-024-00277-z","DOIUrl":"https://doi.org/10.1007/s41064-024-00277-z","url":null,"abstract":"<h3>Abstract</h3> <p>The aim of this study is to conduct a risk analysis of fluvial and pluvial flood disasters, focusing on the vulnerability of those residing in the river basin in coastal regions. However, there are numerous factors and indicators that need to be considered for this type of analysis. Swift and precise acquisition and evaluation of such data are an arduous task, necessitating significant public investment. Remote sensing offers unique data and information flow solutions in areas where access to information is restricted. The Google Earth Engine (GEE), a remote sensing platform, offers strong support to users and researchers in this context. A data-based and informative case study has been conducted to evaluate the disaster risk analysis capacity of the platform. Data on three factors and 17 indicators for assessing disaster risk were determined using coding techniques and web geographic information system (web GIS) applications. The study focused on the Filyos River basin in Turkey. Various satellite images and datasets were utilized to identify indicators, while land use was determined using classification studies employing machine learning algorithms on the GEE platform. Using various applications, we obtained information on ecological vulnerability, fluvial and pluvial flooding analyses, and the value of indicators related to construction and population density. Within the scope of the analysis, it has been determined that the disaster risk index (DRI) value for the basin is 4. 
This DRI value indicates that an unacceptable risk level exists for the 807,889 individuals residing in the basin.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140075774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
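The abstract above reports a single disaster risk index aggregated from 3 factors and 17 indicators, but does not spell out the aggregation scheme. A minimal sketch of one plausible scheme, assuming an equal-weight mean of normalized indicator scores rounded to an integer index; both the weighting and the example scores are assumptions for illustration, not the paper's published method:

```python
def disaster_risk_index(scores, weights=None):
    """Weighted mean of normalized indicator scores, rounded to an integer
    index. Equal weights by default; the weighting scheme and the score
    scale are assumptions, not taken from the study."""
    if weights is None:
        weights = [1.0] * len(scores)
    weighted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return round(weighted)

# Invented indicator scores for illustration only; the study groups its
# 17 indicators under three factors before aggregation.
print(disaster_risk_index([4, 4, 5, 3, 4]))
```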