Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-153-2023
I. Petrovska, M. Jäger, D. Haitz, B. Jutzi
Abstract. Neural Radiance Fields (NeRFs) use a set of camera poses with associated images to represent a scene through a position-dependent density and radiance at a given spatial location. A geometric representation in the form of a point cloud is obtained by ray tracing and sampling 3D points with density and color along the rays. In this contribution we evaluate object reconstruction by NeRFs in 3D metric space against Terrestrial Laser Scanning (TLS), using ground truth data in the form of a Structured Light Imaging (SLI) mesh, and investigate the influence of the density on the reconstruction’s accuracy. We extend the accuracy assessment from 2D to 3D space and perform high-resolution investigations on NeRFs, using camera images with 36 MP resolution and comparing point clouds of more than 20 million points against a 0.1 mm ground truth mesh. TLS achieves the highest geometric accuracy, with a standard deviation of 1.68 mm, while NeRFδt=300 diverges 18.61 mm from the ground truth. All NeRF reconstructions contain 3D points inside the object; these have the highest displacements from the ground truth and thus contribute the most to the accuracy results. NeRF accuracy improves as the density threshold increases, at the cost of completeness, since besides noise and outliers genuine object points are also removed.
Title: GEOMETRIC ACCURACY ANALYSIS BETWEEN NEURAL RADIANCE FIELDS (NERFS) AND TERRESTRIAL LASER SCANNING (TLS)
Journal: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
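The point cloud extraction the abstract describes (ray tracing, then keeping samples whose density exceeds a threshold such as δt = 300) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; `density_fn` stands in for a trained NeRF queried at 3D positions.

```python
import numpy as np

def extract_point_cloud(density_fn, rays_o, rays_d, t_near, t_far,
                        n_samples=64, density_threshold=300.0):
    """Sample 3D points along camera rays and keep those whose volume
    density exceeds a threshold (cf. the NeRF density threshold)."""
    # Evenly spaced sample depths shared by all rays, shape (S,)
    t_vals = np.linspace(t_near, t_far, n_samples)
    # Points: origin + depth * direction, broadcast to shape (R, S, 3)
    pts = rays_o[:, None, :] + t_vals[None, :, None] * rays_d[:, None, :]
    flat = pts.reshape(-1, 3)
    sigma = density_fn(flat)        # density per sampled point, shape (R*S,)
    return flat[sigma > density_threshold]
```

Raising `density_threshold` removes noise and outliers first but, as the abstract notes, eventually discards genuine object points as well, trading completeness for accuracy.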
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-63-2023
M. Goebel, D. Iwaszczuk
Abstract. Plants signal their health in a broader spectrum than we can see with our eyes. In our study, we compared sunlight reflectance on plants at spectral wavelengths ranging from 430 nm to 870 nm, based on multispectral images captured at a distance of 2 m. Indoor plants were observed over a period of 18 days and stressed through a lack of sunlight or water. Wild sedge photographed on the forest floor at close range, under a difficult capture setup, produced results comparable to published multispectral signatures derived from aerial imagery. Changes in leaf reflectance were observed both in spectral signatures and in vegetation indices. When calculating vegetation indices, our results show that comparing red and red edge reflectance values is superior to comparing red and NIR reflectance values.
Title: SPECTRAL ANALYSIS OF IMAGES OF PLANTS UNDER STRESS USING A CLOSE-RANGE CAMERA
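The abstract does not give the index formulas; as a hedged illustration, the red/NIR and red/red-edge comparisons can both be expressed with the usual normalized-difference form. The reflectance values below are invented for the example, not measurements from the paper.

```python
import numpy as np

def normalized_difference(b_high, b_low):
    """Generic normalized-difference index: (b_high - b_low) / (b_high + b_low)."""
    b_high = np.asarray(b_high, dtype=float)
    b_low = np.asarray(b_low, dtype=float)
    return (b_high - b_low) / (b_high + b_low)

# Invented reflectances for a healthy leaf (illustrative only)
red, red_edge, nir = 0.05, 0.25, 0.45
ndvi = normalized_difference(nir, red)              # red vs. NIR
nd_red_edge = normalized_difference(red_edge, red)  # red vs. red edge
```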
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-47-2023
E. M. Farella, F. Remondino, C. Cahalane, R. Qin, A. M. Loghin, M. Di Tullio, N. Haala, J. Mills
Abstract. In recent decades, the geospatial domain has benefitted from technological advances in sensors, methodologies, and processing tools that expand capabilities in mapping applications. Airborne techniques (LiDAR and aerial photogrammetry) generally provide most of the data used for this purpose. However, despite the accuracy of these technologies and the high spatial resolution of airborne data, updates are not sufficiently regular due to significant flight costs and logistics. New possibilities to fill this information gap emerged with the advent of Very High Resolution (VHR) optical satellite images in the early 2000s. In addition to the high temporal resolution of these cost-effective datasets and their sub-meter geometric resolution, their synoptic coverage is an unprecedented opportunity for mapping remote areas, multi-temporal analyses, updating datasets and disaster management. For all these reasons, VHR satellite imagery is clearly of interest to National Mapping and Cadastral Agencies (NMCAs). This work, supported by EuroSDR, summarises a series of experimental analyses carried out over diverse landscapes to explore the potential of VHR imagery for large-scale mapping.
Title: GEOMETRIC PROCESSING OF VERY HIGH-RESOLUTION SATELLITE IMAGERY: QUALITY ASSESSMENT FOR 3D MAPPING NEEDS
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-199-2023
J. Wajs, D. Kasza
Abstract. The BATDRON Unmanned Surface Vessel is an original platform developed at Wrocław University of Science and Technology, intended to conduct measurements using a diverse range of sensors. Initially it was used to measure the geometry of reservoirs and waterways; however, owing to the constant development of the platform, it became possible to mount a new sensor, a non-metric camera, and to carry out photogrammetric measurements. The first tests were carried out using a NIKON D800 camera and included measuring the Grunwaldzki Bridge in Wrocław. The analysis of the trajectory of the BATDRON remotely controlled (RC) vessel demonstrated that it maintained the programmed course (the average distance from the set mission route was -0.04 m), which is of great importance for photogrammetric measurement sessions. In addition, internal quality control (IQC) of the data obtained from the non-metric camera against reference data from a Riegl laser scanner showed satisfactory accuracy of the photogrammetric data.
Title: A MULTI-PURPOSE USV PROTOTYPE FOR PHOTOGRAMMETRY APPLICATIONS – CASE STUDY OF A 3D MODEL OF THE GRUNWALDZKI BRIDGE (WROCŁAW, POLAND)
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-9-2023
R. Beber, G. Perda, N. Takhtkeshha, F. Remondino, T. Maffei, D. Poli, K. Moe, P. Cipriano, M. Ciliberti
Abstract. The Urban Data Space for Green Deal (USAGE) project is funded by the European Union (EU) to support the green transition of cities. Within USAGE, a series of geospatial, thematic and other datasets have been newly acquired or created to test and evaluate solutions (i) to better understand issues and trends in how our planet and its climate are changing and (ii) to address the role that humans play in these changes, e.g. through behaviour adaptation and mitigation actions. The paper presents relevant datasets collected in two urban areas, reporting processing methodologies and applications of analysis-ready and decision-ready geospatial data. Owing to their resolution and sensor types, the shared data form unique urban datasets and could boost progress in geospatial procedures for creating and using data for climate change adaptation, renewable energy monitoring and management, and more.
Title: MULTI-MODAL GEOSPATIAL AND THEMATIC DATA TO FOSTER GREEN DEAL APPLICATIONS
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-39-2023
Y. Castillo-Campo, X. Monteys, A. L. Beck, C. Cahalane
Abstract. Accurate and consistent mapping of the boundary between land and water (the ‘waterline’) is critical for tracking coastal change and for coastal management. Earth Observation (EO) satellite remote sensing provides a uniquely cost-effective alternative to traditional methods. Waterlines are often derived from satellite imagery using spectral indices that separate land from water. The validation strategy for these products requires a complex approach, from accuracy assessment (quantifying error) to verification of their suitability for monitoring applications. Traditionally, the accuracy of EO products is reduced and simplified to the resolution of the sensor or satellite that collects the data. However, environmental variables (sea conditions, weather, vegetation, anthropogenic activity) that may have a direct effect on the sensor and on the coastline being monitored are not taken into consideration. Segments of Sentinel-2-derived waterlines were selected in North Bull Island for further analysis, towards a new benchmark dataset for understanding the waterline models of eastern Ireland. In our novel approach, horizontal accuracy is assessed using the mean absolute distance between the GNSS reference line and the Sentinel-2-derived waterline. Vertical accuracy is then calculated as the difference between the attributed waterline height and the mean GNSS elevation at the intersection points. Results were then compared with the Dublin Port tide gauge height record. The development of reference validation models can allow more efficient application of satellite data for monitoring and a better understanding of how environmental variables affect each case study.
Title: SENTINEL-2 DERIVED WATERLINES FOR COASTAL MONITORING APPLICATIONS: A NEW APPROACH FOR QUANTIFYING VERTICAL AND HORIZONTAL ACCURACIES
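A minimal 2D sketch of the horizontal metric described above: the mean absolute distance from GNSS reference points to the derived waterline polyline. It illustrates the stated metric only and is not the authors' code.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Shortest distance from 2D point p to the segment from a to b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def mean_absolute_distance(ref_points, waterline):
    """Mean absolute distance from each GNSS reference point to the
    nearest segment of the derived waterline (an (N, 2) vertex array)."""
    segments = list(zip(waterline[:-1], waterline[1:]))
    dists = [min(point_to_segment(p, a, b) for a, b in segments)
             for p in ref_points]
    return float(np.mean(dists))
```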
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-79-2023
J. Koszyk, P. Łabędź, K. Grzelka, A. Jasińska, K. Pargieła, A. Malczewska, K. Strząbała, M. Michalczak, Ł. Ambroziński
Abstract. Simultaneous localization and mapping (SLAM) is essential for a robot operating in an unknown, vast environment. LiDAR-based SLAM can be used in environments where other sensors cannot deliver reliable measurements. Producing accurate map results requires particular attention due to deviations originating from the device. This study aims to assess LiDAR-based mapping quality in a vast environment, with measurements conducted on a mobile platform. The accuracy of the map collected with the LeGO-LOAM method was assessed by comparing it, via ICP, to a map gathered with geodetic laser scanning. The results show 60% of fitted points within a distance of 5 cm and 80% within 10 cm. The findings demonstrate the usability of maps created with this method for other tasks, including autonomous driving and semantic segmentation.
Title: EVALUATION OF LIDAR ODOMETRY AND MAPPING BASED ON REFERENCE LASER SCANNING
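The 5 cm / 10 cm figures quoted above are cumulative fractions of nearest-neighbour distances between the aligned SLAM map and the reference scan. Below is a brute-force sketch; a real pipeline would use a spatial index (e.g. a KD-tree) for clouds of this size, and the function name is ours, not the paper's.

```python
import numpy as np

def fraction_within(source_pts, ref_pts, thresholds=(0.05, 0.10)):
    """For each source point, find its nearest reference point (brute force)
    and return the fraction of points closer than each threshold (metres)."""
    # Pairwise (N, M) distance matrix; acceptable for small clouds only
    d = np.linalg.norm(source_pts[:, None, :] - ref_pts[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return {t: float((nearest < t).mean()) for t in thresholds}
```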
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-55-2023
C. R. Fol, A. Murtiyoso, D. Kükenbrink, F. Remondino, V. C. Griess
Abstract. Terrestrial 3D reconstruction is a research topic that has recently received significant attention in the forestry sector. It enables the acquisition of high-quality 3D data, which can be used not only to derive physical forest criteria such as tree positions and diameters, but also for more detailed analyses of ecological parameters such as habitat availability and biomass. However, several challenges must be addressed before this technology is fully integrated into forestry practice. The primary challenge is accurately georeferencing 3D data surveyed at the same location and placing them into a national projection reference system. Unfortunately, the forest canopy often obstructs the GNSS signal, which therefore cannot guarantee sub-metre accuracy. In this paper, we implement an indirect georeferencing methodology based on spheres with known coordinates placed at the forest’s edge, where GNSS reception is more reliable and accurate than under the canopy. We evaluated its performance through three analyses that confirmed the validity of our approach: the accuracy of the TLS point cloud georeferenced with our method is at the centimetre level (4.7 cm), whereas mobile scanning methods demonstrate accuracy within the decimetre range but still below a metre. Additionally, we have initiated the analysis of a potential future application for mixed reality headsets, which could enable real-time acquisition and visualisation of 3D data.
Title: TERRESTRIAL 3D MAPPING OF FORESTS: GEOREFERENCING CHALLENGES AND SENSORS COMPARISONS
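Indirect georeferencing from targets with known coordinates comes down to estimating a rigid transform between matched sphere centres in the scanner frame and the national reference frame. A standard least-squares (Kabsch/SVD) sketch is shown below; the authors' pipeline is not detailed in the abstract and may differ, e.g. by also estimating scale.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the Kabsch algorithm (SVD of the cross-covariance matrix)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With at least three non-collinear sphere centres, `R` and `t` map every scanner-frame point into the target reference system.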
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-1-2023
O. C. Bayrak, F. Remondino, M. Uzar
Abstract. Urban landscapes are characterized by a multitude of diverse objects, each bearing unique significance for urban management and development. With the rapid evolution and deployment of Unmanned Aerial Vehicle (UAV) technologies, the 3D surveying of urban areas through high-resolution point clouds and orthoimages has become more feasible. This technological leap enhances our capacity to comprehensively capture and analyze urban spaces. This contribution introduces a new urban dataset, called YTU3D, which covers an area of approximately 2 km² and encompasses 45 distinct classes. Notably, YTU3D exceeds the class diversity of existing datasets, enhancing its suitability for detailed urban analysis tasks. The paper also presents the application of three popular deep learning methods to 3D semantic segmentation, along with a multi-level multi-resolution (MLMR) integration. Significantly, our work marks the first application of deep learning with MLMR in the literature and shows that an MLMR approach can improve classification accuracy. The YTU3D dataset and research findings are publicly available at https://github.com/3DOM-FBK/YTU3D.
Title: A NEW DATASET AND METHODOLOGY FOR URBAN-SCALE 3D POINT CLOUD CLASSIFICATION
Pub Date: 2023-10-19 | DOI: 10.5194/isprs-archives-xlviii-1-w3-2023-85-2023
P. Kramarczyk, B. Hejmanowska
Abstract. The article discusses a method for classifying land cover types in rural areas using a trained neural network. The focus is on distinguishing agriculturally cultivated areas and differentiating bare soil from quarry areas, a distinction not present in publicly available databases such as CORINE, UrbanAtlas, EuroSAT, or BigEarthNet. The research involves training a neural network on multi-temporal patches to classify Sentinel-2 images rapidly. This approach allows automated monitoring of cultivated areas, determination of periods when bare soil is vulnerable to erosion, and identification of open-pit areas with spectral characteristics similar to bare soil. After training, the U-Net network achieved an average overall accuracy (OA) of 90% in the test areas, highlighting the importance of using OA, rather than ACC, for multi-class classifications. Analysis of our main classes revealed high accuracy: 99.01% for quarries, 92.3% for bare soil, and an average of 94.8% for annual crops, demonstrating the model's capability to differentiate between crops at various growth stages and to assess land cover categories effectively.
Title: UNET NEURAL NETWORK IN AGRICULTURAL LAND COVER CLASSIFICATION USING SENTINEL-2
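The OA-versus-per-class-accuracy distinction drawn above can be made concrete with a confusion matrix; the counts below are invented for illustration, not taken from the paper.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy (OA): correctly classified samples over all samples."""
    cm = np.asarray(cm, dtype=float)
    return cm.trace() / cm.sum()

def per_class_accuracy(cm):
    """Per-class (producer's) accuracy: diagonal over row totals."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Invented 2-class confusion matrix (rows: reference, columns: prediction)
cm = np.array([[90, 10],
               [20, 80]])
```

With imbalanced classes the two views diverge: OA can stay high while a minority class scores poorly, which is why reporting both is informative.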