Efficient Calculation of Multi-Scale Features for MMS Point Clouds
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-145-2024
Keita Hiraoka, G. Takahashi, Hiroshi Masuda
Abstract. Point clouds acquired by a Mobile Mapping System (MMS) are useful for creating 3D maps that can be used for autonomous driving and infrastructure development. However, many applications require semantic labels for each point of the point cloud, and manual labeling is very time-consuming and expensive. There is therefore a strong need for methods that assign semantic labels automatically. For automatic labeling tasks, classification methods using multiscale features are effective, because multiscale features capture roadside objects at their various scales. Multiscale features are calculated from the points inside spheres of multiple radii centered at each point of the point cloud. To obtain multiscale features that are useful for classifying MMS point clouds, features must be calculated with relatively large radii. However, when multiscale features are calculated over such wide neighborhoods, existing methods such as the kd-tree require unacceptably long computation times for neighbor search. In this paper, we propose a method to calculate multiscale features in practical time for semantic labeling of large-scale point clouds. In our method, an MMS point cloud is first divided into small spherical regions; then a radius search with multiscale radii is performed, and multiscale features are calculated from the resulting neighbor points. Our experimental results showed that our method achieves significantly faster computation than conventional methods and that multiscale features can be calculated from large-scale point clouds in practical time.
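The abstract names the kd-tree baseline but gives no implementation details. As a point of reference, the following is a minimal sketch of that conventional baseline: covariance-based features (linearity, planarity, and sphericity are common choices) computed from radius neighborhoods at several scales. The radii and the feature set are illustrative assumptions, not the authors' configuration, and the paper's contribution is precisely to replace this global search with a faster per-region scheme.

```python
# Baseline multiscale covariance features via kd-tree radius search.
# Radii and feature set are illustrative; the paper's method replaces
# this global kd-tree search with a spherical-region subdivision.
import numpy as np
from scipy.spatial import cKDTree

def multiscale_features(points, radii=(0.1, 0.5, 2.0)):
    """points: (N, 3) array; returns an (N, 3 * len(radii)) feature array."""
    tree = cKDTree(points)
    feats = []
    for r in radii:
        neighbors = tree.query_ball_point(points, r)  # slow for large r
        f = np.zeros((len(points), 3))
        for i, idx in enumerate(neighbors):
            if len(idx) < 3:
                continue  # too few points to estimate a covariance
            cov = np.cov(points[idx].T)
            lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
            if lam[0] <= 1e-12:
                continue  # degenerate neighborhood (coincident points)
            l1, l2, l3 = lam
            f[i] = [(l1 - l2) / l1,   # linearity
                    (l2 - l3) / l1,   # planarity
                    l3 / l1]          # sphericity
        feats.append(f)
    return np.hstack(feats)
```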
{"title":"Efficient Calculation of Multi-Scale Features for MMS Point Clouds","authors":"Keita Hiraoka, G. Takahashi, Hiroshi Masuda","doi":"10.5194/isprs-archives-xlviii-2-2024-145-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-145-2024","url":null,"abstract":"Abstract. Point clouds acquired by Mobile Mapping System (MMS) are useful for creating 3D maps that can be used for autonomous driving and infrastructure development. However, many applications require semantic labels to each point of the point clouds, and the manual labeling process is very time consuming and expensive. Therefore, there is a strong need to develop a method to automatically assigning semantic labels. For automatic labeling tasks, classification methods using multiscale features are effective because multiscale features include features of various scales of roadside objects. Multiscale features are calculated using points inside spheres of multiscale radii centered at each point in a point cloud. When calculating multiscale features that are useful for classifying MMS point clouds, it is necessary to calculate features using relatively large radii. However, when calculating multiscale features using wide range of neighbor points, existing methods, such as kd-tree, require unacceptably long computation time for neighbor search. In this paper, we propose a method to calculate multiscale features in practical time for semantic labeling of large-scale point clouds. In our method, an MMS point cloud is first divided into small spherical regions. Then, radius search using multiscale radii is performed, and multiscale features are calculated using those neighbor points. Our experimental results showed that our method achieved significantly faster computational performance than conventional methods, and multiscale features could be calculated from large-scale point clouds in practical time.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"46 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141355329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UAV-based LiDAR Bathymetry at an Alpine Mountain Lake
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-341-2024
Katja Richter, D. Mader, H. Sardemann, Hans-Gerd Maas
Abstract. LiDAR bathymetry provides an efficient and comprehensive way to capture the topography of water bodies in shallow areas. However, the penetration depth of this measurement method into the water column is limited by the water medium and its turbidity, so the bottom topography of deeper waters can only be detected to a limited extent. The analyzable water depth can be increased with extended evaluation methods, specifically full-waveform stacking. So far, however, such methods have only been investigated for water depths of up to 3.50 m because of water turbidity. In this article, the potential of these extended data processing methods is investigated on an alpine mountain lake with low turbidity and thus a large analyzable water depth. Compared to standard data processing, the penetration depth was increased significantly, by 58%. In addition, methods for depth-resolved determination of water turbidity parameters from LiDAR bathymetry data were successfully tested.
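Full-waveform stacking is only named, not specified, in the abstract. The toy sketch below shows the basic idea under simple assumptions: averaging the waveforms of spatially neighboring laser pulses raises the signal-to-noise ratio so that a weak bottom return, invisible in a single waveform, can rise above the noise floor. The sample windows, threshold, and radius are all illustrative assumptions, not the authors' processing chain.

```python
# Toy full-waveform stacking: average neighboring waveforms, then look
# for a bottom return exceeding the noise floor. Window positions and
# the k-sigma threshold are assumptions, not the paper's parameters.
import numpy as np
from scipy.spatial import cKDTree

def stack_and_detect(positions, waveforms, radius=1.0, k_sigma=4.0):
    """positions: (N, 2) pulse footprints; waveforms: (N, T) amplitudes.
    Returns the sample index of a detected bottom return per pulse (-1 if none)."""
    tree = cKDTree(positions)
    detections = np.full(len(waveforms), -1)
    for i, idx in enumerate(tree.query_ball_point(positions, radius)):
        stacked = waveforms[idx].mean(axis=0)   # SNR grows ~ sqrt(len(idx))
        noise = stacked[-50:]                   # assume the tail is noise-only
        threshold = noise.mean() + k_sigma * noise.std()
        search = stacked[50:-50]                # skip surface return and tail
        peaks = np.nonzero(search > threshold)[0]
        if peaks.size:
            detections[i] = 50 + peaks[np.argmax(search[peaks])]
    return detections
```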
{"title":"UAV-based LiDAR Bathymetry at an Alpine Mountain Lake","authors":"Katja Richter, D. Mader, H. Sardemann, Hans-Gerd Maas","doi":"10.5194/isprs-archives-xlviii-2-2024-341-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-341-2024","url":null,"abstract":"Abstract. LiDAR bathymetry provides an efficient and comprehensive way to capture the topography of water bodies in shallow water areas. However, the penetration depth of this measurement method into the water column is limited by the medium water and water turbidity, resulting in a limited detectability of the bottom topography in deeper waters. An increase of the analyzable water depth is possible by the use of extended evaluation methods, in detail full-waveform stacking methods. So far, however, this has only been investigated for water depths of up to 3.50 m due to water turbidity. In this article, the potential of these extended data processing methods is investigated on an alpine mountain lake with low water turbidity and thus high analyzable water depth. Compared to the standard data processing, the penetration depth could be significantly increased by 58%. In addition, methods for depth-resolved water turbidity parameter determination on the basis of LiDAR bathymetry data were successfully tested.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"16 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141360704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing Mining Ventilation Using 3D Technologies
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-427-2024
P. Trybała, Simone Rigon, F. Remondino, A. Banasiewicz, Adam Wróblewski, Arkadiusz Macek, P. Kujawa, K. Romanczukiewicz, Carlos Redondo, Fran Espada
Abstract. Ventilation systems are an important part of industrial facility ecosystems. Creating proper working conditions for humans is crucial, especially at hazardous sites where various gases are present, such as underground mines. Combined with the vast amount of space to be ventilated in large mines, designing and maintaining such a system is challenging and costly. To alleviate these issues, the EIT-RM project VOT3D (Ventilation Optimizing Technology based on 3D scanning) proposes advanced airflow modeling of underground tunnel networks, using computational fluid dynamics (CFD) simulations together with modern surveying and 3D modeling approaches to reverse engineer a reliable geometric model of the mine and estimate the 3D airflow field inside it. In this paper, we present the challenges of this task and a workflow proposed to address them. An example from an active industrial mine in Poland serves as the basis for experimental data processing with the full, highly automated procedure. Developments and results in underground mobile mapping (with a drone and a handheld system), point cloud processing and filtering, surface reconstruction, and CFD modeling are presented. The detailed results of the airflow field estimation show the advantages of the proposed solution and suggest high practical usefulness.
{"title":"Optimizing Mining Ventilation Using 3D Technologies","authors":"P. Trybała, Simone Rigon, F. Remondino, A. Banasiewicz, Adam Wróblewski, Arkadiusz Macek, P. Kujawa, K. Romanczukiewicz, Carlos Redondo, Fran Espada","doi":"10.5194/isprs-archives-xlviii-2-2024-427-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-427-2024","url":null,"abstract":"Abstract. Ventilation systems constitute an important piece of the industrial facility ecosystems. Creating proper working environmental conditions for humans is crucial, especially in hazardous sites with presence of various gases, such as underground mines. Combined with the vast amount of space to be ventilated in large mines, designing and maintaining such a system is challenging and costly. To alleviate these issues, the EIT-RM project VOT3D (Ventilation Optimizing Technology based on 3D scanning) proposes conducting advanced airflow modeling in the underground tunnel networks, utilizing computational fluid dynamics (CFD) simulations, modern surveying and 3D modeling approaches to reverse engineer a reliable geometric model of the mine and estimate the 3D airflow field inside it. In this paper, we present the challenges to be solved in this task and the proposed workflow to address them. An example related to an active industrial mine in Poland is reported as a basis for performing experimental data processing using the full, highly automatized procedure. Developments and results of underground mobile mapping (with a drone and a handheld system), point cloud processing and filtering, surface reconstruction and CFD modeling are presented. The detailed results of airflow field estimation show the advantages of the proposed solution and promise its high practical usefulness.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"100 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141359095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multitemporal Structure-from-Motion: A Flexible Tool to Cope with Aerial Blocks in Changing Mountain Environment
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-99-2024
N. Genzano, D. Fugazza, R. Eskandari, M. Scaioni
Abstract. Structure-from-Motion (SfM) and Multi-View Stereo matching applied to aerial images can successfully derive dense point clouds for analysing changes in the mountain environment, which is continuously reshaped by natural processes. Comparing multiple datasets requires setting up a stable reference system, a task generally accomplished by means of ground control points (GCPs). On the other hand, positioning GCPs may sometimes be difficult in mountains. To cope with this drawback, an approach termed Multitemporal SfM (MSfM) is presented: multiple blocks are oriented together within a single SfM project, where GCPs are used in only one epoch to establish the absolute datum. Accurate coregistration between different epochs depends on the automatic extraction of tie points in stable areas. To verify the application of MSfM in real cases, this paper presents three case studies using different types of photogrammetric data, including images from drones and manned aircraft. Applications to glacier monitoring and mountain river erosion are included.
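The coregistration rests on tie points extracted in stable areas. As a hedged illustration of what such a step computes (not the authors' formulation, which orients all epochs jointly inside one SfM bundle adjustment), the sketch below estimates the rigid transform aligning one epoch's stable-area tie points to another's via the closed-form Kabsch/Procrustes solution.

```python
# Rigid alignment of tie points from two epochs (Kabsch/Procrustes).
# A simplification: MSfM solves the alignment jointly inside one bundle
# adjustment, whereas this aligns already-triangulated tie points.
import numpy as np

def rigid_align(src, dst):
    """src, dst: (N, 3) corresponding tie points in stable areas.
    Returns R (3x3) and t (3,) such that R @ src[i] + t ~ dst[i]."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```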
{"title":"Multitemporal Structure-from-Motion: A Flexible Tool to Cope with Aerial Blocks in Changing Mountain Environment","authors":"N. Genzano, D. Fugazza, R. Eskandari, M. Scaioni","doi":"10.5194/isprs-archives-xlviii-2-2024-99-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-99-2024","url":null,"abstract":"Abstract. The application of Structure-from-Motion (SfM) and Multi-View-Stereo matching with aerial images can be successfully used for deriving dense point clouds to analyse changes in the mountain environment, which is characterized by changes due to the action of natural process. The comparison of multiple datasets requires to setup a stable reference system, task that is generally implemented by means of ground control points (GCPs). On the other hand, their positioning may be sometimes difficult in mountains. To cope with this drawback an approach termed as Multitemporal SfM (MSfM) is presented: multiple blocks are oriented together within a unique SfM project, where GCPs are used in only one epoch for establishing the absolute datum. Accurate coregistration between different epochs depends on the automatic extraction of tie points in stable areas. To verify the application of MSfM in real cases, this paper presents three case studies where different types of photogrammetric data are adopted, including images from drones and manned aircrafts. Applications to glacier and mountain river erosion are entailed.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"8 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141358072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Separate and Integrated Data Processing for the 3D Reconstruction of a Complex Architecture
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-249-2024
M. Medici, G. Perda, Andrea Sterpin, E. M. Farella, Stefano Settimo, F. Remondino
Abstract. In the last few years, data fusion has been an active research topic because of the expected advantages of exploiting and combining different but complementary techniques for 3D documentation. The data fusion process consists of merging intrinsically different data coming from different sensors and platforms to produce complete, coherent, and precise 3D reconstructions. Although extensive research has been dedicated to this task, many gaps remain in the integration process, and in several cases the quality of the results is hardly sufficient. This is especially evident when the integration occurs at a late stage, e.g., when merging the results of separate data processing. New opportunities are emerging with the possibility, offered by some proprietary tools, of jointly processing heterogeneous data, particularly image- and range-based data. This article investigates the benefits of data integration at different processing levels: raw, middle, and high. The experiments explore, in particular, the results of the integration on large and complex architectures.
{"title":"Separate and Integrated Data Processing for the 3D Reconstruction of a Complex Architecture","authors":"M. Medici, G. Perda, Andrea Sterpin, E. M. Farella, Stefano Settimo, F. Remondino","doi":"10.5194/isprs-archives-xlviii-2-2024-249-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-249-2024","url":null,"abstract":"Abstract. In the last few years, data fusion has been an active research topic for the expected advantages of exploiting and combining different but complementary techniques for 3D documentation. The data fusion process consists of merging data coming from different sensors and platforms, intrinsically different, to produce complete, coherent, and precise 3D reconstructions. Although extensive research has been dedicated to this task, we still have many gaps in the integration process, and the quality of the results is hardly sufficient in several cases. This is especially evident when the integration occurs in a later stage, e.g., merging the results of separate data processing. New opportunities are emerging, with the possibility offered by some proprietary tools to jointly process heterogeneous data, particularly image and range-based data. The article investigates the benefits of data integration at different processing levels: raw, middle, and high levels. The experiments are targeted to explore, in particular, the results of the integration on large and complex architectures.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"29 25","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141355504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accuracy Assessment of UAV LiDAR Compared to Traditional Total Station for Geospatial Data Collection in Land Surveying Contexts
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-421-2024
Rami Tamimi, C. Toth
Abstract. Accurate surveying of vegetated areas presents significant challenges due to obstructions that obscure visibility and compromise the precision of measurements. This paper introduces a methodology employing the DJI Zenmuse L2 Light Detection and Ranging (LiDAR) sensor mounted on a Matrice 350 RTK drone. The DJI Zenmuse L2 excels at capturing detailed terrain data under heavy foliage: it collects 1.2 million points per second and records up to five returns, enhancing its ability to detect multiple surface responses from a single laser pulse. In a case study conducted near a creek heavily obscured by tree coverage, traditional aerial imaging techniques are found insufficient for capturing critical topographic features such as the creek banks. Employing LiDAR, the study aims to map these obscured features effectively. The collected data is processed using DJI Terra software, which supports accurate projection and analysis of the LiDAR data. To validate the accuracy of the data collected from the LiDAR sensor, traditional survey methods are deployed to ground-truth the data and provide an accuracy assessment. Ground control points (GCPs) are established using a GNSS receiver to provide geodetic coordinates, which then assist in setting up a total station. The total station measures vertical and horizontal angles, as well as the slope distance from the instrument to positions on the ground underneath the tree coverage. These measurements serve as checkpoints to validate the accuracy of the LiDAR data, ensuring the reliability of the survey. The paper discusses the potential of integrating LiDAR data with traditional surveying data, which is expected to enhance the ability of surveyors to map environmental features efficiently and accurately in complex and vegetated terrains. Through detailed procedural descriptions and expected outcomes, the study aims to provide valuable insights into the strategic application of geospatial technologies to overcome common surveying challenges.
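The validation described, total-station checkpoints compared against the LiDAR ground surface, reduces to a standard checkpoint comparison. Below is a minimal sketch under two stated assumptions: each checkpoint is compared to the near-lowest LiDAR returns within a small horizontal radius, and vertical error is summarized as RMSE. Neither the 0.5 m radius nor the percentile ground proxy comes from the paper.

```python
# Vertical accuracy check of a LiDAR cloud against surveyed checkpoints.
# The 0.5 m radius and lowest-return ground proxy are assumptions; a
# production workflow would interpolate a classified ground model instead.
import numpy as np
from scipy.spatial import cKDTree

def vertical_accuracy(lidar_xyz, checkpoints_xyz, radius=0.5):
    """Returns per-checkpoint vertical errors dz and their RMSE."""
    tree = cKDTree(lidar_xyz[:, :2])
    dz = []
    for cp in checkpoints_xyz:
        idx = tree.query_ball_point(cp[:2], radius)
        if not idx:
            continue  # no LiDAR coverage near this checkpoint
        z_ground = np.percentile(lidar_xyz[idx, 2], 5)  # robust "lowest" return
        dz.append(z_ground - cp[2])
    dz = np.asarray(dz)
    return dz, float(np.sqrt(np.mean(dz ** 2)))
```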
{"title":"Accuracy Assessment of UAV LiDAR Compared to Traditional Total Station for Geospatial Data Collection in Land Surveying Contexts","authors":"Rami Tamimi, C. Toth","doi":"10.5194/isprs-archives-xlviii-2-2024-421-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-421-2024","url":null,"abstract":"Abstract. Accurate surveying of vegetated areas presents significant challenges due to obstructions that obscure visibility and compromise the precision of measurements. This paper introduces a methodology employing the DJI Zenmuse L2 Light Detection and Ranging (LiDAR) sensor, which is mounted on a Matrice 350 RTK drone. The DJI Zenmuse L2 sensor excels at capturing detailed terrain data under heavy foliage, capable of collecting 1.2 million points per second and offering five returns, thus enhancing the sensor's ability to detect multiple surface responses from a single laser pulse. In a case study conducted near a creek heavily obscured by tree coverage, traditional aerial imaging techniques are found insufficient for capturing critical topographic features, such as the creek banks. Employing LiDAR, the study aims to map these obscured features effectively. The collected data is processed using DJI Terra software, which supports the accurate projection and analysis of the LiDAR data. To validate the accuracy of the data collected from the LiDAR sensor, traditional survey methods are deployed to ground truth the data and provide an accuracy assessment. Ground control points (GCPs) are established using a GNSS receiver to provide geodetic coordinates, which then assist in setting up a total station. This total station measures vertical and horizontal angles, as well as the slope distance from the instrument to positions underneath the tree coverage on the ground. These measurements serve as checkpoints to validate the accuracy of the LiDAR data, thus ensuring the reliability of the survey. This paper discusses the potential of integrating LiDAR data with traditional surveying data, which is expected to enhance the ability of surveyors to map environmental features efficiently and accurately in complex and vegetated terrains. Through detailed procedural descriptions and expected outcomes, the study aims to provide valuable insights into the strategic application of geospatial technologies to overcome common surveying challenges.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"100 27","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141359088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile Phone Based Indoor Mapping
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-415-2024
Christoph Strecha, Martin Rehak, Davide Cucci
Abstract. We present a mobile phone scanning solution that offers a workflow for scanning not only small spaces, where drift can be neglected, but also larger spaces, where drift becomes a major accuracy issue. LiDAR and image data are combined to build 3D representations of indoor spaces. The paper focuses on compensating drift in larger scans on the mobile phone by using AutoTags detections. We show that these detections can also be used to merge multiple independent scans.
{"title":"Mobile Phone Based Indoor Mapping","authors":"Christoph Strecha, Martin Rehak, Davide Cucci","doi":"10.5194/isprs-archives-xlviii-2-2024-415-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-415-2024","url":null,"abstract":"Abstract. We presented a mobile phone scanning solution that offers a workflow for scanning not only small spaces, where drift can be neglected, but also larger spaces where it becomes a major accuracy issue. The LiDAR and image data is combined to build 3D representations of indoor spaces. The paper does focus on the drift compensation for larger scans on the mobile phone by using AutoTags detections. We show that those can also be used to combine scans from multiple independent scans.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"24 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141355733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Techniques for Vision-Language Segmentation Model in Remote Sensing
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-203-2024
Yuting Lin, Kumiko Suzuki, Shinichiro Sogo
Abstract. Traditional semantic segmentation models often struggle with poor generalizability in zero-shot scenarios, such as recognizing attributes unseen in the training labels. On the other hand, vision-language models (VLMs) have shown promise in improving performance on zero-shot tasks by leveraging semantic information from textual inputs and fusing it with visual features. However, existing VLM-based methods do not perform as effectively on remote sensing data because such data is lacking in their training datasets. In this paper, we introduce a two-stage fine-tuning approach for a VLM-based segmentation model using a large remote sensing image-caption dataset, which we created with an existing image-captioning model. Additionally, we propose a modified decoder and a visual prompt technique based on a saliency map to enhance segmentation results. Through these methods, we achieve superior segmentation performance on remote sensing data, demonstrating the effectiveness of our approach.
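The two-stage schedule is not detailed in the abstract. A common pattern such approaches follow, shown here purely as an assumption-laden sketch rather than the authors' recipe, is to first train only the decoder on the new domain with the pretrained backbone frozen, then unfreeze everything for end-to-end fine-tuning at a lower learning rate. The `model.backbone` and `model.decoder` attributes are hypothetical names for this illustration.

```python
# Generic two-stage fine-tuning schedule (sketch; the paper's actual
# model, data handling, and stage boundaries are not in the abstract).
import torch

def two_stage_finetune(model, loader, loss_fn, epochs=(5, 10)):
    # Stage 1: freeze the pretrained backbone, adapt only the decoder.
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.AdamW(model.decoder.parameters(), lr=1e-4)
    run_epochs(model, loader, loss_fn, opt, epochs[0])

    # Stage 2: unfreeze everything, fine-tune end to end at a lower LR.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
    run_epochs(model, loader, loss_fn, opt, epochs[1])

def run_epochs(model, loader, loss_fn, opt, n):
    model.train()
    for _ in range(n):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()
```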
{"title":"Practical Techniques for Vision-Language Segmentation Model in Remote Sensing","authors":"Yuting Lin, Kumiko Suzuki, Shinichiro Sogo","doi":"10.5194/isprs-archives-xlviii-2-2024-203-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-203-2024","url":null,"abstract":"Abstract. Traditional semantic segmentation models often struggle with poor generalizability in zero-shot scenarios such as recognizing attributes unseen in the training labels. On the other hands, language-vision models (VLMs) have shown promise in improving performance on zero-shot tasks by leveraging semantic information from textual inputs and fusing this information with visual features. However, existing VLM-based methods do not perform as effectively on remote sensing data due to the lack of such data in their training datasets. In this paper, we introduce a two-stage fine-tuning approach for a VLM-based segmentation model using a large remote sensing image-caption dataset, which we created using an existing image-caption model. Additionally, we propose a modified decoder and a visual prompt technique using a saliency map to enhance segmentation results. Through these methods, we achieve superior segmentation performance on remote sensing data, demonstrating the effectiveness of our approach.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"73 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141359652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ground vehicle path planning on Uneven terrain Using UAV Measurement point clouds
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-321-2024
Kei Otomo, Kiichiro Ishikawa
Abstract. The objective of this study is to develop a system that supports rapid ground vehicle activities by planning safe travel routes from point clouds of wide-area uneven terrain measured using UAVs. However, fast path planning is difficult in complex environments such as large, uneven terrains. Therefore, this paper proposes a new method based on the RRT algorithm that can plan paths quickly even in complex environments. In the proposed method, narrow areas that are difficult for an ordinary RRT to explore are first identified in advance, and nodes are placed in these areas to guide the search. During the RRT search, the tree is extended via these guide nodes so that narrow areas are traversed efficiently. To validate the proposed method, a comparison was made with RRT and RRT-Connect in two environments containing narrow areas. The results show that the proposed method has a higher route discovery capability, uses at least two times fewer search nodes, and plans paths five times faster than the other RRT variants.
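The abstract gives the idea but not the algorithm. Below is a hedged 2D sketch of an RRT whose sampler is biased toward precomputed guide nodes; how the paper identifies narrow areas and places guides, and how it extends the tree through them, is not specified, so here the guides are simply inputs and the bias probabilities are invented parameters.

```python
# RRT with guide-node biasing (2D sketch). Guide nodes are assumed given;
# the paper's narrow-area detection step is not reproduced here, and the
# (0, 100) sampling domain and bias probabilities are illustrative.
import math
import random

def guided_rrt(start, goal, guides, collision_free, step=0.5,
               iters=5000, guide_bias=0.3, goal_bias=0.1):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        r = random.random()
        if r < goal_bias:
            sample = goal
        elif r < goal_bias + guide_bias and guides:
            sample = random.choice(guides)  # pull the tree through narrow areas
        else:
            sample = (random.uniform(0, 100), random.uniform(0, 100))
        i = min(range(len(nodes)), key=lambda j: dist(nodes[j], sample))
        new = step_toward(nodes[i], sample, step)
        if collision_free(nodes[i], new):
            parent[len(nodes)] = i
            nodes.append(new)
            if dist(new, goal) < step:
                return backtrack(nodes, parent, len(nodes) - 1)
    return None  # no path found within the iteration budget

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_toward(a, b, step):
    d = dist(a, b)
    if d <= step:
        return b
    return (a[0] + step * (b[0] - a[0]) / d, a[1] + step * (b[1] - a[1]) / d)

def backtrack(nodes, parent, i):
    path = []
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```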
{"title":"Ground vehicle path planning on Uneven terrain Using UAV Measurement point clouds","authors":"Kei Otomo, Kiichiro Ishikawa","doi":"10.5194/isprs-archives-xlviii-2-2024-321-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-2-2024-321-2024","url":null,"abstract":"Abstract. The objective of this study is to develop a system to support rapid ground vehicle activities by planning safe travel routes for ground vehicles from point clouds of wide-area uneven terrain environments measured using UAVs. However, fast path planning is difficult in complex environments such as large, uneven terrain environments. Therefore, this paper proposes a new RRT method based on the RRT algorithm that can perform fast path planning, even in complex environments. In the proposed method, narrow areas that are difficult to be explored by ordinary RRTs are first identified in advance, and nodes are placed in these areas to guide the search. When searching with RRTs, the tree is extended via these guide nodes to efficiently traverse the narrow area. In the validation of the proposed method, a comparison was made with RRT and RRT-Connect in two environments, including narrow areas. The results show that the proposed method has a higher route discovery capability, at least two times fewer search nodes and five times faster path planning capability than other RRTs.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"56 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141358120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart Bridge Damage Assessment through Integrated Multi-Sensor Fusion Vehicle Monitoring
Aminreza Karamoozian, Masood Varshosaz, Amirhossein Karamoozian, Huxiong Li, Zhaoxi Fang
Pub Date : 2024-05-16 DOI: 10.5194/isprs-archives-xlviii-1-2024-937-2024
Abstract. This study explores the efficacy of vehicle-assisted monitoring for bridge damage assessment, emphasizing the integration of diverse sensor data sources. A novel method utilizing a deep neural network is proposed that fuses fixed sensors on bridges with onboard vehicle sensors for damage assessment. The network offers scalability, robustness, and implementability, accommodating various measurement types while handling noise and dynamic loading conditions. The main novel aspect of our work is its ability to extract damage-sensitive features without signal preprocessing, for future bridge health monitoring systems. Through numerical evaluations under realistic operational conditions, the proposed method demonstrates the capability to detect subtle damage under varying traffic conditions. The findings underscore the importance of integrating vehicle and bridge sensor data for reliable damage assessment, and strategies are recommended for optimal monitoring implementation by road authorities and bridge owners.
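The network architecture itself is not described in the abstract. As a hedged illustration of multi-sensor fusion on raw signals (no preprocessing), the sketch below encodes bridge-mounted and vehicle-mounted sensor channels with separate 1D-convolutional branches and fuses the embeddings into a single damage score. All channel counts and layer widths are assumptions, not the authors' design.

```python
# Illustrative fusion network for raw bridge + vehicle sensor windows.
# Channel counts, layer widths, and the scalar damage score are
# assumptions; the paper's actual architecture is not in the abstract.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, bridge_ch=8, vehicle_ch=4):
        super().__init__()
        def encoder(ch):  # 1D conv encoder over raw time-series windows
            return nn.Sequential(
                nn.Conv1d(ch, 32, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.bridge_enc = encoder(bridge_ch)
        self.vehicle_enc = encoder(vehicle_ch)
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                  nn.Linear(64, 1))  # damage score

    def forward(self, bridge_sig, vehicle_sig):
        # bridge_sig: (B, bridge_ch, T); vehicle_sig: (B, vehicle_ch, T)
        z = torch.cat([self.bridge_enc(bridge_sig),
                       self.vehicle_enc(vehicle_sig)], dim=1)
        return self.head(z)

# e.g. FusionNet()(torch.randn(2, 8, 1024), torch.randn(2, 4, 1024)) -> (2, 1)
```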
{"title":"Smart Bridge Damage Assessment through Integrated Multi-Sensor Fusion Vehicle Monitoring","authors":"Aminreza Karamoozian, Masood Varshosaz, Amirhossein Karamoozian, Huxiong Li, Zhaoxi Fang","doi":"10.5194/isprs-archives-xlviii-1-2024-937-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-2024-937-2024","url":null,"abstract":"Abstract. This study explores the efficacy of vehicle-assisted monitoring for bridge damage assessment, emphasizing the integration of diverse sensor data sources. A novel method utilizing a deep neural network is proposed, enabling the fusion of fixed sensors on bridges and onboard vehicle sensors for damage assessment. The network offers scalability, robustness, and implementability, accommodating various measurement types while handling noise and dynamic loading conditions. The main novel aspect of our work is its ability to extract damage-sensitive features without signal preprocessing for future bridge health monitoring systems. Through numerical evaluations, considering realistic operational conditions, the proposed method demonstrates the capability to detect subtle damage under varying traffic conditions. Findings underscore the importance of integrating vehicle and bridge sensor data for reliable damage assessment, recommending strategies for optimal monitoring implementation by road authorities and bridge owners.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"1 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140970006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}