Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-241-2024
Scott McAvoy, B. Tanduo, A. Spreafico, F. Chiabrando, D. Rissolo, J. Ristevski, F. Kuester
Abstract. Photogrammetry and LiDAR have become increasingly accessible methods for documenting Cultural Heritage sites. Academic and government agencies recognize the utility of high-resolution 3D models in supporting long-term asset management through visualization, conservation planning, and change detection. Though detailed models can be created with increasing ease, their potential for future use can be constrained by a lack of accompanying topographic data, by data collector skill level, and by incomplete recording of the key metadata and paradata that make such survey data useful to future endeavors. In this paper, informed by various international survey organizations and data archives, we present a framework for recording and communicating Cultural Heritage 3D metric survey data - with a focus on architecture - that first describes the data and metadata surveyors should include to enable reuse, and then communicates the expected utility of this data.
Title: An Archival Framework for Sharing of Cultural Heritage 3D Survey Data: OpenHeritage3D.org
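The framework is metadata-centric; as a rough illustration, a survey deposit's descriptive record might look like the following sketch (all field names and values are our own invention, not the OpenHeritage3D schema):

```python
import json

# A hypothetical survey-metadata record; field names are illustrative,
# not the OpenHeritage3D specification itself.
record = {
    "site": "Example Temple",
    "coordinate_system": "EPSG:32633",          # projected CRS of the deliverables
    "sensors": [{"type": "terrestrial LiDAR", "model": "unspecified"}],
    "ground_control": {"gcp_count": 8, "rmse_m": 0.012},
    "paradata": {                               # how and by whom data was collected
        "operator_experience": "professional survey team",
        "processing_software": "unspecified",
        "known_gaps": ["roof not scanned"],
    },
    "license": "CC BY 4.0",
}

serialized = json.dumps(record, indent=2)
assert json.loads(serialized)["ground_control"]["gcp_count"] == 8
```

A record like this travels with the point clouds and images, so that a future user can judge datum, accuracy, and coverage without contacting the original surveyors.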
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-379-2024
Vandita Shukla, Luca Morelli, F. Remondino, Andrea Micheli, D. Tuia, Benjamin Risse
Abstract. Wildlife research in both terrestrial and aquatic ecosystems now deploys drone technology for tasks such as monitoring, census counts and habitat analysis. Unlike camera traps, drones offer real-time flexibility in flight paths and camera views, making them ideal for capturing multi-view data on wildlife such as zebras or lions. With recent advances in 3D shape and pose estimation for animals, there is increasing interest in bringing 3D analysis from the ground to the sky by means of drones. The paper reports some activities of the EU-funded WildDrone project and performs, for the first time, 3D analyses of animals exploiting oblique drone imagery. Using parametric model fitting, we estimate the 3D shape and pose of animals from frames of a monocular RGB video. With the goal of appending metric information to parametric animal models using photogrammetric evidence, we propose a pipeline in which a point cloud reconstruction of the scene is used to scale and localize the animal within the 3D scene. Challenges, planned next steps and future directions are also reported.
Title: Towards Estimation of 3D Poses and Shapes of Animals from Oblique Drone Imagery
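The scale-recovery step can be illustrated with a toy computation: once the photogrammetric reconstruction is metric, a parametric animal mesh in arbitrary model units can be rescaled by the ratio of a metric reference length to its model-space counterpart (all numbers here are invented):

```python
import numpy as np

# Hypothetical model-space animal mesh vertices (arbitrary SfM-style units).
model_vertices = np.array([[0.0, 0.0, 0.0],
                           [2.0, 0.0, 0.0],
                           [1.0, 0.5, 1.0]])

# Body length measured in the metric photogrammetric point cloud (metres)
# versus the same length in model space.
metric_body_length = 2.4   # m, from the scaled scene reconstruction
model_body_length = np.linalg.norm(model_vertices[1] - model_vertices[0])

scale = metric_body_length / model_body_length
metric_vertices = model_vertices * scale   # mesh now expressed in metres

assert np.isclose(scale, 1.2)
assert np.isclose(np.linalg.norm(metric_vertices[1] - metric_vertices[0]), 2.4)
```

The same scale factor, combined with the animal's position in the reconstructed scene, localizes the metric mesh within the 3D scene.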
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-249-2024
M. Medici, G. Perda, Andrea Sterpin, E. M. Farella, Stefano Settimo, F. Remondino
Abstract. In the last few years, data fusion has been an active research topic because of the expected advantages of exploiting and combining different but complementary techniques for 3D documentation. The data fusion process consists of merging intrinsically different data from different sensors and platforms to produce complete, coherent, and precise 3D reconstructions. Although extensive research has been dedicated to this task, many gaps remain in the integration process, and in several cases the quality of the results is hardly sufficient. This is especially evident when the integration occurs at a later stage, e.g., when merging the results of separate data processing. New opportunities are emerging with the possibility, offered by some proprietary tools, of jointly processing heterogeneous data, particularly image- and range-based data. The article investigates the benefits of data integration at different processing levels: raw, middle, and high. The experiments explore, in particular, the results of the integration on large and complex architectures.
Title: Separate and Integrated Data Processing for the 3D Reconstruction of a Complex Architecture
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-99-2024
N. Genzano, D. Fugazza, R. Eskandari, M. Scaioni
Abstract. Structure-from-Motion (SfM) and Multi-View Stereo matching with aerial images can be successfully used to derive dense point clouds for analysing changes in the mountain environment, which is continuously reshaped by natural processes. Comparing multiple datasets requires setting up a stable reference system, a task generally accomplished by means of ground control points (GCPs). However, positioning GCPs may sometimes be difficult in mountains. To cope with this drawback, an approach termed Multitemporal SfM (MSfM) is presented: multiple blocks are oriented together within a single SfM project, where GCPs are used in only one epoch to establish the absolute datum. Accurate coregistration between different epochs depends on the automatic extraction of tie points in stable areas. To verify the application of MSfM in real cases, this paper presents three case studies adopting different types of photogrammetric data, including images from drones and manned aircraft. Applications to glaciers and mountain river erosion are included.
Title: Multitemporal Structure-from-Motion: A Flexible Tool to Cope with Aerial Blocks in Changing Mountain Environment
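Once epochs share a common datum, change can be quantified with cloud-to-cloud distances; a brute-force nearest-neighbour sketch on toy data (real surveys would use a k-d tree, and this is not the paper's workflow):

```python
import numpy as np

def nn_distances(epoch_a, epoch_b):
    """For each point of epoch_a, distance to its nearest neighbour in epoch_b."""
    # Brute force, O(N*M): fine for a sketch, too slow for real point clouds.
    diffs = epoch_a[:, None, :] - epoch_b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# Two toy "epochs" of a coregistered surface: epoch 2 has subsided by 0.5 m.
grid = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], float)
epoch1 = grid
epoch2 = grid - np.array([0.0, 0.0, 0.5])

d = nn_distances(epoch1, epoch2)
assert np.allclose(d, 0.5)   # uniform 0.5 m change detected
```

If the coregistration via stable-area tie points is poor, a systematic offset contaminates these distances, which is why the MSfM joint orientation matters.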
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-421-2024
Rami Tamimi, C. Toth
Abstract. Accurate surveying of vegetated areas presents significant challenges due to obstructions that obscure visibility and compromise the precision of measurements. This paper introduces a methodology employing the DJI Zenmuse L2 Light Detection and Ranging (LiDAR) sensor, which is mounted on a Matrice 350 RTK drone. The DJI Zenmuse L2 sensor excels at capturing detailed terrain data under heavy foliage, capable of collecting 1.2 million points per second and offering five returns, thus enhancing the sensor's ability to detect multiple surface responses from a single laser pulse. In a case study conducted near a creek heavily obscured by tree coverage, traditional aerial imaging techniques are found insufficient for capturing critical topographic features, such as the creek banks. Employing LiDAR, the study aims to map these obscured features effectively. The collected data is processed using DJI Terra software, which supports the accurate projection and analysis of the LiDAR data. To validate the accuracy of the data collected from the LiDAR sensor, traditional survey methods are deployed to ground truth the data and provide an accuracy assessment. Ground control points (GCPs) are established using a GNSS receiver to provide geodetic coordinates, which then assist in setting up a total station. This total station measures vertical and horizontal angles, as well as the slope distance from the instrument to positions underneath the tree coverage on the ground. These measurements serve as checkpoints to validate the accuracy of the LiDAR data, thus ensuring the reliability of the survey. This paper discusses the potential of integrating LiDAR data with traditional surveying data, which is expected to enhance the ability of surveyors to map environmental features efficiently and accurately in complex and vegetated terrains. 
Through detailed procedural descriptions and expected outcomes, the study aims to provide valuable insights into the strategic application of geospatial technologies to overcome common surveying challenges.
Title: Accuracy Assessment of UAV LiDAR Compared to Traditional Total Station for Geospatial Data Collection in Land Surveying Contexts
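The checkpoint geometry described above reduces to simple trigonometry: horizontal angle, zenith angle and slope distance yield a local offset from the instrument, and checkpoint elevations then give a vertical RMSE for the LiDAR. A minimal sketch with invented values:

```python
import math

def checkpoint_from_total_station(station, h_angle_deg, zenith_deg, slope_dist):
    """Local ENU coordinates of a point observed from a total station."""
    z = math.radians(zenith_deg)
    h = math.radians(h_angle_deg)
    horiz = slope_dist * math.sin(z)   # horizontal distance
    de = horiz * math.sin(h)           # east offset
    dn = horiz * math.cos(h)           # north offset
    du = slope_dist * math.cos(z)      # up offset
    return (station[0] + de, station[1] + dn, station[2] + du)

# Instrument at the origin; a zenith angle of 90 degrees is a level sight.
e, n, u = checkpoint_from_total_station((0.0, 0.0, 0.0), 90.0, 90.0, 10.0)
assert abs(e - 10.0) < 1e-9 and abs(n) < 1e-9 and abs(u) < 1e-9

# Vertical RMSE of hypothetical LiDAR elevations against the checkpoints.
lidar_z = [10.03, 9.98, 10.05]
check_z = [10.00, 10.00, 10.00]
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(lidar_z, check_z)) / len(lidar_z))
assert rmse < 0.05
```

All numbers are illustrative; the study's actual checkpoint counts and residuals come from its field campaign.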
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-427-2024
P. Trybała, Simone Rigon, F. Remondino, A. Banasiewicz, Adam Wróblewski, Arkadiusz Macek, P. Kujawa, K. Romanczukiewicz, Carlos Redondo, Fran Espada
Abstract. Ventilation systems are an important part of industrial facility ecosystems. Creating proper working conditions for humans is crucial, especially in hazardous sites where various gases are present, such as underground mines. Combined with the vast amount of space to be ventilated in large mines, designing and maintaining such a system is challenging and costly. To alleviate these issues, the EIT-RM project VOT3D (Ventilation Optimizing Technology based on 3D scanning) proposes advanced airflow modeling of underground tunnel networks, utilizing computational fluid dynamics (CFD) simulations together with modern surveying and 3D modeling approaches to reverse engineer a reliable geometric model of the mine and estimate the 3D airflow field inside it. In this paper, we present the challenges to be solved in this task and the workflow proposed to address them. An example from an active industrial mine in Poland serves as the basis for experimental data processing using the full, highly automated procedure. Developments and results of underground mobile mapping (with a drone and a handheld system), point cloud processing and filtering, surface reconstruction and CFD modeling are presented. The detailed results of airflow field estimation show the advantages of the proposed solution and promise high practical usefulness.
Title: Optimizing Mining Ventilation Using 3D Technologies
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-415-2024
Christoph Strecha, Martin Rehak, Davide Cucci
Abstract. We present a mobile phone scanning solution that offers a workflow for scanning not only small spaces, where drift can be neglected, but also larger spaces, where drift becomes a major accuracy issue. LiDAR and image data are combined to build 3D representations of indoor spaces. The paper focuses on drift compensation for larger scans on the mobile phone by using AutoTags detections. We show that these detections can also be used to merge multiple independent scans.
Title: Mobile Phone Based Indoor Mapping
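A standard way to merge independent scans from shared tag detections (a sketch of the general idea, not the actual AutoTags implementation) is a least-squares rigid alignment of corresponding tag centres:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation and translation mapping src points onto dst (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Shared tag centres seen in two independent scans (coordinates invented):
# scan 1 observes the same tags as scan 2, but rotated and translated.
tags_scan2 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
tags_scan1 = tags_scan2 @ Rz.T + np.array([2.0, -1.0, 0.5])

R, t = rigid_align(tags_scan2, tags_scan1)
aligned = tags_scan2 @ R.T + t                   # scan 2 expressed in scan 1's frame
assert np.allclose(aligned, tags_scan1, atol=1e-8)
```

The tags only supply the correspondences; once those are known, any rigid-alignment estimator could be substituted.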
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-203-2024
Yuting Lin, Kumiko Suzuki, Shinichiro Sogo
Abstract. Traditional semantic segmentation models often struggle with poor generalizability in zero-shot scenarios, such as recognizing attributes unseen in the training labels. On the other hand, vision-language models (VLMs) have shown promise in improving performance on zero-shot tasks by leveraging semantic information from textual inputs and fusing this information with visual features. However, existing VLM-based methods do not perform as effectively on remote sensing data because such data is lacking in their training datasets. In this paper, we introduce a two-stage fine-tuning approach for a VLM-based segmentation model using a large remote sensing image-caption dataset, which we created with an existing image-captioning model. Additionally, we propose a modified decoder and a visual prompt technique based on a saliency map to enhance segmentation results. Through these methods, we achieve superior segmentation performance on remote sensing data, demonstrating the effectiveness of our approach.
Title: Practical Techniques for Vision-Language Segmentation Model in Remote Sensing
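The saliency-based visual prompt can be read as weighting the input image by a saliency heatmap before it reaches the model; this is our guess at the mechanics, not the paper's exact formulation:

```python
import numpy as np

def apply_saliency_prompt(image, saliency, alpha=0.5):
    """Emphasise salient pixels by dimming the rest.

    image: (H, W, C) floats in [0, 1]; saliency: (H, W) floats in [0, 1].
    alpha controls how strongly non-salient regions are suppressed.
    """
    s = saliency[..., None]              # broadcast over channels
    weight = (1.0 - alpha) + alpha * s   # 1.0 where salient, 1 - alpha where not
    return np.clip(image * weight, 0.0, 1.0)

image = np.ones((2, 2, 3))
saliency = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
prompted = apply_saliency_prompt(image, saliency, alpha=0.5)

assert np.allclose(prompted[0, 0], 1.0)   # fully salient pixel kept as-is
assert np.allclose(prompted[0, 1], 0.5)   # non-salient pixel dimmed
```

The prompted image then replaces the raw image as input, steering the model's attention toward the salient regions.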
Pub Date: 2024-06-11, DOI: 10.5194/isprs-archives-xlviii-2-2024-321-2024
Kei Otomo, Kiichiro Ishikawa
Abstract. The objective of this study is to develop a system that supports rapid ground vehicle activities by planning safe travel routes for ground vehicles from point clouds of wide-area uneven terrain measured using UAVs. However, fast path planning is difficult in complex environments such as large, uneven terrain. This paper therefore proposes a new RRT-based method that can perform fast path planning even in complex environments. In the proposed method, narrow areas that are difficult for ordinary RRTs to explore are first identified in advance, and nodes are placed in these areas to guide the search. During the RRT search, the tree is extended via these guide nodes to traverse the narrow area efficiently. To validate the proposed method, a comparison was made with RRT and RRT-Connect in two environments that include narrow areas. The results show that the proposed method has a higher route-discovery capability, requires at least two times fewer search nodes, and plans paths five times faster than the other RRT variants.
Title: Ground vehicle path planning on Uneven terrain Using UAV Measurement point clouds
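The guide-node idea can be sketched as a biased sampling step in a minimal 2D RRT (an illustration of the concept, not the authors' implementation): with some probability the sampler draws a pre-placed guide node inside a narrow passage, steering the tree through it.

```python
import math
import random

def rrt_with_guides(start, goal, guides, is_free, bounds=(0.0, 10.0),
                    step=0.5, iters=4000, guide_prob=0.2, goal_prob=0.15,
                    goal_tol=0.5, seed=0):
    """Minimal 2D RRT with guide-node biasing."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    lo, hi = bounds
    for _ in range(iters):
        r = rng.random()
        if r < guide_prob and guides:
            sample = rng.choice(guides)          # bias toward a narrow-area guide
        elif r < guide_prob + goal_prob:
            sample = goal                        # standard goal bias
        else:
            sample = (rng.uniform(lo, hi), rng.uniform(lo, hi))
        near = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[near]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:                 # backtrack to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Wall across y in (4.7, 5.3) with a narrow gap at x in (4.5, 5.5);
# one guide node sits inside the gap.
def is_free(p):
    x, y = p
    if 4.7 < y < 5.3 and not (4.5 < x < 5.5):
        return False
    return 0.0 <= x <= 10.0 and 0.0 <= y <= 10.0

path = rrt_with_guides((1.0, 1.0), (9.0, 9.0), guides=[(5.0, 5.0)], is_free=is_free)
assert path is not None and math.dist(path[-1], (9.0, 9.0)) < 0.5
```

Setting guide_prob to 0 recovers a plain goal-biased RRT, which is the kind of baseline the paper compares against.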
Abstract. This study explores the efficacy of vehicle-assisted monitoring for bridge damage assessment, emphasizing the integration of diverse sensor data sources. A novel method utilizing a deep neural network is proposed, enabling the fusion of fixed sensors on bridges and onboard vehicle sensors for damage assessment. The network offers scalability, robustness, and implementability, accommodating various measurement types while handling noise and dynamic loading conditions. The main novel aspect of our work is its ability to extract damage-sensitive features without signal preprocessing for future bridge health monitoring systems. Through numerical evaluations, considering realistic operational conditions, the proposed method demonstrates the capability to detect subtle damage under varying traffic conditions. Findings underscore the importance of integrating vehicle and bridge sensor data for reliable damage assessment, recommending strategies for optimal monitoring implementation by road authorities and bridge owners.
{"title":"Smart Bridge Damage Assessment through Integrated Multi-Sensor Fusion Vehicle Monitoring","authors":"Aminreza Karamoozian, Masood Varshosaz, Amirhossein Karamoozian, Huxiong Li, Zhaoxi Fang","doi":"10.5194/isprs-archives-xlviii-1-2024-937-2024","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-2024-937-2024","url":null,"abstract":"Abstract. This study explores the efficacy of vehicle-assisted monitoring for bridge damage assessment, emphasizing the integration of diverse sensor data sources. A novel method utilizing a deep neural network is proposed, enabling the fusion of fixed sensors on bridges and onboard vehicle sensors for damage assessment. The network offers scalability, robustness, and implementability, accommodating various measurement types while handling noise and dynamic loading conditions. The main novel aspect of our work is its ability to extract damage-sensitive features without signal preprocessing for future bridge health monitoring systems. Through numerical evaluations, considering realistic operational conditions, the proposed method demonstrates the capability to detect subtle damage under varying traffic conditions. 
Findings underscore the importance of integrating vehicle and bridge sensor data for reliable damage assessment, recommending strategies for optimal monitoring implementation by road authorities and bridge owners.\u0000","PeriodicalId":505918,"journal":{"name":"The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":"1 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140970006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
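Feature-level fusion of the two sensor streams can be sketched with a toy network (illustrative only; this is not the paper's architecture, and all weights are random): each stream is encoded separately, the features are concatenated, and a head outputs a damage score.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy two-stream fusion network with random (untrained) weights.
W_bridge = rng.standard_normal((4, 8))    # fixed bridge sensors: 4 channels
W_vehicle = rng.standard_normal((6, 8))   # onboard vehicle sensors: 6 channels
W_head = rng.standard_normal((16, 1))

def damage_score(bridge_signal, vehicle_signal):
    fb = relu(bridge_signal @ W_bridge)        # per-stream encoding
    fv = relu(vehicle_signal @ W_vehicle)
    fused = np.concatenate([fb, fv], axis=-1)  # feature-level fusion
    return 1.0 / (1.0 + np.exp(-(fused @ W_head)))  # sigmoid damage score

score = float(damage_score(rng.standard_normal(4), rng.standard_normal(6))[0])
assert 0.0 <= score <= 1.0
```

In the paper's setting such a network is trained on measurements under varying traffic and noise conditions; the sketch only shows how heterogeneous streams can meet at the feature level.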