
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences: Latest Publications

Calculating of Taftan Volcano Displacement Using PSI Technique and Sentinel 1 Images
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-213-2024
Mahdieh Shirmohammadi, S. Pirasteh, Jie Shen, Jonathan Li
Abstract. Taftan is a semi-active volcano with several craters located in southeastern Iran. The main objective of this study is to determine whether the Taftan peak is subsiding or uplifting. For this purpose, 58 Sentinel-1A images acquired between January 2015 and December 2020 in the ascending orbit mode and 102 Sentinel-1A and Sentinel-1B images acquired between October 2014 and June 2020 in the descending orbit mode were pre-processed. Interferograms were generated with the permanent scatterer interferometry (PSI) method using the SARPROZ and StaMPS software packages, in which atmospheric corrections were applied automatically, and the surface displacement of the Taftan volcano was then derived. The Line-of-Sight (LOS) displacement corresponding to uplift was observed to be 0.5 mm to 1 mm yr-1 for the ascending orbit and 1 mm yr-1 for the descending orbit. Because no GPS station lies close to the Taftan volcano, measurements from the one station located in the study area (the Saravan station, situated inside the town and therefore suitable for validating the PSI technique) were used to check the accuracy of the PSI method. The PSI results were found to be in good agreement with the GPS data.
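As an illustration of the two checks described above (fitting a linear LOS velocity to a permanent-scatterer time series and projecting a GPS displacement onto the radar line of sight for comparison), here is a minimal NumPy sketch. The epochs, displacements, and LOS unit vector are invented toy values, not results from the paper, and the LOS vector is assumed to be precomputed from the satellite heading and incidence angle.

```python
import numpy as np

def los_velocity(t_years, los_mm):
    """Linear LOS velocity (mm/yr) fitted to one permanent-scatterer time series."""
    A = np.column_stack([t_years, np.ones_like(t_years)])
    coeff, *_ = np.linalg.lstsq(A, los_mm, rcond=None)
    return coeff[0]

def enu_to_los(d_enu, unit_los):
    """Project a GPS displacement (east, north, up) onto the radar line of sight;
    unit_los is the LOS unit vector in ENU coordinates (assumed known from the
    satellite heading and incidence angle)."""
    return float(np.dot(d_enu, unit_los))

# toy example: six epochs of one PS point and one GPS displacement check
t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # years since the first acquisition
d = np.array([0.0, 0.1, 0.25, 0.3, 0.45, 0.55])     # simulated LOS uplift in mm
print(f"LOS velocity: {los_velocity(t, d):.2f} mm/yr")
print(f"GPS projected to LOS: {enu_to_los([1.0, 0.5, 2.0], [-0.62, -0.11, 0.78]):.2f} mm")
```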
Citations: 0
Land Cover Classification Based on Multimodal Remote Sensing Fusion
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-35-2024
Wei Chen, Jiage Chen, Yuewu Wan, Xining Liu, Mengya Cai, Jingguo Xu, Hongbo Cui, Mengdie Duan
Abstract. Global land cover data with high precision and high timeliness is a fundamental and strategic resource for safeguarding global strategic interests, studying global environmental change, and planning sustainable development. However, because control and reference information is difficult to obtain for overseas regions, a single data source cannot provide effective coverage, and land cover classification faces significant challenges in information extraction. To address this, this article proposes an intelligent interpretation method for typical elements based on multimodal fusion, starting from the characteristics of domestic remote sensing imagery. It also develops an optical-SAR data conversion and complementarity strategy based on convolutional translation networks, together with a typical element extraction algorithm. This resolves the problems of sparse remote sensing imagery, limited effective observations, and difficult information recognition, thereby achieving automated, high-precision extraction and analysis of typical element information from dense observation time series.
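The optical-SAR conversion strategy is built on convolutional translation networks; as a rough, non-authoritative sketch of what such a translation network looks like, the PyTorch toy model below maps a single-channel SAR patch to a three-channel pseudo-optical patch. The layer choices and sizes are assumptions for illustration only and do not reproduce the authors' network.

```python
import torch
import torch.nn as nn

class SARToOptical(nn.Module):
    """Toy convolutional translation network: 1-channel SAR patch -> 3-channel pseudo-optical patch."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, sar):
        return self.decode(self.encode(sar))

model = SARToOptical()
pseudo_optical = model(torch.randn(2, 1, 128, 128))   # batch of two 128x128 SAR patches
print(pseudo_optical.shape)                            # torch.Size([2, 3, 128, 128])
```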
Citations: 0
360-Degree Tri-Modal Scanning: Engineering a Modular Multi-Sensor Platform for Semantic Enrichment of BIM Models
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-41-2024
Fiona C. Collins, F. Noichl, Martin Slepicka, Gerda Cones, André Borrmann
Abstract. Point clouds, image data, and corresponding processing algorithms are intensively investigated to create and enrich Building Information Models (BIM) with as-is information and maintain their value across the building lifecycle. Point clouds can be captured using LiDAR and enriched with color information from images. Complementary to such dual-sensor systems, thermography captures the infrared light spectrum, giving insight into the temperature distribution on an object’s surface and allowing a diagnosis of the as-is energetic health of buildings beyond what humans can see. Although the three sensor modes are commonly used in pair-wise combinations, only a few systems leveraging the power of tri-modal sensor fusion have been proposed. This paper introduces a sensor system comprising LiDAR, RGB, and a radiometric thermal infrared sensor that can capture a 360-degree range through bi-axial rotation. The resulting tri-modal data is fused into a thermo-color point cloud from which temperature values are derived for a standard indoor building setting. Qualitative data analysis shows the potential for unlocking further object semantics in a state-of-the-art Scan-to-BIM pipeline. Furthermore, an outlook is provided on the cross-modal usage of semantic segmentation for automatic, accurate temperature calculations.
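A central step in building a thermo-color point cloud is projecting each LiDAR point into the calibrated RGB and thermal images to pick up per-point color and temperature. The sketch below shows this projection under the usual pinhole assumptions; the intrinsics K and pose (R, t) are assumed to come from a platform calibration and are not taken from the paper.

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Attach per-pixel values (RGB color or radiometric temperature) to 3D points by
    projecting them into a calibrated image with intrinsics K and camera pose (R, t)."""
    cam = (R @ points.T + t.reshape(3, 1)).T           # world frame -> camera frame
    in_front = cam[:, 2] > 0                           # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective division to pixel coordinates
    h, w = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    values = np.full((points.shape[0],) + image.shape[2:], np.nan)
    values[valid] = image[v[valid], u[valid]]
    return values                                      # NaN where a point is not visible

# toy example: 3 points, identity pose, a 4x4 single-channel "thermal" image
K = np.array([[2.0, 0.0, 2.0], [0.0, 2.0, 2.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.5, 0.5, 1.0], [0.0, 0.0, -1.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
print(colorize_points(pts, img, K, np.eye(3), np.zeros(3)))
```

Calling the same function once with the RGB image and once with the radiometric thermal image yields color and temperature per point.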
Citations: 0
Research on Entity Relationships in the Knowledge Graph of Disease Monitoring in Grotto Temples
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-249-2024
Yiran Wang
Abstract. The grotto temple, carved into cliffs and widely distributed, is a significant cultural heritage in China. However, it faces severe threats of damage and collapse due to natural disaster risks in its environment. Nearly seventy percent of grotto temples are located in regions prone to earthquakes and water hazards, leading to varying degrees of damage to cultural artifacts. Therefore, preventive measures are necessary to reduce the impact of natural disasters on grotto temples. A knowledge graph, a structured semantic knowledge base describing concepts and their relationships in the physical world, plays a crucial role in knowledge organization and content representation. Entity relationships are the core of such knowledge, serving both as foundational data and as a key task in constructing knowledge graphs and processing unstructured text. In the field of grotto temple disease monitoring, although the volume of data continues to grow, research on the correlations among textual data remains underexplored. This paper adopts the BiLSTM-CRF method to extract entity relationships and matches them against the grotto temple monitoring knowledge graph. Finally, the Neo4j software is used to build and display the knowledge graph, aiming to enhance the efficiency of natural disaster risk management and cultural heritage protection for grotto temples.
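After relationship extraction, the resulting triples are loaded into Neo4j for display. As a minimal sketch of that last step, the snippet below merges hypothetical (head, relation, tail) triples into a Neo4j instance using the official Python driver; the entity names, relation labels, and connection details are invented examples, not content from the paper's graph.

```python
from neo4j import GraphDatabase

# hypothetical (head entity, relation, tail entity) triples, e.g. from a BiLSTM-CRF pipeline
triples = [
    ("Mogao Grottoes", "LOCATED_IN", "Dunhuang"),
    ("Cave 98", "SUFFERS_FROM", "salt efflorescence"),
    ("salt efflorescence", "TRIGGERED_BY", "water hazard"),
]

def write_triples(uri, user, password, rows):
    """Merge entity nodes and relationship edges into Neo4j (idempotent via MERGE).
    The relation name is stored as a property, since Cypher cannot parameterize edge types."""
    driver = GraphDatabase.driver(uri, auth=(user, password))
    query = (
        "MERGE (h:Entity {name: $head}) "
        "MERGE (t:Entity {name: $tail}) "
        "MERGE (h)-[:RELATION {type: $rel}]->(t)"
    )
    with driver.session() as session:
        for head, rel, tail in rows:
            session.run(query, head=head, rel=rel, tail=tail)
    driver.close()

# write_triples("bolt://localhost:7687", "neo4j", "password", triples)  # assumed local instance
```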
Citations: 0
Comparing Deep Learning and MCWST Approaches for Individual Tree Crown Segmentation
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-67-2024
Wen Fan, Jiaojiao Tian, Jonas Troles, Martin Döllerer, Mengistie Kindu, T. Knoke
Abstract. Accurate segmentation of individual tree crowns (ITC) is essential for investigating tree-level growth trends and assessing tree vitality. ITC segmentation using remote sensing data faces challenges due to crown heterogeneity, overlapping crowns and data quality. Currently, both classical and deep learning methods have been employed for crown detection and segmentation. However, the effectiveness of deep learning based approaches is limited by the need for high-quality annotated datasets. Benefiting from the BaKIM project, a high-quality annotated dataset can be provided and tested with a Mask Region-based Convolutional Neural Network (Mask R-CNN). In addition, we have used the deep learning based approach to detect tree locations, thus refining the previous Marker-Controlled Watershed Transformation (MCWST) segmentation approach. The experimental results show that the Mask R-CNN model exhibits better performance and lower time cost than the MCWST algorithm for ITC segmentation. In summary, the proposed framework achieves robust and fast ITC segmentation, which has the potential to support various forest applications such as tree vitality estimation.
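For context on the MCWST baseline, here is a compact marker-controlled watershed sketch on a canopy height model using scikit-image: local maxima above a height threshold serve as treetop markers and the watershed grows a crown segment around each one. The thresholds and the synthetic single-crown CHM are illustrative assumptions, not the BaKIM data or the authors' parameter settings.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def mcwst_crowns(chm, min_height=2.0, min_distance=5):
    """Marker-controlled watershed on a canopy height model (CHM): local maxima above
    min_height act as treetop markers, and the watershed grows one crown per marker."""
    mask = chm > min_height                               # ignore ground and low vegetation
    peaks = peak_local_max(chm, min_distance=min_distance, threshold_abs=min_height)
    markers = np.zeros_like(chm, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-chm, markers, mask=mask)            # invert the CHM so crowns are basins

# synthetic CHM with a single dome-shaped crown, 10 m tall
chm = np.maximum(0.0, 10.0 - np.hypot(*np.mgrid[-10.0:11.0, -10.0:11.0]))
labels = mcwst_crowns(chm)
print(np.unique(labels))                                  # [0 1]: background and one crown
```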
Citations: 0
M-GCLO: Multiple Ground Constrained LiDAR Odometry
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-283-2024
Yandi Yang, N. El-Sheimy
Abstract. Accurate LiDAR odometry results contribute directly to high-quality point cloud maps. However, traditional LiDAR odometry methods drift easily in the vertical direction, leading to inaccuracies and inconsistencies in the point cloud maps. Considering the abundant and reliable ground points available to a Mobile Mapping System (MMS), ground points can be extracted and constraints can be built to eliminate pose drift. However, existing LiDAR-based odometry methods either do not use ground point cloud constraints or treat the ground as an infinite plane (i.e., a single ground constraint), making pose estimation prone to errors. Therefore, this paper develops a Multiple Ground Constrained LiDAR Odometry (M-GCLO) method, which extracts multiple ground planes and optimizes their plane parameters for better accuracy and robustness. M-GCLO includes three modules. Firstly, the original point clouds are classified into ground and non-ground points. Ground points are voxelized, and multiple ground planes are extracted, parameterized, and optimized to constrain the pose errors. All non-ground points are used for point-to-distribution matching by maintaining an NDT voxel map. Secondly, a novel method for weighting the residuals is proposed by considering the uncertainties of each point in a scan. Finally, the Jacobians and residuals are given along with the weightings for estimating LiDAR states. Experimental results on the KITTI and M2DGR datasets show that M-GCLO outperforms state-of-the-art LiDAR odometry methods in large-scale outdoor and indoor scenarios.
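The multiple-ground constraint rests on fitting a plane to the ground points of each voxel and penalizing point-to-plane residuals during pose optimization. The following NumPy sketch shows one common way to do this (a PCA plane fit plus signed distances); it is an assumption-level illustration, not the M-GCLO implementation, and the toy patch is synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a ground voxel: returns a unit normal n and centroid c
    such that the signed point-to-plane distance of a point p is n . (p - c)."""
    c = points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov((points - c).T))
    n = eigvecs[:, 0]                      # eigenvector of the smallest eigenvalue
    return n / np.linalg.norm(n), c

def plane_residuals(points, n, c):
    """Signed point-to-plane distances, usable as ground constraints in pose optimization."""
    return (points - c) @ n

# toy ground voxel: a slightly tilted, noisy patch
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.05 * xy[:, 0] + 0.01 * rng.standard_normal(200)
patch = np.column_stack([xy, z])
n, c = fit_plane(patch)
print("normal:", np.round(n, 3), " residual std:", plane_residuals(patch, n, c).std().round(4))
```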
Citations: 0
A Weakly Supervised Vehicle Detection Method from LiDAR Point Clouds
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-123-2024
Yiyuan Li, Yuhang Lu, Xun Huang, Siqi Shen, Cheng Wang, Chenglu Wen
Abstract. Training LiDAR point cloud object detectors requires a significant amount of annotated data, which is time-consuming and labor-intensive to produce. Although weakly supervised 3D LiDAR-based methods have been proposed to reduce the annotation cost, their performance could be further improved. In this work, we propose a weakly supervised LiDAR-based point cloud vehicle detector that does not require any labels for the proposal generation stage and needs only a few labels for the refinement stage. It comprises two primary modules. The first is an unsupervised proposal generation module based on the geometry of point clouds. The second is the pseudo-label refinement module. We validate our method on two point cloud object detection datasets, namely KITTI and ONCE, and compare it with various existing weakly supervised point cloud object detection methods. The experimental results demonstrate the method’s effectiveness with a small amount of labeled LiDAR point clouds.
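To make the label-free proposal idea concrete, the sketch below substitutes a very simple geometric pipeline: remove near-ground points with a height threshold, cluster the remainder with DBSCAN, and emit an axis-aligned box per cluster. The thresholds and the synthetic scene are invented, and the actual proposal module in the paper may differ substantially.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def geometric_proposals(points, ground_z=0.2, eps=0.6, min_points=20):
    """Label-free proposal generation: drop near-ground points by a height threshold,
    cluster the rest with DBSCAN, and return an axis-aligned box per cluster."""
    non_ground = points[points[:, 2] > ground_z]
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(non_ground)
    boxes = []
    for k in set(labels) - {-1}:                      # -1 marks DBSCAN noise
        cluster = non_ground[labels == k]
        boxes.append(np.concatenate([cluster.min(axis=0), cluster.max(axis=0)]))
    return np.array(boxes)                            # each row: (xmin, ymin, zmin, xmax, ymax, zmax)

# synthetic scene: one vehicle-like blob above a flat, noisy ground
rng = np.random.default_rng(1)
car = rng.normal([5.0, 2.0, 0.9], [0.9, 0.4, 0.3], size=(300, 3))
ground = np.column_stack([rng.uniform(0, 20, (500, 2)), rng.uniform(0, 0.1, 500)])
print(geometric_proposals(np.vstack([car, ground])))
```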
Citations: 0
A Multi-scale features-based cloud detection method for Suomi-NPP VIIRS day and night imagery
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-115-2024
Jun Li, Chengjie Hu, Qinghong Sheng, Jiawei Xu, Chongrui Zhu, Weili Zhang
Abstract. Cloud detection is a necessary step before the application of remote sensing images. However, most methods focus on cloud detection in daytime remote sensing images. The often-ignored nighttime remote sensing images play an increasingly important role in many fields such as urban monitoring, population estimation and disaster assessment. The radiation intensity of artificial lights is much more similar to that of clouds in nighttime remote sensing images than in daytime remote sensing images, which makes it difficult to distinguish artificial lights from clouds. Therefore, this paper proposes a deep learning-based method (MFFCD-Net) to detect clouds in day and nighttime remote sensing images. MFFCD-Net is designed based on the encoder-decoder structure. The encoder adopts ResNet-50 as the backbone network for better feature extraction, and a dilated residual up-sampling module (DR-UP) is designed in the decoder for up-sampling feature maps while enlarging the receptive field. A multi-scale feature extraction fusion module (MFEF) is designed to enhance the ability of MFFCD-Net to distinguish the regular textures of artificial lights from the random textures of clouds. A Global Feature Recovery Fusion module (GFRF) is designed to select and fuse features from the encoding stage and the decoding stage, thus achieving better cloud detection accuracy. This is the first time that a deep learning-based method has been designed for cloud detection in both day and nighttime remote sensing images. Experimental results on Suomi-NPP VIIRS DNB images show that MFFCD-Net achieves higher accuracy than baseline methods on both day and nighttime remote sensing images. Results on daytime remote sensing images indicate that MFFCD-Net obtains a better balance between commission and omission rates than baseline methods (92.3% versus 90.5% F1-score). Although artificial lights introduce strong interference in cloud detection in nighttime remote sensing images, MFFCD-Net still achieves values higher than 90% for OA, precision, recall, and F1-score. This demonstrates that MFFCD-Net can distinguish artificial lights from clouds better than baseline methods in nighttime remote sensing images. The effectiveness of MFFCD-Net shows that it is very promising for cloud detection in both day and nighttime remote sensing images.
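Since the abstract describes the DR-UP module only at a high level, the PyTorch block below is a hedged guess at what a dilated residual up-sampling block can look like: dilated 3x3 convolutions enlarge the receptive field, a residual connection preserves features, and bilinear upsampling doubles the resolution. Channel counts and layer ordering are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class DilatedResidualUpBlock(nn.Module):
    """Sketch of a dilated residual up-sampling block for a decoder feature map."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1)        # match channels for the residual sum
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.up(torch.relu(self.body(x) + self.skip(x)))

block = DilatedResidualUpBlock(256, 128)
print(block(torch.randn(1, 256, 24, 24)).shape)        # torch.Size([1, 128, 48, 48])
```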
Citations: 0
Assessing and Improving Automated Viewpoint Planning for Static Laser Scanning Using Optimization Methods
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-177-2024
F. Noichl, Maximilian Stuecke, Clemens Thielen, André Borrmann
Abstract. The preparation of laser scanning missions is important for efficiency and data quality. Furthermore, it is a prerequisite for automated data acquisition, which has numerous applications in the built environment, including autonomous inspections and monitoring of construction progress and quality criteria. The scene and potential scanning locations can be discretized to facilitate the analysis of visibility and quality aspects. The remaining mathematical problem to generate an economic scan strategy is the Viewpoint Planning Problem (VPP), which asks for a minimum number of scanning locations within the given scene to cover the scene under pre-defined requirements. Solutions for this problem are most commonly found using heuristics. While these efficient methods scale well, they cannot generally return globally optimal solutions. This paper investigates the VPP based on a problem description that considers quality-constrained visibility in 3D scenes and suitable overlaps between individual viewpoints for targetless registration of acquired point clouds. The methodology includes the introduction of a preprocessing method designed to simplify the input data without losing information about the problem. The paper details various solution methods for the VPP, encompassing conventional heuristics and a mixed-integer linear programming formulation, which is solved using Benders decomposition. Experiments are carried out on two case study datasets, varying in specifications and sizes, to evaluate these methods. The results show the actual quality of the obtained solutions and their deviation from optimality (in terms of the estimated optimality gap) for instances where exact solutions can not be achieved.
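Stripped of the quality and registration-overlap constraints discussed in the paper, the core of the heuristic baseline is greedy set cover over a discretized scene: at each step, pick the candidate viewpoint that covers the most still-uncovered targets. The sketch below shows that greedy loop on an invented toy instance; it is not the authors' implementation and it ignores the overlap requirement.

```python
def greedy_viewpoints(visibility, required=None):
    """Greedy heuristic for the Viewpoint Planning Problem viewed as set cover:
    repeatedly pick the candidate viewpoint covering the most uncovered targets."""
    required = set().union(*visibility.values()) if required is None else set(required)
    uncovered, chosen = set(required), []
    while uncovered:
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:                       # remaining targets are not visible from any candidate
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# toy instance: candidate scan positions and the surface patches each one sees
visibility = {
    "P1": {1, 2, 3, 4},
    "P2": {3, 4, 5},
    "P3": {5, 6},
    "P4": {6, 7, 8},
}
print(greedy_viewpoints(visibility))       # (['P1', 'P4', 'P2'], set())
```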
Citations: 0
Exploring the Scientific Mechanism of Tree Structure Network based on LiDAR Point Cloud Data
Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-27-2024
Haoliang Chen, Yi Lin
Abstract. To explore how trees optimize their structure, we developed a method based on Pareto optimality theory. The method consists of the following steps. First, we use Quantitative Structure Models for Single Trees from Laser Scanner Data (TreeQSM) to extract tree structures from point clouds acquired through Light Detection and Ranging (LiDAR). Subsequently, we use a graph-theoretical model to characterize the natural tree structure networks and implement a greedy algorithm to generate Pareto-optimal tree structure networks. Finally, based on Pareto optimality theory, we explore whether tree structures are multi-objective optimized. This paper demonstrates that tree structures lie along the Pareto front between minimizing "transport distance" and minimizing "total length". The growth pattern of trees, which produces multi-objective optimized structures, is likely an intrinsic mechanism in the generation of tree structure networks. The location of tree structures along the Pareto front varies under different environmental conditions, reflecting their diverse survival strategies.
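The two competing objectives can be written down directly on a tree graph: "total length" is the sum of branch lengths, and "transport distance" is taken here as the sum of root-to-node path lengths. The networkx sketch below computes both on a toy branching structure; the exact definition used in the paper (for instance, whether only branch tips are summed) is an assumption here.

```python
import networkx as nx

def tree_objectives(tree, root):
    """The two objectives traded off along the Pareto front: total branch length
    (sum of edge lengths) and transport distance (sum of root-to-node path lengths)."""
    total_length = tree.size(weight="length")
    dist = nx.shortest_path_length(tree, source=root, weight="length")
    transport_distance = sum(dist.values())
    return total_length, transport_distance

# toy branching structure: a trunk with two first-order branches
tree = nx.Graph()
tree.add_edge("root", "fork", length=2.0)
tree.add_edge("fork", "tip_a", length=1.5)
tree.add_edge("fork", "tip_b", length=1.0)
print(tree_objectives(tree, "root"))        # (4.5, 8.5)
```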
Citations: 0