
Latest publications — ISPRS Open Journal of Photogrammetry and Remote Sensing

Photogrammetric rockfall monitoring in Alpine environments using M3C2 and tracked motion vectors
Pub Date: 2024-04-01 Epub Date: 2024-02-06 DOI: 10.1016/j.ophoto.2024.100058
Lukas Lucks, Uwe Stilla, Ludwig Hoegner, Christoph Holst

This paper introduces methods for monitoring rock slope movements in Alpine environments based on terrestrial images. The first method is a photogrammetric point cloud-based deformation analysis relying on M3C2. Although effective in identifying large changes, the method tends to underestimate smaller-scale movements. A feature-based method is presented to address this limitation, using SIFT features to track keypoints in images from different epochs. These automatically detected 3D vectors offer high spatial density and enable small-scale movement detection on the order of a few millimeters. The results are incorporated into a deformation analysis that allows statistically based conclusions about the ongoing movements. The workflow relies on georegistration using Ground Control Points. To investigate the possibility of avoiding these points, a registration method based on the ICP algorithm and M3C2 is tested. The study utilizes data from an active landslide site at Hochvogel Mountain in the Alps, analyzing changes and deformations from 2018 to 2021 and revealing an average motion of 75 mm.
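The M3C2 comparison underlying the first method can be illustrated with a minimal sketch: points of each epoch falling inside a cylinder oriented along a core point's normal are projected onto that normal, and the difference of the two mean projections is the signed change. This is a simplification under stated assumptions (fixed normal and fixed cylinder radius; the real algorithm estimates normals and scales from the data, and all names below are illustrative):

```python
import numpy as np

def m3c2_distance(core, normal, cloud1, cloud2, radius=0.5):
    """Signed distance between two epochs at a core point (simplified M3C2).

    Points within a cylinder of the given radius around the normal are
    projected onto the normal; the difference of mean projections is the
    estimated surface change at the core point."""
    normal = normal / np.linalg.norm(normal)

    def mean_projection(cloud):
        rel = cloud - core                      # vectors from the core point
        along = rel @ normal                    # signed distance along normal
        perp = rel - np.outer(along, normal)    # component off the axis
        inside = np.linalg.norm(perp, axis=1) <= radius
        return along[inside].mean() if inside.any() else np.nan

    return mean_projection(cloud2) - mean_projection(cloud1)

# Two synthetic epochs: a flat patch that moved 0.075 m along +z.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
epoch1 = np.column_stack([xy, np.zeros(500)])
epoch2 = np.column_stack([xy, np.full(500, 0.075)])
d = m3c2_distance(np.zeros(3), np.array([0.0, 0.0, 1.0]), epoch1, epoch2)
```

On this synthetic flat patch the recovered distance equals the imposed 75 mm shift, mirroring the average motion scale reported above.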

Citations: 0
Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks
Pub Date: 2024-04-01 Epub Date: 2024-03-01 DOI: 10.1016/j.ophoto.2024.100061
Mikael Reichler, Josef Taher, Petri Manninen, Harri Kaartinen, Juha Hyyppä, Antero Kukko

Real-time semantic segmentation of point clouds is of increasing importance in applications related to 3D city modelling and mapping, automated forest inventory, autonomous driving and mobile robotics. Current state-of-the-art point cloud semantic segmentation methods rely heavily on the availability of 3D laser scanning data. This is problematic for low-latency, real-time applications that use data from high-precision mobile laser scanners, as those are typically 2D line scanning devices. In this study, we experiment with real-time semantic segmentation of high-density multispectral point clouds collected from 2D line scanners in urban environments using encoder-decoder convolutional neural network architectures. We introduce a rasterized multi-scan input format that can be constructed exclusively from the raw (non-georeferenced profiles) 2D laser scanner measurement stream without odometry information. In addition, we investigate the impact of multispectral data on segmentation accuracy. The dataset used for training, validation and testing was collected with the multispectral FGI AkhkaR4-DW backpack laser scanning system operating at wavelengths of 905 nm and 1550 nm, and consists of 228 million points (39,583 scans) in total. The data was divided into 13 classes representing various targets in urban environments. The results show that the increased spatial context of the multi-scan format improves segmentation performance on the single-wavelength lidar dataset from 45.4 mIoU (a single scan) to 62.1 mIoU (24 consecutive scans). In the multispectral point cloud experiments we achieved 71% and 28% relative increases in segmentation mIoU (43.5 mIoU) compared to the purely single-wavelength reference experiments, which achieved 25.4 mIoU (905 nm) and 34.1 mIoU (1550 nm). Our findings show that 2D line scanner data can be semantically segmented with good results by combining consecutive scans, without the need for odometry information. The results also serve as motivation for developing multispectral mobile laser scanning systems that can be used in challenging urban surveys.
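The mIoU figures quoted above follow the standard per-class intersection-over-union averaged across classes. A minimal sketch (the class count and label arrays below are illustrative, not the paper's data):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:                    # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

target = np.array([0, 0, 1, 1, 2, 2])
pred   = np.array([0, 0, 1, 2, 2, 2])
miou = mean_iou(pred, target, num_classes=3)  # (1 + 0.5 + 2/3) / 3
```

Here class 0 is perfectly segmented, class 1 has one miss and class 2 one false positive, giving a mean IoU of about 0.722.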

Citations: 0
Improving spatial transferability of deep learning models for small-field crop yield prediction
Pub Date: 2024-04-01 Epub Date: 2024-04-23 DOI: 10.1016/j.ophoto.2024.100064
Stefan Stiller, Kathrin Grahmann, Gohar Ghazaryan, Masahiro Ryo

Predicting crop yield using deep learning (DL) and remote sensing is a promising technique in agriculture. In smallholder agriculture (<2 ha), which accounts for 84% of farms globally, it is crucial to build a model that can be useful across several fields (high spatial transferability). However, enhancing spatial model transferability in a small-scale setting faces significant challenges, including spatial autocorrelation, heterogeneity and scale dependence of spatial dynamics, as well as the need to address limited data points. This study aimed to test the hypothesis that spatial cross validation (SCV) is a more suitable model validation practice than random cross validation (RCV) to enhance model transferability for spatial prediction in a small-scale farming setting. We compared the performances of DL models that predict crop yield for several settings, including three crop types and two DL architectures, using RCV (with and without overlapping samples) and SCV. Notably, we conducted model performance tests on external, equally sized fields instead of the field used for training. We used high-resolution RGB imagery taken with a drone as input. Our results show that the models using SCV outperformed those using RCV when the models were tested on external fields (on average r = 0.37 for SCV, r = 0.18 for RCV with overlap and r = 0.07 without), even though the models using SCV showed a substantially lower performance for cross validation (CV) than those using RCV (r = 0.73 for SCV, compared to 0.98 and 0.73 for RCV with and without overlap). The results suggest that RCV leads to over-optimism by overfitting the spatial structure and remembering image-specific information (so-called memorization). Our study offers the first empirical evidence in agriculture that SCV is preferable to RCV in small field settings for making DL models more transferable.
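The difference between RCV and SCV comes down to how folds are formed: SCV groups samples by spatial block so that train and test sets never share a neighbourhood. A minimal sketch of block-wise fold assignment (the grid size and coordinates are illustrative; the paper's blocking scheme may differ):

```python
import numpy as np

def spatial_folds(coords, block_size):
    """Assign each sample to a fold by the spatial grid block it falls in,
    so train and test samples never share a block. This prevents leakage
    from spatial autocorrelation, unlike a purely random split."""
    blocks = np.floor(coords / block_size).astype(int)
    # one fold id per unique grid block
    _, fold = np.unique(blocks, axis=0, return_inverse=True)
    return fold.ravel()

coords = np.array([[0.1, 0.2], [0.4, 0.1], [1.2, 0.3], [1.4, 1.6]])
folds = spatial_folds(coords, block_size=1.0)
```

The first two points fall in the same 1 x 1 block and therefore share a fold, while the other two land in separate folds; holding out one fold at a time yields a spatially disjoint test set.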

Citations: 0
Photogrammetric rockfall monitoring in Alpine environments using M3C2 and tracked motion vectors
Pub Date: 2024-02-01 DOI: 10.1016/j.ophoto.2024.100058
Lukas Lucks, Uwe Stilla, L. Hoegner, Christoph Holst
Citations: 0
Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning
Pub Date: 2024-02-01 DOI: 10.1016/j.ophoto.2024.100059
Miguel Vallejo, K. Anders, O. Ajayi, Olaf Bubenzer, B. Höfle
Citations: 0
Revisiting the Past: A comparative study for semantic segmentation of historical images of Adelaide Island using U-nets
Pub Date: 2024-01-01 Epub Date: 2023-12-25 DOI: 10.1016/j.ophoto.2023.100056
Felix Dahle, Roderik Lindenbergh, Bert Wouters

The TriMetrogon Aerial (TMA) archive is an archive of historical images of Antarctica taken by the US Navy between 1940 and 2000 with analogue cameras. The analysis of such historic data can give a view of Antarctica's glaciers predating modern satellite imagery and provide unique insights into the long-term impact of changing climate conditions with essential validation data for climate modelling. However, the lack of semantic information for these images presents a challenge for large-scale computer-driven analysis.

Such information can be added to the data using semantic segmentation, but traditional algorithms fail on these scanned historical grayscale images due to varying image quality, lack of colour information and artefacts in the images. To address this, we present a deep-learning-based U-net workflow. Our approach includes creating training data by pre-processing and labelling the raw images. Furthermore, different versions of the U-net are trained to optimize its hyperparameters and augmentation methods. With the optimal hyperparameters and augmentation methods, a final model has been trained for a use case to segment 118 images covering Adelaide Island.

We tested our approach by segmenting challenging historical images using a U-net model with just 80 training images, achieving an accuracy of 73% for 20 validation images. While no test data is available for our use case, a visual examination of the segmented images shows that our method performs effectively.

The comparison of the hyperparameters and augmentation methods provides directions for training other U-net-based models, so that the presented workflow can be used to segment other archives of historical imagery. Additionally, the labelled training data and the segmented images of the test are publicly available at https://github.com/fdahle/antarctic_segmentation.
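If the 73% figure above is read as plain per-pixel accuracy (an assumption on our part; the paper may use a different definition), the evaluation step reduces to comparing predicted and reference label maps:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the reference label."""
    return float(np.mean(pred == target))

# Illustrative 2x2 label maps: three of four pixels agree.
pred = np.array([[0, 1],
                 [1, 1]])
target = np.array([[0, 1],
                   [0, 1]])
acc = pixel_accuracy(pred, target)
```

For segmentation tasks with imbalanced classes, per-class IoU is usually reported alongside this number, since accuracy alone can hide poor performance on rare classes.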

Citations: 0
Principled bundle block adjustment with multi-head cameras
Pub Date: 2024-01-01 Epub Date: 2023-11-24 DOI: 10.1016/j.ophoto.2023.100051
Eleonora Maset, Luca Magri, Andrea Fusiello

This paper examines the effects of enforcing relative orientation constraints in bundle adjustment and provides a full derivation of the Jacobian matrix for such an adjustment, which can be used to facilitate other implementations of bundle adjustment with constrained cameras. We present empirical evidence demonstrating improved accuracy and reduced computational load when these constraints are imposed.
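In general form, a constrained multi-head adjustment of the kind described above minimises the reprojection error while tying each head camera to the platform through a fixed relative pose. A sketch of such a cost function (the notation is ours, not the paper's):

```latex
\min_{\{R_t,\,\mathbf{c}_t\},\,\{\mathbf{X}_j\}}
\;\sum_{t}\sum_{h}\sum_{j}
\left\| \mathbf{x}_{thj}
- \pi\!\Bigl( K_h \bigl( R_h\, R_t (\mathbf{X}_j - \mathbf{c}_t) + \mathbf{t}_h \bigr) \Bigr) \right\|^2
```

Here $(R_t, \mathbf{c}_t)$ is the platform pose at epoch $t$, $(R_h, \mathbf{t}_h)$ the fixed head-to-platform relative orientation shared across all epochs, $K_h$ the head's calibration, $\mathbf{X}_j$ an object point, and $\pi$ the perspective projection. Because $(R_h, \mathbf{t}_h)$ is shared over all $t$, the Jacobian gains blocks with respect to both the per-epoch platform pose and the common relative pose, which is what reduces the parameter count compared to treating every head image as an independent camera.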

Citations: 0
ICESat-2 noise filtering using a point cloud neural network
Pub Date: 2024-01-01 Epub Date: 2023-12-06 DOI: 10.1016/j.ophoto.2023.100053
Mariya Velikova, Juan Fernandez-Diaz, Craig Glennie

The ATLAS sensor onboard the ICESat-2 satellite is a photon-counting lidar (PCL) whose primary mission is to map Earth's ice sheets. A secondary goal of the mission is to provide vegetation and terrain elevations, which are essential for calculating the planet's biomass carbon reserves. A drawback of ATLAS is that the sensor does not provide reliable terrain height estimates in dense, high-closure forests, because only a few photons reach the ground through the canopy and return to the detector. This low penetration translates into lower accuracy for the resulting terrain model. Tropical forest measurements with ATLAS face an additional problem in estimating top of canopy, because frequent atmospheric phenomena such as fog and low clouds can be misinterpreted as the top of the canopy. To alleviate these issues, we propose using a ConvPoint neural network for 3D point clouds, with high-density airborne lidar as training data, to classify vegetation and terrain returns from ATLAS. The semantic segmentation network provides excellent results and could be used in parallel with the current ATL08 noise-filtering algorithms, especially in areas with dense vegetation. We use high-density airborne lidar data acquired along ICESat-2 transects in Central American forests as a ground reference for training the neural network to distinguish between noise photons and photons lying between the terrain and the top of the canopy. Each photon event receives a label (noise or signal) in the test phase, providing automated noise filtering of the ATL03 data. The terrain and top of canopy elevations are subsequently aggregated in 100 m segments using a series of iterative smoothing filters. We demonstrate improved estimates for both terrain and top of canopy elevations compared to the ATL08 100 m segment estimates. The neural network (NN) noise filtering reliably eliminated outlier top of canopy estimates caused by low clouds, and the aggregated root mean square error (RMSE) decreased from 7.7 m for ATL08 to 3.7 m for the NN prediction (18 test profiles aggregated). For terrain elevations, RMSE decreased from 5.2 m for ATL08 to 3.3 m for the NN prediction, compared to airborne lidar reference profiles.
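The 100 m aggregation step can be sketched as binning classified signal photons by along-track distance, taking a robust per-segment statistic, and applying a few passes of smoothing. The segment length is from the abstract; the median statistic, the three-point mean filter and its edge handling are illustrative assumptions, as the paper's filters differ in detail:

```python
import numpy as np

def aggregate_segments(along_track, elevation, seg_len=100.0, passes=2):
    """Median elevation per along-track segment, followed by a few passes
    of a three-point moving-average smoother over the segment values."""
    seg = np.floor(along_track / seg_len).astype(int)
    ids = np.unique(seg)
    z = np.array([np.median(elevation[seg == i]) for i in ids])
    for _ in range(passes):                        # iterative smoothing
        z = np.convolve(z, np.ones(3) / 3, mode="same")
        z[0], z[-1] = z[1], z[-2]                  # crude edge handling
    return ids, z

# Synthetic profile: 500 m of track with a 2 m terrain step at 250 m.
x = np.linspace(0, 499, 500)
h = np.where(x < 250, 10.0, 12.0)
ids, z = aggregate_segments(x, h)
```

The step is spread across the middle segment (whose median lands between the two levels) and the smoothing passes taper the transition, which is the qualitative behaviour one wants before differencing against a reference profile.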

Statistically assessing vertical change on a sandy beach from permanent laser scanning time series
Pub Date : 2024-01-01 Epub Date: 2023-12-27 DOI: 10.1016/j.ophoto.2023.100055
Mieke Kuschnerus, Roderik Lindenbergh, Sander Vos, Ramon Hanssen

In view of climate change, understanding and managing its effects on coastal areas and adjacent cities is essential. Permanent Laser Scanning (PLS) is a successful technique for observing notably sandy coasts not just incidentally or once a year, but (nearly) continuously over extended periods of time. The collected observations form a 4D point cloud data set representing the evolution of the coast and provide the opportunity to assess change processes at a high level of detail. For an exemplary location in Noordwijk, The Netherlands, three years of hourly point clouds were acquired on a 1 km long section of a typical Dutch urban sandy beach. Often, the so-called level of detection is used to assess point cloud differences between two epochs. To explicitly incorporate the temporal dimension of the height estimates from the point cloud data set, we revisit statistical testing theory. We apply multiple hypothesis testing to elevation time series in order to identify different coastal processes, such as aeolian sand transport or bulldozer works. We then estimate the minimal detectable bias for different alternative hypotheses, to quantify the minimal elevation change that can be estimated from the PLS observations over a certain period of time. Additionally, we analyse potential error sources and influences on the elevation estimates, and provide orders of magnitude and possible ways to deal with them. Finally, we conclude that elevation time series from a long-term PLS data set are a suitable input for identifying aeolian sand transport with the help of multiple hypothesis testing. In our example case, slopes of 0.032 m/day and sudden changes of 0.031 m can be identified with a statistical power of 80% and a significance of 95% in 24-h time series on the upper beach.
In the intertidal area, the presented method makes it possible to classify daily elevation time series over one month according to the dominant model (sudden change or linear trend) in either eroding or accreting behaviour.
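The competing models — a constant elevation (null hypothesis), a linear trend (e.g. aeolian sand transport), and a sudden change (e.g. bulldozer works) — can be illustrated with a simplified model-fitting sketch. The paper's noise model, test statistics, and detection thresholds are omitted here, so this is only a stand-in for the full multiple-hypothesis-testing scheme:

```python
import numpy as np

def fit_models(t, z):
    """Fit three candidate models to an elevation time series and return
    their residual sums of squares:
      'constant' : H0, stable elevation
      'trend'    : linear trend fitted by least squares
      'step'     : sudden change, brute-forced over candidate epochs
    The model with the lowest residuals (subject to a proper statistical
    test, not shown) indicates the dominant process."""
    t = np.asarray(t, float)
    z = np.asarray(z, float)
    n = z.size
    rss_const = float(np.sum((z - z.mean()) ** 2))
    # Linear trend: z = a + b*t via least squares.
    A = np.column_stack([np.ones(n), t])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    rss_trend = float(np.sum((z - A @ coef) ** 2))
    # Sudden change: two constant levels split at the best epoch k.
    rss_step = min(
        float(np.sum((z[:k] - z[:k].mean()) ** 2)
              + np.sum((z[k:] - z[k:].mean()) ** 2))
        for k in range(1, n)
    )
    return {"constant": rss_const, "trend": rss_trend, "step": rss_step}
```

In the paper's framework, each alternative hypothesis would additionally be tested against the null using the observation noise, yielding the minimal detectable bias; here the residuals merely indicate which model fits best.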

An integrated data-driven approach to monitor and estimate plant-scale growth using UAV
Pub Date : 2024-01-01 Epub Date: 2023-11-30 DOI: 10.1016/j.ophoto.2023.100052
Philippe Vigneault, Joël Lafond-Lapalme, Arianne Deshaies, Kosal Khun, Samuel de la Sablonnière, Martin Filion, Louis Longchamps, Benjamin Mimee

UAV-mounted sensors can be used to estimate crop biophysical traits, offering an alternative to traditional field scouting. However, the high temporal resolution offered by UAV platforms, critical for identifying small differences in crop conditions, is rarely exploited throughout the entire growing season. This limits growers' ability to obtain timely information for real-time interventions. New findings show that it is possible to parametrize an entire crop growth cycle under different conditions by accumulating sufficient data over time and using logistic growth models to highlight growth patterns. A step forward would be to model the crop growth cycle at the plant level in order to anticipate the optimal harvest dates in each plot or quickly identify growth problems. Individual plant monitoring can be achieved by combining high spatial resolution images with accurate segmentation algorithms. The main objective of the study was therefore to develop and validate an integrated pipeline based on multidimensional data to extract predictive growth metrics for crop monitoring at the plant level under various field conditions. The plant growth monitoring workflow was based on a three-step design ultimately leading to decision-making and reporting. Lettuce (Lactuca sativa L.) was chosen as a model plant due to its simple geometry, rapid growth and simple cultivation method. Treatments were composed of contrasting cover crops. Overall, correlation analysis showed that UAV-derived morphological metrics are reliable proxies for harvested biomass throughout the growing season, especially in later stages (Spearman's ρ > 0.9), and can be used as growth indicators. Therefore, Logistic Growth Curves (LGCs) were fitted to Crop Object Area (COA) values for each individual lettuce, using data up to 26 (generating G26 LGCs), 30 (G30) and 37 (G37) Days After Transplant (DAT). To assess the quality of their projections, G26 and G30 were compared to the reference LGC G37.
The results indicated that the Mean Absolute Percentage Error (MAPE) of projected COA was 9.6% and 6.8% for G26 and G30 respectively. Overall, the LGC parameters were close to the reference and highly correlated with the harvested biomass. The study also demonstrated that good insight into plant maturity can be obtained by modeling the LGC 13 days before harvest. Furthermore, a dashboard was proposed to monitor current and projected maturity levels, highlighting areas for further investigation. This novel integrated pipeline has the potential to become a valuable tool for research, on-farm decision making, and field interventions by providing data on plant biomass, maturity, and growth stages under different conditions, used as crop growth indicators.
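Fitting an LGC to per-plant COA observations and scoring projections with MAPE can be sketched as below, using a generic three-parameter logistic; the paper's exact parametrization and fitting procedure may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic growth curve: carrying capacity K,
    growth rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def fit_lgc(dat, coa):
    """Fit a logistic growth curve to Crop Object Area (COA)
    observations indexed by Days After Transplant (DAT)."""
    dat, coa = np.asarray(dat, float), np.asarray(coa, float)
    p0 = (coa.max(), 0.1, np.median(dat))  # rough starting values
    params, _ = curve_fit(logistic, dat, coa, p0=p0, maxfev=10000)
    return params  # (K, r, t0)

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)
```

A G26-style projection would fit `fit_lgc` on observations up to 26 DAT and evaluate `mape` against the reference curve at later dates.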
