
Latest publications: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences

Concepts for compensation of wave effects when measuring through water surfaces in photogrammetric applications
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-289-2024
C. Mulsow, H. Sardemann, Laure-Anne Gueguen, Gottfried Mandlburger, Hans-Gerd Maas
Abstract. A common problem when imaging and measuring through moving water surfaces is the quasi-random refraction caused by waves. The article presents two strategies to overcome this problem by reducing the complexity to a planar air/water interface problem. In general, the methods assume that the shape of the water surface changes randomly over time and that the surface oscillates around an idle state (a calm, planar water surface). Thus, moments at which the surface normal is oriented vertically should occur more frequently than others. By analysing a sequence of images taken from a stable camera position, these moments can be identified; this can be done in either image space or object space. It will be shown that a simple median filtering of the grey values at each pixel position can provide a corrected image freed from wave and glint effects. Such an image should have the geometry of an image taken through a calm water surface. In the case of multi-camera setups, however, the problem can be analysed in object space: by tracking homologous underwater features, sets of image rays that happen to hit horizontally oriented water surface areas can be identified. Both methods are described in depth and evaluated on real and simulated data.
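The per-pixel median compositing described in the abstract can be sketched in a few lines of NumPy (a minimal illustration, not the authors' implementation): for a stack of co-registered frames from a fixed camera, the temporal median at each pixel suppresses the sporadic wave and glint outliers.

```python
import numpy as np

def median_composite(frames):
    """Per-pixel temporal median over a stack of co-registered frames.

    frames: array of shape (T, H, W) of grey-value images taken from a
    fixed camera position. The median suppresses the quasi-random
    disturbances caused by waves and glints, approximating an image
    taken through a calm, planar water surface.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return np.median(frames, axis=0)

# Tiny demonstration: a constant scene disturbed by sparse glint outliers.
stack = np.full((15, 4, 4), 100.0)
for t in range(15):
    # Corrupt one pixel per frame; each pixel is hit at most once in 15 frames.
    stack[t, t % 4, (t // 4) % 4] = 255.0

corrected = median_composite(stack)
```

Because each pixel is an outlier in at most one of the 15 frames, the median recovers the undisturbed grey value everywhere.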
Comparative Evaluation of NeRF Algorithms on Single Image Dataset for 3D Reconstruction
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-73-2024
F. Condorelli, Maurizio Perticarini
Abstract. The reconstruction of three-dimensional scenes from a single image represents a significant challenge in computer vision, particularly in the context of cultural heritage digitisation, where datasets may be limited or of poor quality. This paper addresses this challenge by conducting a study of the latest and most advanced algorithms for single-image 3D reconstruction, with a focus on applications in cultural heritage conservation. Exploiting different single-image datasets, the research evaluates the strengths and limitations of various artificial intelligence-based algorithms, in particular Neural Radiance Fields (NeRF), in reconstructing detailed 3D models from limited visual data. The study includes experiments on scenarios such as inaccessible or non-existent heritage sites, where traditional photogrammetric methods fail. The results demonstrate the effectiveness of NeRF-based approaches in producing accurate, high-resolution reconstructions suitable for visualisation and metric analysis. The results contribute to advancing the understanding of NeRF-based approaches in handling single-image inputs and offer insights for real-world applications such as object location and immersive content generation.
Evaluation of Active and Passive UAV-Based Surveying Systems for Eulittoral Zone Mapping
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-9-2024
R. Arav, C. Ressl, Robert Weiss, Thomas Artz, Gottfried Mandlburger
Abstract. The eulittoral zone, which alternates between being exposed and submerged, presents a challenge for high-resolution characterization. Normally, its mapping is divided between low and high water levels, each of which calls for a different type of surveying instrument. This leads to inconsistent mapping products, both in accuracy and in resolution. Recently, uncrewed airborne vehicle (UAV) based photogrammetry has been suggested as a readily available, low-cost solution. However, relying on a passive sensor, this approach requires adequate environmental conditions, and its ability to map inundated regions is limited. Alternatively, UAV-based topo-bathymetric laser scanners enable the acquisition of both submerged and exposed regions independent of lighting conditions while maintaining acquisition flexibility. In this paper, we evaluate the applicability of such systems in the eulittoral zone. To do so, both topographic and topo-bathymetric LiDAR sensors were mounted on UAVs to map a coastal region along the river Rhein. The resulting point clouds were compared to UAV-based photogrammetric ones. Aspects such as point spacing, absolute accuracy, and vertical offsets were analysed. To provide operative recommendations, each LiDAR scan was acquired at different flying altitudes, while the photogrammetric point clouds were georeferenced based on different exterior information configurations. To assess the riverbed modelling, we compared the surface model acquired by the topo-bathymetric LiDAR sensor to multibeam echosounder measurements. Our analysis shows that the accuracies of the LiDAR point clouds are hardly affected by flying altitude. The derived riverbed elevation, on the other hand, shows a bias which is linearly related to water depth.
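A linearly depth-dependent riverbed bias, as reported in the abstract, suggests a straightforward correction: fit a line of the elevation error against water depth at check points and subtract it. A minimal sketch on synthetic data (the bias coefficients below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Synthetic check points: water depth (m) and LiDAR-minus-echosounder
# elevation differences following an assumed linear bias model.
depth = np.linspace(0.5, 5.0, 50)
true_slope, true_offset = -0.02, 0.01            # hypothetical coefficients (m/m, m)
rng = np.random.default_rng(1)
error = true_slope * depth + true_offset + rng.normal(0.0, 0.002, depth.size)

# Least-squares fit of the depth-dependent bias, then remove it.
slope, offset = np.polyfit(depth, error, deg=1)
corrected = error - (slope * depth + offset)      # residuals after correction
```

With the bias modelled and removed, the residuals are centred on zero, leaving only the random measurement noise.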
Novel Approaches for Aligning Geospatial Vector Maps
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-55-2024
M. A. Cherif, S. Tripodi, Y. Tarabalka, Isabelle Manighetti, L. Laurore
Abstract. The surge in data across diverse fields creates an essential need for advanced techniques to merge and interpret this information. With a special emphasis on compiling geospatial data, this integration is crucial for unlocking new insights from geographic data, enhancing our ability to map and analyze trends that span different locations and environments with more authenticity and reliability. Existing techniques have made progress in addressing data fusion; however, challenges persist in fusing and harmonizing data from different sources, scales, and modalities. This research presents a comprehensive investigation into the challenges and solutions in vector map alignment, focusing on developing methods that enhance the precision and usability of geospatial data. We explored and developed three distinct methodologies for polygonal vector map alignment: ProximityAlign, which excels in precision within urban layouts but faces computational challenges; the Optical Flow Deep Learning-Based Alignment, noted for its efficiency and adaptability; and the Epipolar Geometry-Based Alignment, effective in data-rich contexts but sensitive to data quality. In practice, the proposed approaches serve as tools to benefit as much as possible from existing datasets while respecting a spatial reference source. They also serve as a paramount preliminary step for the data fusion task, reducing its complexity.
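As a rough illustration of proximity-driven alignment (a toy translation-only sketch, not the paper's ProximityAlign algorithm): estimate the shift between two vertex sets by repeatedly matching each source vertex to its nearest counterpart and moving by the mean offset, an ICP-style iteration.

```python
import numpy as np

def estimate_translation(src, dst, iterations=10):
    """Iteratively estimate the 2D translation mapping src onto dst.

    Each iteration matches every source vertex to its nearest destination
    vertex and shifts by the mean offset (a translation-only ICP step).
    """
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    shift = np.zeros(2)
    for _ in range(iterations):
        # Pairwise distances between current source and destination vertices.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        step = (matched - src).mean(axis=0)
        src += step
        shift += step
    return shift

# A unit square misaligned by a known translation.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
moved = square + np.array([0.3, -0.2])
t = estimate_translation(square, moved)
```

For offsets small relative to vertex spacing, the nearest-neighbour matching is correct from the first iteration and the estimate recovers the true shift exactly.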
Forecasting water resources from satellite image time series using a graph-based learning strategy
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-81-2024
Corentin Dufourg, Charlotte Pelletier, Stéphane May, Sébastien Lefèvre
Abstract. In the context of climate change, it is important to monitor the dynamics of the Earth’s surface in order to prevent extreme weather phenomena such as floods and droughts. To this end, global meteorological forecasting is constantly being improved, with a recent breakthrough in deep learning methods. In this paper, we propose to adapt a recent weather forecasting architecture, called GraphCast, to a water resources forecasting task using high-resolution satellite image time series (SITS). Based on an intermediate mesh, the data geometry used within the network is adapted to match high spatial resolution data acquired in two-dimensional space. In particular, we introduce a predefined irregular mesh based on a segmentation map to guide the network’s predictions and bring more detail to specific areas. We conduct experiments forecasting a water resources index two months ahead on lakes and rivers in Italy and Spain. We demonstrate that our adaptation of GraphCast outperforms existing frameworks designed for SITS analysis. The method also showed stable results with respect to its main hyperparameter, the number of superpixels. We conclude that adapting global meteorological forecasting methods to SITS settings can be beneficial for high spatial resolution predictions.
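The predefined irregular mesh can be viewed as a graph whose nodes are segmentation regions (e.g. superpixels) and whose edges connect touching regions. A minimal sketch of deriving such a graph from a 2D label map (illustrative only; the paper builds on GraphCast's own mesh machinery):

```python
import numpy as np

def region_adjacency(labels):
    """Build node centroids and adjacency edges from a 2D label map.

    Nodes are the distinct region labels; an edge links two regions
    wherever their pixels are 4-neighbours in the map.
    """
    labels = np.asarray(labels)
    edges = set()
    # Horizontally and vertically adjacent pixel pairs with differing labels.
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        mask = a != b
        for u, v in zip(a[mask].ravel(), b[mask].ravel()):
            edges.add((min(u, v), max(u, v)))
    # Region centroids serve as graph node positions.
    ys, xs = np.indices(labels.shape)
    centroids = {r: (ys[labels == r].mean(), xs[labels == r].mean())
                 for r in np.unique(labels)}
    return centroids, sorted(edges)

# A 4x4 map with three vertical strips: region 0 | region 1 | region 2.
lab = np.array([[0, 1, 1, 2]] * 4)
cent, edg = region_adjacency(lab)
```

Here regions 0 and 2 only touch region 1, so the graph has exactly the edges (0, 1) and (1, 2).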
Validation of Camera Networks Used for the Assessment of Speech Movements
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-41-2024
Liam Boyle, P. Helmholz, D. Lichti, Roslyn Ward
Abstract. The term speech sound disorder describes a range of speech difficulties in children that affect speech intelligibility. Differential diagnosis is difficult and reliant on access to validated and reliable measures. Technological advances aim to provide clinical access to measurements that have been identified as beneficial in diagnosing speech disorders. To generate objective measurements and, consequently, automatic scores, the output from multi-camera networks is required to produce quality results. The quality of photogrammetric results is usually expressed in terms of the precision and reliability of the network. Precision is determined at the design stage as a function of the geometry of the network. In this manuscript, we focus on the design of a photogrammetric camera network using three cameras. We adopted a workflow similar to Alsadika et al. (2012) and tested several network configurations. As the distances from the camera stations to object points were fixed at 3500 mm, only the horizontal and vertical placements of the cameras were varied. Horizontal angles were changed in increments of 10°, and vertical angles in increments of 5°. The object space coordinates of the GCPs for each camera configuration were assessed in terms of horizontal error ellipses and vertical precision. The best design used the maximum horizontal and vertical convergence angles of 90° and 30°, respectively. The existing camera network used to capture videos for speech assessment was approximately as good as the top third of the tested designs. However, from a validation perspective, it can be concluded that the design is viable for continued use.
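The effect of convergence angle on intersection precision can be illustrated with a small Monte-Carlo forward intersection in 2D (an illustrative sketch, not the network simulation used in the paper): two stations observe a point 3500 mm away, each ray direction carries a little angular noise, and the spread of the intersected points shrinks as the convergence angle grows.

```python
import numpy as np

def intersection_spread(convergence_deg, sigma_deg=0.01, trials=2000, dist=3500.0):
    """Largest coordinate std. dev. of 2D two-ray intersections.

    Two stations placed symmetrically observe a point at `dist` mm;
    each ray direction is perturbed by Gaussian noise of `sigma_deg` degrees.
    """
    rng = np.random.default_rng(42)
    half = np.deg2rad(convergence_deg) / 2.0
    # Station positions symmetric about the target at the origin.
    s1 = dist * np.array([-np.sin(half), -np.cos(half)])
    s2 = dist * np.array([np.sin(half), -np.cos(half)])
    pts = []
    for _ in range(trials):
        a1 = half + rng.normal(0.0, np.deg2rad(sigma_deg))
        a2 = -half + rng.normal(0.0, np.deg2rad(sigma_deg))
        d1 = np.array([np.sin(a1), np.cos(a1)])
        d2 = np.array([np.sin(a2), np.cos(a2)])
        # Solve s1 + t1*d1 = s2 + t2*d2 for the intersection point.
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, s2 - s1)
        pts.append(s1 + t[0] * d1)
    return float(np.std(np.asarray(pts), axis=0).max())

wide = intersection_spread(90.0)    # strong geometry
narrow = intersection_spread(20.0)  # weak geometry
```

The 90° configuration yields a markedly tighter intersection than the 20° one, consistent with the convergence-angle finding above.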
Procedure for the Orientation of Laser Triangulation Sensors to a Stereo Camera System for the Inline Measurement of Rubber Extrudate
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-1-2024
Simon Albers, R. Rofallski, Paul-Felix Hagen, T. Luhmann
Abstract. Rubber production is a labour-intensive process. To reduce the number of workers needed and the waste of material, the level of digitalisation should be increased. One part of the production process is extrusion, which produces gaskets and similar objects. Automated observation of the continuous rubber extrudate enables early intervention in the production process. In addition to chemical monitoring, geometrical observation of the extrudate is an important aspect of quality control. For this purpose, we use laser triangulation sensors (LTS) at the beginning and the end of the cooling phase that follows extrusion. The LTS acquire two-dimensional profiles at a constant frequency. To combine these profiles into a three-dimensional model of the extrudate, the movement of the extrudate has to be tracked. Since the extrudate is moved over a conveyor belt, the belt can be tracked by a stereo camera system to deduce the movement of the extrudate. For correct use of the tracking, the orientation between the LTS and the stereo camera system needs to be known. A calibration object that accounts for the different data from the LTS and the camera system was developed to determine this orientation. Afterwards, the orientation can be used to combine arbitrary profiles. The measurement setup, consisting of the LTS, the stereo camera system, and the conveyor belt, is explained. The development of the calibration object, the algorithm for evaluating the orientation data, and the combination of the LTS profiles are described. Finally, experiments with real extrusion data are presented to validate the results and compare three variants of data evaluation: two use the calculated orientation but differ in their tracking approach, and one requires no orientation at all.
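Once the belt movement is known from tracking, combining the 2D profiles into a 3D point cloud amounts to offsetting each profile along the transport direction by its tracked displacement. A minimal sketch (variable names are hypothetical; the actual fusion in the paper involves the full LTS-to-camera orientation):

```python
import numpy as np

def profiles_to_cloud(profiles, displacements):
    """Stack 2D LTS profiles into a 3D point cloud.

    profiles: sequence of (N, 2) arrays with (x, z) points in the sensor plane.
    displacements: tracked belt travel (mm) at each profile's timestamp,
    applied along the transport axis y.
    """
    cloud = []
    for prof, y in zip(profiles, displacements):
        prof = np.asarray(prof, float)
        ys = np.full((prof.shape[0], 1), float(y))
        # Insert the tracked y coordinate between sensor x and height z.
        cloud.append(np.hstack([prof[:, :1], ys, prof[:, 1:]]))
    return np.vstack(cloud)

# Two profiles of the same cross-section, 5 mm of belt travel apart.
p = np.array([[0.0, 10.0], [1.0, 12.0], [2.0, 10.0]])
cloud = profiles_to_cloud([p, p], [0.0, 5.0])
```

Each profile keeps its cross-sectional shape; only its position along the transport axis changes, so a constant cross-section yields a prism-like cloud.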
Cited by: 0
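The profile combination described in the abstract above amounts to assigning each 2D laser-triangulation profile the along-track displacement observed by the tracking system at its acquisition time, then stacking the resulting 3D points. A minimal sketch of that step, assuming the paper's coordinate conventions; the function name and data layout are illustrative, not taken from the paper:

```python
import numpy as np

def combine_profiles(profiles, displacements):
    """Stack 2D laser-triangulation profiles into a 3D point cloud.

    profiles: list of (N_i, 2) arrays holding (y, z) coordinates in the
        sensor plane for each acquired profile.
    displacements: along-track position x_k of the extrudate at each
        profile's acquisition time, e.g. deduced from tracking the
        conveyor belt with a stereo camera system.
    Returns an (M, 3) array of (x, y, z) points.
    """
    clouds = []
    for profile, x in zip(profiles, displacements):
        n = profile.shape[0]
        # Prepend the shared along-track coordinate to every profile point.
        xyz = np.column_stack([np.full(n, x), profile])
        clouds.append(xyz)
    return np.vstack(clouds)
```

In practice the displacements would come from the stereo tracking after applying the LTS-to-camera orientation determined with the calibration object; here they are simply given as numbers.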
MEGA Vision: Integrating Reef Photogrammetry Data into Immersive Mixed Reality Experiences
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-409-2024
Alex Spengler, K. Pascoe, C. Kapono, Haunani H. Kane, John Burns
Abstract. Coral reefs and submerged cultural heritage sites are integral to supporting marine biodiversity, preserving human history, providing ecosystem services, and understanding drivers of ecosystem health and function. Despite the importance of these submerged underwater habitats, accessibility to these environments remains limited to specialized professionals. The MEGA Vision mixed reality application integrates photogrammetry-derived data products with augmented reality (AR) technologies to transcend this barrier, offering an immersive and educational platform for the broader public. Using high-resolution imagery from SCUBA expeditions, the app presents users with realistic and spatially accurate 3D reconstructions of coral reefs and submerged archaeological artifacts within an interactive interface developed through Unity and Vuforia. The application's instructional design includes multimedia elements for enhancing user comprehension of marine and historical sciences. This mixed reality tool exemplifies the convergence of scientific data visualization and public engagement, offering a unique educational tool that demystifies the complexities of marine ecosystems and maritime history, thereby fostering a deeper appreciation and stewardship of underwater environments. By enabling accessible, interactive, and immersive experiences, the application has the potential to revolutionize the way we interact with and contribute to marine sciences, aligning technology with conservation and research efforts to cultivate a more informed and environmentally conscious public.
Cited by: 0
Towards Estimation of 3D Poses and Shapes of Animals from Oblique Drone Imagery
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-379-2024
Vandita Shukla, Luca Morelli, F. Remondino, Andrea Micheli, D. Tuia, Benjamin Risse
Abstract. Wildlife research in both terrestrial and aquatic ecosystems now deploys drone technology for tasks such as monitoring, census counts and habitat analysis. Unlike camera traps, drones offer real-time flexibility for adaptable flight paths and camera views, making them ideal for capturing multi-view data on wildlife such as zebras or lions. With recent advancements in animal 3D shape and pose estimation, there is increasing interest in bringing 3D analysis from ground to sky by means of drones. The paper reports activities of the EU-funded WildDrone project and performs, for the first time, 3D analyses of animals using oblique drone imagery. Using parametric model fitting, we estimate the 3D shape and pose of animals from frames of a monocular RGB video. With the goal of appending metric information to parametric animal models using photogrammetric evidence, we propose a pipeline in which a point cloud reconstruction of the scene is used to scale and localize the animal within the 3D scene. Challenges, planned next steps and future directions are also reported.
Cited by: 0
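The metric step in the abstract above — using a photogrammetric point cloud to scale and localize the animal — can be illustrated with a small numeric sketch: a structure-from-motion reconstruction is in arbitrary units, so one known metric distance (for instance between two camera centres, from the drone's GNSS) fixes the scale, after which the animal can be localized from its segmented points. This is a generic illustration of that idea, not the paper's actual pipeline; all names are assumptions:

```python
import numpy as np

def metric_scale_and_localize(points, animal_mask, recon_dist, true_dist_m):
    """Scale an arbitrarily-scaled reconstruction to metres and localize the animal.

    points: (N, 3) reconstructed points in arbitrary SfM units.
    animal_mask: boolean (N,) array selecting the points on the animal.
    recon_dist: a distance measured in reconstruction units between two
        reference points (e.g. two camera centres).
    true_dist_m: the same distance in metres (e.g. from drone GNSS).
    Returns the metrically scaled cloud and the animal's centroid.
    """
    s = true_dist_m / recon_dist                    # metric scale factor
    points_m = points * s                           # cloud now in metres
    centroid = points_m[animal_mask].mean(axis=0)   # animal location in the scene
    return points_m, centroid
```

The same scale factor would then carry metric size information over to a fitted parametric animal model.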
An Archival Framework for Sharing of Cultural Heritage 3D Survey Data: OpenHeritage3D.org
Pub Date : 2024-06-11 DOI: 10.5194/isprs-archives-xlviii-2-2024-241-2024
Scott McAvoy, B. Tanduo, A. Spreafico, F. Chiabrando, D. Rissolo, J. Ristevski, F. Kuester
Abstract. Photogrammetry and LiDAR have become increasingly accessible methods for documentation of Cultural Heritage sites. Academic and government agencies recognize the utility of high-resolution 3D models supporting long-term asset management through visualization, conservation planning, and change detection. Though detailed models can be created with increasing ease, their potential for future use can be constrained by a lack of accompanying topographic data, by data collector skill level, and by incomplete recording of the key metadata and paradata that make such survey data useful to future endeavors. In this paper, informed by various international survey organizations and data archives, we present a framework for recording and communicating Cultural Heritage - focusing on architectures documented by 3D metric survey - that first describes the data and metadata surveyors should include to enable data reuse, and then communicates the expected utility of this data.
Cited by: 0
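The point of the abstract above — that 3D survey data stays reusable only if its metadata and paradata travel with it — can be made concrete as a minimal machine-readable record. The field names below are illustrative assumptions, not the actual OpenHeritage3D schema:

```python
import json

# Illustrative survey record; the field names are examples of the kinds of
# metadata (what/where/how captured) and paradata (who/with what/with which
# known gaps) an archive might require, NOT the OpenHeritage3D schema.
survey_record = {
    "project": "Example Chapel Survey",
    "metadata": {
        "acquisition_date": "2024-05-02",
        "sensor": "terrestrial laser scanner",
        "crs": "EPSG:32632",        # coordinate reference system of the data
        "units": "m",
        "resolution_mm": 5,
    },
    "paradata": {
        "operator": "survey team A",
        "processing_software": "example SfM package",
        "registration_rmse_m": 0.004,
        "known_gaps": ["roof not captured"],
    },
    "license": "CC BY 4.0",
}

# Serialize for deposit alongside the point cloud or mesh.
serialized = json.dumps(survey_record, indent=2)
```

A record like this lets a future user judge fitness for purpose (accuracy, coverage, licensing) without re-contacting the original surveyors, which is exactly the gap the paper's framework targets.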