
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science: Latest Publications

Design, Implementation, and Evaluation of an External Pose-Tracking System for Underwater Cameras
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-10-16 · DOI: 10.1007/s41064-023-00263-x
Birger Winkel, David Nakath, Felix Woelk, Kevin Köser
Abstract To advance underwater computer vision and robotics from lab environments and clear water scenarios to the deep dark ocean or murky coastal waters, representative benchmarks and realistic datasets with ground truth information are required. In particular, determining the camera pose is essential for many underwater robotic or photogrammetric applications, and known ground truth is mandatory to evaluate the performance of, e.g., simultaneous localization and mapping approaches in such extreme environments. This paper presents the conception, calibration, and implementation of an external reference system for determining the underwater camera pose in real time. The approach, based on an HTC Vive tracking system in air, calculates the underwater camera pose by fusing the poses of two controllers tracked above the water surface of a tank. It is shown that the mean deviation of this approach from an optical marker-based reference in air is less than 3 mm and 0.3°. Finally, the usability of the system for underwater applications is demonstrated.
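The fusion step described in the abstract — combining two above-water controller poses into one underwater camera pose — can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the midpoint/halfway-rotation fusion rule, the function name, and the fixed rig-to-camera offset parameters are all assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_controller_poses(p1, q1, p2, q2, t_rig_cam, q_rig_cam):
    """Fuse two tracked controller poses into one rig pose, then apply a
    fixed, pre-calibrated rig-to-camera transform.

    p1, p2: controller positions (3-vectors); q1, q2: quaternions (x, y, z, w).
    t_rig_cam, q_rig_cam: calibrated offset from the rig frame to the camera.
    """
    # Rig position: midpoint of the two controllers.
    p_rig = 0.5 * (np.asarray(p1, float) + np.asarray(p2, float))
    # Rig orientation: halfway rotation between the controllers (geodesic SLERP at t=0.5).
    r1, r2 = R.from_quat(q1), R.from_quat(q2)
    r_rig = r1 * R.from_rotvec(0.5 * (r1.inv() * r2).as_rotvec())
    # Chain the fixed rig->camera offset measured during calibration.
    r_cam = r_rig * R.from_quat(q_rig_cam)
    p_cam = p_rig + r_rig.apply(t_rig_cam)
    return p_cam, r_cam.as_quat()
```

With identity orientations and a camera mounted 1 m below the rig midpoint, the fused camera pose is simply the midpoint shifted down by the offset.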
Citations: 0
Editorial for PFG Issue 5/2023
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-10-13 · DOI: 10.1007/s41064-023-00262-y
Markus Gerke, Michael Cramer
Citations: 0
MIN3D Dataset: MultI-seNsor 3D Mapping with an Unmanned Ground Vehicle
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-10-06 · DOI: 10.1007/s41064-023-00260-0
Paweł Trybała, Jarosław Szrek, Fabio Remondino, Paulina Kujawa, Jacek Wodecki, Jan Blachowski, Radosław Zimroz
Abstract The research potential in the field of mobile mapping technologies is often hindered by several constraints. These include the need for costly hardware to collect data, limited access to target sites with specific environmental conditions or the collection of ground truth data for a quantitative evaluation of the developed solutions. To address these challenges, the research community has often prepared open datasets suitable for developments and testing. However, the availability of datasets that encompass truly demanding mixed indoor–outdoor and subterranean conditions, acquired with diverse but synchronized sensors, is currently limited. To alleviate this issue, we propose the MIN3D dataset (MultI-seNsor 3D mapping with an unmanned ground vehicle for mining applications) which includes data gathered using a wheeled mobile robot in two distinct locations: (i) textureless dark corridors and outside parts of a university campus and (ii) tunnels of an underground WW2 site in Walim (Poland). MIN3D comprises around 150 GB of raw data, including images captured by multiple co-calibrated monocular, stereo and thermal cameras, two LiDAR sensors and three inertial measurement units. Reliable ground truth (GT) point clouds were collected using a survey-grade terrestrial laser scanner. By openly sharing this dataset, we aim to support the efforts of the scientific community in developing robust methods for navigation and mapping in challenging underground conditions. In the paper, we describe the collected data and provide an initial accuracy assessment of some visual- and LiDAR-based simultaneous localization and mapping (SLAM) algorithms for selected sequences. Encountered problems, open research questions and areas that could benefit from utilizing our dataset are discussed. Data are available at https://3dom.fbk.eu/benchmarks .
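The "initial accuracy assessment" of SLAM trajectories against survey-grade ground truth typically reports an absolute trajectory error (ATE) after a least-squares rigid alignment. A minimal sketch of such a metric (standard Kabsch/Umeyama alignment on corresponding positions; this is illustrative, not code from the dataset paper):

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) after least-squares rigid alignment
    of estimated positions to ground truth.
    est, gt: (N, 3) arrays of corresponding trajectory positions."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g                 # centered point sets
    H = E.T @ G                                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    Rm = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation E -> G
    aligned = (Rm @ E.T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

By construction, an estimate that differs from ground truth only by a rigid transform scores an ATE of (numerically) zero; any residual error reflects actual drift or distortion.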
Citations: 0
Urban Change Forecasting from Satellite Images
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-10-05 · DOI: 10.1007/s41064-023-00258-8
Nando Metzger, Mehmet Özgür Türkoglu, Rodrigo Caye Daudt, Jan Dirk Wegner, Konrad Schindler
Abstract Forecasting where and when new buildings will emerge is a rather unexplored topic, but one that is very useful in many disciplines such as urban planning, agriculture, resource management, and even autonomous flying. In the present work, we present a method that accomplishes this task with a deep neural network and a custom pretraining procedure. In Stage 1, a U-Net backbone is pretrained within a Siamese network architecture that aims to solve a (building) change detection task. In Stage 2, the backbone is repurposed to forecast the emergence of new buildings based solely on one image acquired before its construction. Furthermore, we also present a model that forecasts the time range within which the change will occur. We validate our approach using the SpaceNet7 dataset, which covers an area of 960 km² at 24 points in time across 2 years. In our experiments, we found that our proposed pretraining method consistently outperforms the traditional pretraining using the ImageNet dataset. We also show that it is to some degree possible to predict in advance when building changes will occur.
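The Stage-1 change-detection targets and the Stage-2 "new building" forecasting targets can both be derived from binary building masks at two dates. A minimal sketch (the mask-based label derivation and function names are assumptions about the setup, not code from the paper):

```python
import numpy as np

def change_targets(mask_t1, mask_t2):
    """Derive per-pixel training targets from two binary building masks
    (1 = building) at consecutive dates.
    Returns (changed, emerged):
      changed - any change between the dates (Stage 1, change detection)
      emerged - pixels that became building (Stage 2, forecasting target)
    """
    m1 = np.asarray(mask_t1, bool)
    m2 = np.asarray(mask_t2, bool)
    changed = m1 ^ m2       # appeared or disappeared
    emerged = m2 & ~m1      # newly constructed only
    return changed.astype(np.uint8), emerged.astype(np.uint8)
```

Note the asymmetry: a demolished building counts as "changed" for Stage 1 but is excluded from the Stage-2 emergence target.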
Citations: 0
Remote Sensing of Turbidity in Optically Shallow Waters Using Sentinel-2 MSI and PRISMA Satellite Data
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-10-04 · DOI: 10.1007/s41064-023-00257-9
Rim Katlane, David Doxaran, Boubaker ElKilani, Chaïma Trabelsi
Citations: 0
Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-09-26 · DOI: 10.1007/s41064-023-00256-w
Maryam Teimouri, Mehdi Mokhtarzade, Nicolas Baghdadi, Christian Heipke
Abstract Convolutional neural networks (CNNs) have shown results superior to most traditional image understanding approaches in many fields, including crop classification from satellite time series images. However, CNNs require a large number of training samples to properly train the network. The process of collecting and labeling such samples using traditional methods can be both time-consuming and costly. To address this issue and improve classification accuracy, generating virtual training labels (VTL) from existing ones is a promising solution. To this end, this study proposes a novel method for generating VTL based on sub-dividing the training samples of each crop using self-organizing maps (SOM), and then assigning labels to a set of unlabeled pixels based on the distance to these sub-classes. We apply the new method to crop classification from Sentinel images. A three-dimensional (3D) CNN is utilized for extracting features from the fusion of optical and radar time series. The results of the evaluation show that the proposed method is effective in generating VTL, as demonstrated by the achieved overall accuracy (OA) of 95.3% and kappa coefficient (KC) of 94.5%, compared to 91.3% and 89.9% for a solution without VTL. The results suggest that the proposed method has the potential to enhance the classification accuracy of crops using VTL.
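The sub-clustering and distance-based label assignment described in the abstract can be sketched with a toy one-dimensional SOM. This is a minimal sketch: the unit count, learning-rate schedule, and rejection threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_som(X, n_units=4, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 1-D self-organizing map; returns (n_units, n_features)
    prototype vectors that sub-divide one crop class."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    grid = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                     # decaying learning rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5)     # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))       # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood weights
            W += lr * h[:, None] * (x - W)
    return W

def assign_virtual_labels(unlabeled, prototypes_per_class, max_dist):
    """Give each unlabeled pixel the class of its nearest sub-class
    prototype, if that distance is below a threshold; else -1 (rejected)."""
    labels = []
    for x in unlabeled:
        best_c, best_d = -1, np.inf
        for c, W in prototypes_per_class.items():
            d = np.linalg.norm(W - x, axis=1).min()
            if d < best_d:
                best_c, best_d = c, d
        labels.append(best_c if best_d <= max_dist else -1)
    return np.array(labels)
```

The rejection threshold keeps ambiguous pixels out of the virtual label set, trading label quantity for label purity.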
Citations: 0
Assessing the Physical and Chemical Characteristics of Marine Mucilage Utilizing In-Situ and Remote Sensing Data (Sentinel-1, -2, -3)
CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-09-19 · DOI: 10.1007/s41064-023-00254-y
Umut Gunes Sefercik, Ismail Colkesen, Taskin Kavzoglu, Nizamettin Ozdogan, Muhammed Yusuf Ozturk
Citations: 0
Comparative Analysis of Multispectral and Hyperspectral Imagery for Mapping Sugarcane Varieties
IF 4.1 · CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-09-06 · DOI: 10.1007/s41064-023-00255-x
A. Sedighi, S. Hamzeh, M. K. Firozjaei, Hamid Valipoori Goodarzi, A. Naseri
Citations: 0
Guiding Deep Learning with Expert Knowledge for Dense Stereo Matching
IF 4.1 · CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-07-28 · DOI: 10.1007/s41064-023-00252-0
Waseem Iqbal, J. Paffenholz, M. Mehltretter
Citations: 1
Crowd-aware Thresholded Loss for Object Detection in Wide Area Motion Imagery
IF 4.1 · CAS Q4 (Earth Science) · Q1 Social Sciences · Pub Date: 2023-07-24 · DOI: 10.1007/s41064-023-00253-z
P. U. Hatipoglu, C. Iyigun, Sinan Kalkan
Citations: 0