
The Photogrammetric Record: Latest Publications

ISPRS ICWG IV/III, WG IV/11: Workshop on 3D digital modelling for SDGs
Pub Date: 2024-06-01 DOI: 10.1111/phor.1_12501
Citations: 0
31st international conference on Geoinformatics
Pub Date: 2024-06-01 DOI: 10.1111/phor.3_12501
{"title":"31st international conference on Geoinformatics","authors":"","doi":"10.1111/phor.3_12501","DOIUrl":"https://doi.org/10.1111/phor.3_12501","url":null,"abstract":"","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"124 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141402507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Innsbruck Summer School of alpine research—close range sensing techniques in alpine terrain
Pub Date: 2024-06-01 DOI: 10.1111/phor.4_12501
{"title":"Innsbruck Summer School of alpine research—close range sensing techniques in alpine terrain","authors":"","doi":"10.1111/phor.4_12501","DOIUrl":"https://doi.org/10.1111/phor.4_12501","url":null,"abstract":"","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"84 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141408879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
International workshop on ‘Photogrammetric Data Analysis’
Pub Date: 2024-06-01 DOI: 10.1111/phor.5_12501
{"title":"International workshop on ‘Photogrammetric Data Analysis’","authors":"","doi":"10.1111/phor.5_12501","DOIUrl":"https://doi.org/10.1111/phor.5_12501","url":null,"abstract":"","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"13 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141409220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ISPRS Geospatial Week 2025: Photogrammetry & remote sensing for a better tomorrow
Pub Date: 2024-06-01 DOI: 10.1111/phor.7_12501
{"title":"ISPRS Geospatial Week 2025: Photogrammetry & remote sensing for a better tomorrow","authors":"","doi":"10.1111/phor.7_12501","DOIUrl":"https://doi.org/10.1111/phor.7_12501","url":null,"abstract":"","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141402423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Indoor hierarchy relation graph construction method based on RGB‐D
Pub Date: 2024-05-25 DOI: 10.1111/phor.12499
Jianwu Jiang, Zhizhong Kang, Jingwen Li
Fine‐grained indoor navigation services require obstacle‐level indoor maps, but indoor spatial layouts change frequently with human activity and indoor scenes are easily affected by lighting and occlusion, so the vast majority of indoor maps remain at room level, which limits obstacle‐level indoor navigation path planning. To solve this problem, this paper proposes a hierarchy relation graph (HRG) construction method based on RGB‐D data. First, semantic information about indoor scenes and elements is extracted with output‐transformed PSPNet and YOLO V8 models, and the bounding box of each element is obtained from YOLO V8. An algorithm for determining the hierarchical relationships of indoor elements is then proposed, which calculates the correlation between two elements in both the plane and depth dimensions and constructs an HRG of indoor elements based on directed trees. Finally, comparative experiments are designed to validate the proposed method. Experiments showed that the method can construct HRGs in a variety of scenes: the hierarchy relation detection rate is 88.28%, the accuracy of hierarchy relation determination is 73.44%, and a single‐scene HRG can be generated in 3.81 s.
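The relation test at the heart of this pipeline (a plane‐dimension check plus a depth‐dimension check, with the resulting edges assembled into a directed tree) can be made concrete with a small sketch. The containment ratio and depth threshold below are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of a plane-and-depth hierarchy test between two detected
# indoor elements. Thresholds are hypothetical, not the paper's.
from dataclasses import dataclass

@dataclass
class Element:
    box: tuple   # (x1, y1, x2, y2) image-plane bounding box from the detector
    depth: float # mean depth of the element's pixels, in metres

def plane_containment(parent: Element, child: Element) -> float:
    """Fraction of the child's box area that lies inside the parent's box."""
    px1, py1, px2, py2 = parent.box
    cx1, cy1, cx2, cy2 = child.box
    ix1, iy1 = max(px1, cx1), max(py1, cy1)
    ix2, iy2 = min(px2, cx2), min(py2, cy2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    child_area = max(1e-9, (cx2 - cx1) * (cy2 - cy1))
    return inter / child_area

def is_parent(parent: Element, child: Element,
              min_containment: float = 0.8,       # assumed threshold
              max_depth_gap: float = 0.5) -> bool:  # assumed, metres
    """Declare a parent->child edge when the child sits mostly inside the
    parent in the image plane and the two are close along the depth axis."""
    return (plane_containment(parent, child) >= min_containment
            and abs(parent.depth - child.depth) <= max_depth_gap)

# Edges collected this way can be assembled into a directed tree, rooted at
# the scene node, to form the hierarchy relation graph.
desk = Element(box=(100, 300, 500, 600), depth=2.1)
cup = Element(box=(220, 340, 260, 380), depth=2.0)
print(is_parent(desk, cup))  # True under the assumed thresholds
```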
Citations: 0
A hierarchical occupancy network with multi‐height attention for vision‐centric 3D occupancy prediction
Pub Date: 2024-05-18 DOI: 10.1111/phor.12500
Can Li, Zhi Gao, Zhipeng Lin, Tonghui Ye, Ziyao Li
Precise geometric representation and the ability to handle long‐tail targets have drawn increasing attention to vision‐centric 3D occupancy prediction, which models the real world as a voxel‐wise representation solely from visual inputs. Despite some notable achievements in this field, many prior or concurrent approaches simply adopt existing spatial cross‐attention (SCA) as their 2D–3D transformation module, which may lead to information coupling or compromise the global receptive field along the height dimension. To overcome these limitations, we propose a hierarchical occupancy (HierOcc) network featuring our innovative height‐aware cross‐attention (HACA) and hierarchical self‐attention (HSA) as its core modules to achieve enhanced precision and completeness in 3D occupancy prediction. The former module performs the 2D–3D transformation, while the latter promotes communication among voxels. The key insight behind both modules is our multi‐height attention mechanism, which ensures that each attention head corresponds explicitly to a specific height, thereby decoupling height information while maintaining global attention across the height dimension. Extensive experiments show that our method brings significant improvements over the baseline and surpasses all concurrent methods, demonstrating its superiority.
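The decoupling idea (one attention head per discrete height level, each still attending over all image tokens) can be sketched in a few lines of PyTorch. This is a minimal illustration of the stated mechanism under our own layout assumptions (per‐height query projections, pillar queries on the ground plane, flattened image tokens), not the authors' HierOcc code.

```python
# Minimal sketch of multi-height attention: each height level gets its own
# query, so height information is decoupled while every height level still
# attends globally over the image tokens. Names are illustrative.
import torch
import torch.nn as nn

class MultiHeightAttention(nn.Module):
    def __init__(self, dim: int, num_heights: int):
        super().__init__()
        self.num_heights = num_heights
        self.q = nn.Linear(dim, dim * num_heights)  # one query projection per height
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, bev_queries, img_tokens):
        # bev_queries: (B, N_bev, C) pillar queries on the ground plane
        # img_tokens:  (B, N_img, C) flattened multi-view image features
        B, N, C = bev_queries.shape
        # (B, N_bev, Z, C): a distinct query per height level Z
        q = self.q(bev_queries).view(B, N, self.num_heights, C)
        k, v = self.k(img_tokens), self.v(img_tokens)
        attn = torch.einsum('bnzc,bmc->bnzm', q, k) / C ** 0.5
        attn = attn.softmax(dim=-1)              # each height attends globally
        voxel = torch.einsum('bnzm,bmc->bnzc', attn, v)
        return voxel                             # (B, N_bev, Z, C) voxel features

feats = MultiHeightAttention(dim=64, num_heights=8)(
    torch.randn(2, 200, 64), torch.randn(2, 900, 64))
print(feats.shape)  # torch.Size([2, 200, 8, 64])
```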
Citations: 0
Cross‐attention neural network for land cover change detection with remote sensing images
Pub Date: 2024-05-15 DOI: 10.1111/phor.12492
Zhiyong Lv, Pingdong Zhong, Wei Wang, Weiwei Sun, Tao Lei, Falco Nicola
Land cover change detection (LCCD) with remote sensing images (RSIs) is important for observing land cover change on the Earth's surface. Considering that the traditional self‐attention mechanism used in a neural network performs insufficiently when smoothing the noise of LCCD with RSIs, this study proposes a novel cross‐attention neural network (CANN) to improve LCCD performance with RSIs. In the proposed CANN, a cross‐attention mechanism is achieved by employing the other temporal image to enhance attention performance and improve detection accuracy. First, a feature difference module is embedded in the backbone of the proposed CANN to generate a change magnitude image and guide the learning process. A self‐attention module based on the cross‐attention mechanism is then proposed and embedded in the encoder of the network so that it attends to the changed areas. Finally, the encoded features are decoded into a binary change map with the ArgMax function. Compared with five existing methods on six pairs of real RSIs, the experimental results demonstrate the feasibility and superiority of the proposed network for LCCD with RSIs; for example, the proposed approach improves overall accuracy by about 0.72–2.56%.
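A minimal sketch of how such temporal cross‐attention and a feature difference cue might fit together in PyTorch follows; the module layout and the two‐class head are our illustrative assumptions, not the paper's exact CANN.

```python
# Sketch of temporal cross-attention for change detection: tokens from one
# date query the co-registered tokens of the other date, the absolute
# feature difference is fused in, and ArgMax yields a binary change map.
import torch
import torch.nn as nn

class TemporalCrossAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)  # change / no-change logits

    def forward(self, feat_t1, feat_t2):
        # feat_t1, feat_t2: (B, N, C) tokens from the two temporal images
        diff = torch.abs(feat_t1 - feat_t2)           # feature difference cue
        attended, _ = self.attn(query=feat_t1, key=feat_t2, value=feat_t2)
        logits = self.head(attended + diff)           # fuse attention and difference
        return logits.argmax(dim=-1)                  # binary map via ArgMax

out = TemporalCrossAttention(dim=32)(torch.randn(1, 64, 32), torch.randn(1, 64, 32))
print(out.shape)  # torch.Size([1, 64])
```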
Citations: 0
3D LiDAR SLAM: A survey
Pub Date: 2024-05-13 DOI: 10.1111/phor.12497
Yongjun Zhang, Pengcheng Shi, Jiayuan Li
Simultaneous localization and mapping (SLAM) is a challenging yet fundamental problem in robotics and photogrammetry, and a prerequisite for the intelligent perception of unmanned systems. In recent years, 3D LiDAR SLAM technology has made remarkable progress. However, to the best of our knowledge, almost all existing surveys focus on visual SLAM methods. To bridge the gap, this paper provides a comprehensive review that summarizes the scientific scope, key difficulties, research status, and future trends of 3D LiDAR SLAM, aiming to give readers a better understanding of LiDAR SLAM technology and thereby inspire future research. Specifically, it summarizes the contents and characteristics of the main steps of LiDAR SLAM, introduces the key difficulties the field faces, and relates this review to existing ones; it surveys current research hotspots, including LiDAR‐only methods and multi‐sensor fusion methods, and lists milestone algorithms and open‐source tools in each category; it summarizes common datasets, evaluation metrics and representative commercial SLAM solutions, and reports the evaluation results of mainstream methods on public datasets; and it looks ahead to the development of LiDAR SLAM, considering preliminary ideas for multi‐modal SLAM, event SLAM, and quantum SLAM.
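The survey itself is narrative, but the scan‐registration step at the core of the LiDAR‐only pipelines it covers is easy to illustrate. Below is a bare point‐to‐point ICP iteration (nearest neighbours plus a Kabsch alignment) in numpy; this is a generic textbook sketch, not a method taken from the survey.

```python
# Bare point-to-point ICP: match each source point to its nearest target
# point, then solve the optimal rigid transform via SVD (Kabsch).
import numpy as np

def icp_step(src: np.ndarray, dst: np.ndarray):
    """One ICP iteration: returns R (3x3) and t (3,) aligning src to dst."""
    # brute-force nearest neighbour in dst for every src point
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Kabsch: optimal rotation/translation between matched point sets
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iterate until the accumulated pose aligns src onto dst."""
    pose_R, pose_t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
        pose_R, pose_t = R @ pose_R, R @ pose_t + t
    return pose_R, pose_t

pts = np.random.rand(200, 3)
dst = pts + np.array([0.05, -0.02, 0.03])   # small known offset
R_est, t_est = icp(pts, dst)
print(np.round(t_est, 3))                   # close to the known offset
```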
Citations: 0
Hyperspectral image classification based on superpixel merging and broad learning system
Pub Date: 2024-05-09 DOI: 10.1111/phor.12493
Fuding Xie, Rui Wang, Cui Jin, Geng Wang
Most spectral–spatial classification methods for hyperspectral images (HSIs) can achieve satisfactory classification results. However, the common problem these approaches face is the need for long training times and sufficient training samples. To address this issue, this study proposes an effective spectral–spatial HSI classification method based on superpixel merging, superpixel smoothing and a broad learning system (SMS‐BLS). The newly introduced parameter‐free superpixel merging technique based on local modularity not only enhances the role of local spatial information in classification but also preserves class boundary information as much as possible. In addition, the spectral and spatial information of HSIs is further fused during superpixel smoothing. As a result, with limited training samples, using merged and smoothed superpixels instead of pixels as input to the broad learning system significantly improves its classification performance. Moreover, the merged superpixels weaken the dependence of the classification results on the superpixel segmentation scale. The effectiveness of the proposed method was validated on three HSI benchmarks: Indian Pines, Pavia University and Salinas. Experimental and comparative results show that the method outperforms other state‐of‐the‐art approaches in terms of overall accuracy and running time.
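The superpixel smoothing step described above amounts to replacing each pixel's spectrum with the mean spectrum of its (merged) superpixel before classification. A minimal numpy sketch follows, assuming the segmentation labels are already given (e.g., from any superpixel algorithm after merging); it is not the paper's exact pipeline.

```python
# Superpixel smoothing sketch: every pixel's spectrum becomes the mean
# spectrum of its superpixel, fusing spectral and spatial information.
import numpy as np

def superpixel_smooth(cube: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """cube: (H, W, B) hyperspectral image; labels: (H, W) superpixel ids."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    ids = labels.reshape(-1)
    smoothed = np.empty_like(flat)
    for sp in np.unique(ids):
        mask = ids == sp
        smoothed[mask] = flat[mask].mean(axis=0)  # one mean spectrum per region
    return smoothed.reshape(H, W, B)

# Feeding these region-mean spectra to the classifier, instead of raw
# pixels, is what shrinks the training burden with limited samples.
cube = np.random.rand(4, 4, 10)
labels = np.array([[0, 0, 1, 1]] * 2 + [[2, 2, 3, 3]] * 2)
print(superpixel_smooth(cube, labels).shape)  # (4, 4, 10)
```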
Citations: 0