
Latest Publications in IET Image Processing

ACT-Agent: Affinity-Cross Transformer for Point Cloud Registration via Reinforcement Learning
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1049/ipr2.70283
Fengguang Xiong, Haixin Gong, Qiao Ma, Yingbo Jia, Ruize Guo, Yu Cao, Ligang He, Liqun Kuang, Xie Han

Point cloud registration, a core task in 3D computer vision for aligning two point clouds via rotation and translation, underpins critical applications like robotic navigation and 3D reconstruction. Classical methods (e.g., Iterative Closest Point) easily converge to local minima under poor initial alignment. Deep learning–based approaches, while efficient, suffer from high annotation costs for large-scale data. Existing reinforcement learning (RL)-based methods rely on simple PointNet feature extractors, which are insensitive to local geometric details and thus yield suboptimal registration precision. To address these challenges, we propose ACT-Agent: an Affinity-Cross Transformer for point cloud registration via reinforcement learning, a novel method that formulates point cloud registration as an RL Markov decision process for iterative optimisation. We leverage PointNet and an Affinity-Cross Transformer to extract and enhance expressive salient features, assigning adaptive weights to channels based on their relative importance. We use RL to learn autonomously from environmental feedback, removing the dependence on data annotation. Experimental results on ModelNet40 (synthetic data) and ScanObjectNN (real-world data) demonstrate that our proposed ACT-Agent achieves higher accuracy, efficiency, and generalisation ability than state-of-the-art point cloud registration methods.
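
As a rough illustration of the MDP framing described above, the sketch below implements one environment step in which the agent's action is a small rigid motion (here just a z-axis rotation increment) and the reward is the reduction in Chamfer distance. The action space and reward shaping are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def chamfer(src, tgt):
    # Symmetric nearest-neighbour (Chamfer) distance between two (N, 3) clouds.
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def rot_z(deg):
    # Rotation matrix for a small z-axis rotation (one example action).
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def step(src, tgt, action_deg):
    """One MDP step: apply the rigid-motion action to the source cloud;
    reward = reduction in alignment error (hypothetical reward shaping)."""
    before = chamfer(src, tgt)
    src_new = src @ rot_z(action_deg).T
    return src_new, before - chamfer(src_new, tgt)  # next state, reward

# src, tgt: (N, 3) arrays; a learned policy would pick the action from the state.
```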

Citations: 0
Residual Attention Smoothing Mixup Network for Efficient Oil Country Tubular Goods Defect Classification
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1049/ipr2.70278
Lijuan Zhu, Chun Feng, Peng Wang, Xiaoyu Dou, Hao Chang, Lu Li

Image classification is a fundamental task in computer vision, with deep learning significantly improving its accuracy. However, the accurate classification of defect types in industrial imaging, such as for oil country tubular goods (OCTGs), remains a challenge, particularly when dealing with limited datasets. This paper addresses the classification of four distinct damage types in OCTG images under small sample conditions using the residual attention smoothing mixup network (RASMN) model. Our approach integrates a residual attention network for efficient feature extraction, label smoothing to mitigate overfitting, and mixup data augmentation for enhanced model robustness. Experimental results demonstrate that RASMN significantly improves classification accuracy, achieving a Top-1 error rate of 7.6%. This represents a substantial improvement, cutting the error of our baseline residual attention network (15.5%) by more than half and outperforming widely used architectures like ResNet18 (16.4%) on this specific task. The significance of these results lies in providing a validated, high-performance model for a challenging industrial classification task with limited data, balancing high accuracy with an efficient inference time of 3.94 ms. This study offers an effective deep learning solution for classifying tube defect images, highlighting the efficacy of combining residual attention networks with regularization strategies.
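
The two regularisers named in the abstract are standard and easy to reproduce. Below is a minimal PyTorch sketch assuming a generic classifier; the hyperparameters alpha and eps are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    """Mixup: convex-combine random pairs of images; the loss is mixed the same way."""
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def smoothed_ce(logits, target, eps=0.1):
    """Cross-entropy with label smoothing: weight (1 - eps) on the true class,
    eps spread uniformly over all classes."""
    logp = F.log_softmax(logits, dim=-1)
    nll = -logp.gather(1, target.unsqueeze(1)).squeeze(1)
    return ((1 - eps) * nll - eps * logp.mean(dim=-1)).mean()

# One training step (model is any classifier, e.g., a residual attention network):
# xm, ya, yb, lam = mixup_batch(images, labels)
# logits = model(xm)
# loss = lam * smoothed_ce(logits, ya) + (1 - lam) * smoothed_ce(logits, yb)
```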

Citations: 0
Localization With Approximate Nearest Neighbour Search
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1049/ipr2.70242
Roland Kotroczó, Dániel Varga, János Márk Szalai-Gindl, Bence Formanek, Péter Vaderna

Localization and place recognition are important tasks in many fields, including autonomous driving, robotics, and AR/VR applications. Local and global feature-based solutions typically rely on exact nearest neighbour search methods, such as KD-tree, to retrieve candidate places or frames and estimate the precise sensor position using point correspondences. However, in large-scale applications, maintaining real-time online processing without loss of performance can be challenging. We propose that by using an approximate nearest neighbour search method instead of exact methods, runtime can be significantly reduced without sacrificing accuracy. To demonstrate this, we developed a localization pipeline based on a keypoint voting mechanism, employing the hierarchical navigable small world (HNSW) structure as the nearest neighbour search method. Graph-based structures like HNSW are widely used in other domains, such as recommender systems and large language models. We argue that for the use case of matching local feature descriptors, the slightly lower accuracy in terms of exact neighbours does not lead to a significant increase in localization error. We evaluated our pipeline on widely known datasets and performed parameter tuning of HNSW specifically for this use case.
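
For the nearest neighbour search itself, an off-the-shelf HNSW implementation such as hnswlib exposes the accuracy/speed knobs implied above (M, ef_construction, ef). The paper does not name a specific library, so treat this as an assumed backend; the descriptor dimension and parameter values are illustrative.

```python
import numpy as np
import hnswlib  # assumed ANN backend exposing an HNSW index

dim, n_db = 128, 100_000
db = np.random.rand(n_db, dim).astype(np.float32)  # placeholder local descriptors

index = hnswlib.Index(space='l2', dim=dim)
index.init_index(max_elements=n_db, M=16, ef_construction=200)  # graph degree / build effort
index.add_items(db, np.arange(n_db))
index.set_ef(64)  # query-time search breadth: higher = closer to exact, slower

query = np.random.rand(1, dim).astype(np.float32)
labels, dists = index.knn_query(query, k=5)  # approximate 5-NN, e.g., for keypoint voting
```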

Citations: 0
Time-Varying 3D Gaussian Splatting Representation for Dynamic Scenes
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1049/ipr2.70280
Yunbiao Liu, Chunyi Chen, Jun Peng, Xiaojuan Hu, Yu Fan

Real-time photorealistic novel view synthesis of dynamic scenes remains a challenging task, primarily due to the inherent complexity of temporal dynamics and motion patterns. Although recent methods based on Gaussian splatting have shown considerable progress in this regard, they are still limited by high memory consumption. In this paper, we propose a time-varying 3D Gaussian splatting (TVGS) representation for dynamic scenes, which incorporates two key components. First, we model the scene using 3D Gaussians endowed with temporal opacity and time-varying motion parameters. These attributes effectively capture transient phenomena such as the sudden appearance or disappearance of dynamic elements. Second, we introduce an adaptive density control mechanism to optimise the distribution of these time-varying 3D Gaussians throughout the sequence. As an explicit dynamic scene representation, TVGS not only achieves high-fidelity view synthesis but also attains a real-time rendering speed of 160 FPS on the Neural 3D Video Dataset using a single RTX 4090 GPU.
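
A minimal sketch of what "temporal opacity and time-varying motion parameters" could look like per splat, assuming a Gaussian window in time and linear motion; the paper does not specify this exact parameterisation.

```python
import numpy as np

class TimeVaryingGaussian:
    """One splat with temporal opacity and simple linear motion (an assumed
    parameterisation; the abstract only states that these attributes exist)."""
    def __init__(self, mu, opacity, t_center, t_scale, velocity):
        self.mu = np.asarray(mu, float)        # mean position at t = t_center
        self.opacity = opacity                 # peak opacity
        self.t_center, self.t_scale = t_center, t_scale
        self.velocity = np.asarray(velocity, float)

    def position(self, t):
        # Time-varying motion: position drifts linearly around t_center.
        return self.mu + self.velocity * (t - self.t_center)

    def effective_opacity(self, t):
        # Gaussian window in time: the splat fades in and out, letting the model
        # represent sudden appearance or disappearance of dynamic content.
        return self.opacity * np.exp(-0.5 * ((t - self.t_center) / self.t_scale) ** 2)
```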

Citations: 0
Multi-Channel Fusion Residual Network for Robust Bone Fracture Classification From Radiographs
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1049/ipr2.70277
Sivapriya T, K. R. Sri Preethaa, Yuvaraj Natarajan, M. Shyamala Devi

Accurate bone fracture classification from radiographs is hindered by low fracture visibility, imaging artefacts and high intra-class similarity. To overcome this, the multi-channel fusion residual network (MFResNet18) is proposed, which integrates a multi-modal channel (MMC) filter with a multi-path early feature extraction scheme to enrich fracture-relevant features before deep inference. The MMC filter transforms each fracture image into five complementary channels: the original image, a Frangi-filtered channel for fracture-line enhancement, a Difference of Gaussian (DoG) edge map, a mid-frequency wavelet decomposition and a bone mask for contextual details. These channels are processed through three parallel shallow CNN paths. Path 1 handles pathological features using the original image and the Frangi channel, path 2 processes edge and frequency features using the DoG and wavelet channels, and path 3 processes anatomical features using the bone mask as an attention channel. The outputs are fused through convolution in a feature fusion layer, which adaptively learns inter-modal features while preserving spatial fidelity. The fused feature map is then propagated through a modified ResNet18 backbone for hierarchical residual learning. Experimental results with the bone fracture dataset demonstrate that MFResNet18 achieves 99.72% classification accuracy, significantly outperforming conventional ResNet18 and other existing models. The integration of MMC filtering, multi-path early specialisation and learnable feature fusion serves as a key novelty of this work, offering a robust, extensible framework for fine-grained bone fracture classification.
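
For reference, the five channels can be approximated with standard library filters; a sketch using scikit-image and PyWavelets, with illustrative filter parameters rather than the paper's settings, and a crude Otsu-threshold bone mask standing in for whatever segmentation the authors use.

```python
import numpy as np
import pywt
from skimage.filters import frangi, difference_of_gaussians, threshold_otsu
from skimage.transform import resize

def mmc_channels(img):
    """Stack five MMC-style channels for one greyscale radiograph (H, W)."""
    img = img.astype(np.float64)
    ridge = frangi(img)                                            # fracture-line enhancement
    dog = difference_of_gaussians(img, low_sigma=1, high_sigma=4)  # edge map
    # Mid-frequency content: level-2 detail coefficients of a 2-level wavelet decomposition.
    _, (cH2, cV2, cD2), _ = pywt.wavedec2(img, 'db2', level=2)
    mid = resize(np.abs(cH2) + np.abs(cV2) + np.abs(cD2), img.shape)
    mask = (img > threshold_otsu(img)).astype(np.float64)          # crude bone mask
    return np.stack([img, ridge, dog, mid, mask], axis=0)          # (5, H, W)
```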

Citations: 0
RoboLoc: A Benchmark Dataset for Point Place Recognition and Localization in Indoor–Outdoor Integrated Environments
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1049/ipr2.70267
Jaejin Jeon, Seonghoon Ryoo, Sang-Duck Lee, Soomok Lee, Seungwoo Jeong

Robust place recognition is essential for reliable localization in robotics, particularly in complex environments with frequent indoor–outdoor transitions. However, existing LiDAR-based datasets often focus on outdoor scenarios and lack seamless domain shifts. In this paper, we propose RoboLoc, a benchmark dataset designed for GPS-free place recognition in indoor–outdoor environments with floor transitions. RoboLoc features real-world robot trajectories, diverse elevation profiles, and transitions between structured indoor and unstructured outdoor domains. We benchmark a variety of state-of-the-art models, including point-based, voxel-based, and BEV-based architectures, highlighting their generalizability under domain shifts. RoboLoc provides a realistic testbed for developing multi-domain localization systems in robotics and autonomous navigation.

Citations: 0
YOLO-EPDS: A Small Object Detection Algorithm for Power Transmission Line Nut Spacing Looseness
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-06 | DOI: 10.1049/ipr2.70279
Guilan Wang, Zenglei Hao, Wangbin Cao, Huawei Mei

To address the challenges of feature loss, inaccurate localization, and false or missed detections in small-object detection of loosened and spaced nuts in power transmission lines, this study proposes an enhanced detection model, You Only Look Once-EPDS (YOLO-EPDS), built upon an improved YOLOv9 framework. A RepNCSPELAN4_EMA module is integrated into the backbone network to incorporate a multi-scale attention mechanism, enhancing the extraction of subtle nut texture features via cross-space interactions and parallel multi-branch feature recalibration. SPD-Conv modules replace conventional downsampling layers in the backbone, effectively preserving spatial details in feature maps. Additionally, a RepNCSPELAN4_DCNv4 module employs dynamic deformable convolutions (DCNv4) to improve adaptability to geometrically deformed objects. The Shape-IoU loss function is utilized to optimize bounding box regression for small objects. Experimental results indicate that the proposed model achieves a mAP@50 of 79.7% on a self-constructed transmission line nut dataset, outperforming the baseline by 4.3%. These enhancements synergistically increase confidence scores while reducing false-positive and false-negative rates, demonstrating superior capability in extracting defective features of loosened nuts and substantially improving the reliability of transmission line inspection.
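
SPD-Conv replaces strided downsampling with a space-to-depth rearrangement: each 2 × 2 spatial block is folded into the channel dimension, so the resolution halves without discarding any pixel values, and a stride-1 convolution then mixes the stacked channels. A minimal PyTorch sketch with illustrative channel sizes:

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth downsampling: fold each 2x2 spatial block into channels
    (lossless), then mix with a stride-1 convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(4 * c_in, c_out, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # (B, C, H, W) -> (B, 4C, H/2, W/2): the four 2x2 sub-grids become channels.
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# y = SPDConv(64, 128)(torch.randn(1, 64, 640, 640))  # -> (1, 128, 320, 320)
```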

Citations: 0
Guided by Principles of Composition: A Domain-Specific Priors Based Detector for Recognizing Ritual Implements in Thangka
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-06 | DOI: 10.1049/ipr2.70271
Jiachen Li, Hongyun Wang, Xiaolong Peng, Jinyu Xu, Qing Xie, Yanchun Ma, Wenbo Jiang, Mengzi Tang

Detecting ritual implements in Thangka paintings—such as swords and scriptures—remains challenging due to their intricate visual composition and symbolic complexity. Existing object detection models, typically trained on natural scenes, tend to perform poorly in this domain. To address this limitation, we summarize the principles of composition in Thangka and identify key spatial and co-occurrence priors specific to ritual implements. Based on these insights, we propose GPCDet: a guided-by-principles-of-composition detector that integrates domain-specific priors into the detection process. Specifically, we introduce a spatial coordinate attention module to emphasize critical spatial regions where implements frequently appear. In addition, we design a graph convolutional network (GCN) auxiliary detection module to model inter-category co-occurrence, thereby enhancing feature representation and improving classification performance. Experiments on the newly curated ritual implements in Thangka (RITK) dataset show that GPCDet achieves substantial improvements over existing methods, establishing a new state-of-the-art baseline for this challenging task.
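
A common way to realise such a co-occurrence module is the ML-GCN pattern: class embeddings are propagated over a normalised co-occurrence adjacency to produce co-occurrence-aware classifiers. The sketch below follows that pattern as a guess at the auxiliary module, not GPCDet's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoOccurrenceGCN(nn.Module):
    """Propagate class embeddings over a co-occurrence graph (ML-GCN style)."""
    def __init__(self, cooc, emb_dim=300, feat_dim=256):
        super().__init__()
        a = cooc + torch.eye(cooc.size(0))                         # add self-loops
        d = a.sum(1).rsqrt()
        self.register_buffer('adj', d[:, None] * a * d[None, :])   # D^-1/2 A D^-1/2
        self.emb = nn.Parameter(torch.randn(cooc.size(0), emb_dim))
        self.w1 = nn.Linear(emb_dim, feat_dim)
        self.w2 = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats):                       # feats: (B, feat_dim)
        h = F.relu(self.adj @ self.w1(self.emb))    # one message-passing step
        classifiers = self.adj @ self.w2(h)         # (num_classes, feat_dim)
        return feats @ classifiers.t()              # co-occurrence-aware logits
```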

Citations: 0
M-PointNet: A Multi-Layer Embedded Deep Learning Model for 3D Intracranial Aneurysm Classification and Segmentation
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-05 | DOI: 10.1049/ipr2.70275
Jiaqi Wang, Juntong Liu, Zhengyuan Xu, Yunfeng Zhou, Mingquan Ye

Accurate classification and segmentation of intracranial aneurysms from 3D point cloud data are critical for computer-aided diagnosis and surgical planning. However, existing point-based deep learning methods suffer from limited feature representation and poor segmentation performance on medical data due to insufficient training samples and complex geometric variations. M-PointNet introduces a novel multi-layer embedded deep learning architecture that significantly enhances the classification and segmentation of intracranial aneurysms through three key innovations: (1) an enhanced PointNet++ with an expanded hierarchical structure for better geometric feature extraction; (2) a multi-layer embedding mechanism that integrates preprocessed and resampled point cloud data at multiple hierarchical levels to enrich feature representation; and (3) a deep supervision strategy with auxiliary output layers to accelerate convergence and improve performance. Experiments on the IntrA dataset demonstrate that M-PointNet achieves 91.96% accuracy and a 0.923 F1-score in classification, surpassing the baseline by 5.27% and 3.0%, respectively. For segmentation, it attains 83.85% IoU and 90.25% DSC for aneurysm regions and 95.81% IoU and 97.82% DSC for vessel regions. Additionally, its generalization capability is validated by a 92.8% accuracy on the ModelNet40 dataset. M-PointNet effectively addresses the challenges of medical point cloud analysis, achieving state-of-the-art performance in intracranial aneurysm classification and segmentation while maintaining robust cross-domain generalization.
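
The deep supervision strategy amounts to attaching auxiliary classification heads to intermediate layers and adding their losses to the main objective. A minimal sketch, where the 0.4 weight is illustrative rather than the paper's value:

```python
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits, target, aux_weight=0.4):
    """Total loss = main head + weighted auxiliary heads on intermediate layers.
    The auxiliary heads inject gradient signal deep in the network, speeding
    convergence; they are typically discarded at inference time."""
    loss = F.cross_entropy(main_logits, target)
    for logits in aux_logits:
        loss = loss + aux_weight * F.cross_entropy(logits, target)
    return loss
```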

Citations: 0
A Robust Reversible Watermarking Algorithm Resistant to Geometric Attacks Based on Tchebichef Moments
IF 2.2 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-05 | DOI: 10.1049/ipr2.70265
Wenjing Sun, Ling Zhang, Hongjun Zhang

This paper presents a robust reversible watermarking algorithm based on a two-stage embedding strategy. In the first stage, the host image is partitioned into non-overlapping blocks. Embedding locations are selected within the inscribed circle of the host image by leveraging the Just Noticeable Distortion (JND) threshold, and copyright watermarks are embedded into the lower-order Tchebichef Moments (TMs) at these positions. The watermark quantisation error is converted to an integer value using an enhanced Distortion-Compensated Quantised Index Modulation (DC-QIM) technique. In the second stage, compensation data is embedded into image blocks located outside the inscribed circle, thereby ensuring reversibility in the absence of attacks. Prior to watermark extraction, a resynchronisation method tailored to the specific type of attack is applied to realign the block positions, significantly improving robustness against geometric distortions. Compared with existing advanced methods of the same kind, the proposed algorithm effectively addresses key limitations of traditional methods, including the trade-off between robustness and reversibility, redundancy in compensation data, and insufficient resistance to geometric attacks. The amount of compensation information is reduced by over 90%. Under comparable experimental settings, the average peak signal-to-noise ratio (PSNR) is improved by 0.5–2.2 dB.

Extensive experiments demonstrate that, in terms of resistance to noise interference, the performance of the proposed algorithm is comparable to that of methods based on Zernike Moments (ZMs) and Pseudo-Zernike Moments (PZMs). The algorithm achieves a bit error rate (BER) of less than 1% under Joint Photographic Experts Group (JPEG) compression, salt-and-pepper noise with intensity ≤ 0.017, Gaussian noise with variance ≤ 0.011, and rotation and scaling attacks under ideal resynchronisation conditions. When subjected to random cropping attacks of 128 × 128 pixels, the average BER remains below 7%. It also demonstrates strong resilience against other attacks, including filtering and translation.
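
For orientation, a textbook scalar DC-QIM embeds one bit by quantising with a bit-dependent dither and adding back a share of the quantisation error. The paper's enhancement (converting the quantisation error to an integer value) is not reproduced here, and the step size and compensation factor below are illustrative.

```python
import numpy as np

def dcqim_embed(x, bit, delta=8.0, alpha=0.85):
    """Embed one bit into scalar x (e.g., a Tchebichef moment). The bit selects
    a dithered quantiser; a (1 - alpha) share of the quantisation error is
    added back as distortion compensation."""
    d = bit * delta / 2.0
    q = delta * np.round((x - d) / delta) + d
    return q + (1.0 - alpha) * (x - q)

def qim_extract(y, delta=8.0):
    """Decode by choosing the dither whose quantiser lattice lies closest to y."""
    errs = [abs(y - (delta * np.round((y - b * delta / 2) / delta) + b * delta / 2))
            for b in (0, 1)]
    return int(np.argmin(errs))

# qim_extract(dcqim_embed(37.3, 1)) -> 1, even after moderate perturbation of y.
```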

Citations: 0