
Latest publications in IET Computer Vision

A robust few-shot classifier with image as set of points
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-02 | DOI: 10.1049/cvi2.12340
Suhua Peng, Zongliang Zhang, Xingwang Huang, Zongyue Wang, Shubing Su, Guorong Cai

In recent years, many few-shot classification methods have been proposed. However, only a few of them have explored robust classification, which is an important aspect of human visual intelligence. Humans can effortlessly recognise visual patterns, including lines, circles, and even characters, from image data that has been corrupted or degraded. In this paper, the authors investigate a robust classification method that extends the classical paradigm of robust geometric model fitting. The method views an image as a set of points in a low-dimensional space and analyses each image through low-dimensional geometric model fitting. In contrast, the majority of other methods, such as deep learning methods, treat an image as a single point in a high-dimensional space. The authors evaluate the performance of the method using a noisy Omniglot dataset. The experimental results demonstrate that the proposed method is significantly more robust than other methods. The source code and data for this paper are available at https://github.com/pengsuhua/PMF_OMNIGLOT.
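The image-as-point-set idea can be made concrete with a minimal sketch, which is only an illustration of the paradigm and not the authors' released PMF pipeline: a character image is converted into the 2D coordinates of its foreground pixels, and a query is classified by a truncated (outlier-tolerant) set-to-set distance to the few support images; the threshold `tau` and the helper names are assumptions.

```python
import numpy as np

def image_to_points(img, thresh=0.5):
    """Treat an image as a set of 2D points: coordinates of foreground pixels."""
    ys, xs = np.nonzero(img > thresh)
    pts = np.stack([xs, ys], axis=1).astype(float)
    return pts / max(img.shape)              # normalise roughly into [0, 1]^2

def truncated_chamfer(a, b, tau=0.05):
    """Robust set-to-set distance: nearest-neighbour residuals truncated at tau,
    so corrupted pixels (outliers) have bounded influence."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
    ra = np.minimum(d.min(axis=1), tau)      # residual of each point of a w.r.t. b
    rb = np.minimum(d.min(axis=0), tau)
    return ra.mean() + rb.mean()

def classify(query_img, support_imgs, support_labels):
    """Few-shot nearest-neighbour classification in point-set space."""
    q = image_to_points(query_img)
    dists = [truncated_chamfer(q, image_to_points(s)) for s in support_imgs]
    return support_labels[int(np.argmin(dists))]
```

Because every residual is capped at `tau`, a handful of corrupted pixels cannot dominate the distance, which is the intuition behind the robustness claim.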

Citations: 0
SMGNFORMER: Fusion Mamba-graph transformer network for human pose estimation
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-31 | DOI: 10.1049/cvi2.12339
Yi Li, Zan Wang, Weiran Niu

In the field of 3D human pose estimation (HPE), many deep learning algorithms overlook the topological relationships between 2D keypoints, resulting in imprecise regression of 3D coordinates and a notable decline in estimation performance. To address this limitation, this paper proposes a novel approach to 3D HPE, termed the Spatial Mamba Graph Convolutional Neural Network (GCN) Former (SMGNFormer). The proposed method utilises the Mamba architecture to extract spatial information from 2D keypoints and integrates GCNs with multi-head attention mechanisms to build a relational graph of 2D keypoints across a global receptive field. The outputs are subsequently processed by a Time-Frequency Feature Fusion Transformer to estimate 3D human poses. SMGNFormer demonstrates superior estimation performance on the Human3.6M dataset and real-world video data compared to most Transformer-based algorithms. Moreover, the proposed method achieves a training speed comparable to PoseFormerv2, providing a clear advantage over other methods in its category.
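As a rough illustration of one ingredient named in the abstract — building a relational graph over 2D keypoints with graph convolutions — the following sketch applies a single normalised GCN layer to per-joint features; the adjacency matrix, feature sizes and random weights are placeholders, and this is not the SMGNFormer architecture.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step over keypoint features.
    X: (J, C) per-joint features, A: (J, J) skeleton adjacency, W: (C, C_out)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalisation
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU activation

# toy example: 17 COCO-style joints with their 2D coordinates as input features
J, C, C_out = 17, 2, 64
X = np.random.rand(J, C)                    # 2D keypoints from an off-the-shelf detector
A = np.zeros((J, J)); A[0, 1] = A[1, 0] = 1 # fill with the real skeleton edges
W = np.random.randn(C, C_out) * 0.01
H = gcn_layer(X, A, W)                      # (17, 64) relational joint features
```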

Citations: 0
LLFormer4D: LiDAR-based lane detection method by temporal feature fusion and sparse transformer
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-30 | DOI: 10.1049/cvi2.12338
Jun Hu, Chaolu Feng, Haoxiang Jie, Zuotao Ning, Xinyi Zuo, Wei Liu, Xiangyu Wei

Lane detection is a fundamental problem in autonomous driving, providing vehicles with essential road information. Despite the attention from scholars and engineers, LiDAR-based lane detection faces challenges such as unsatisfactory detection accuracy and significant computation overhead. In this paper, the authors propose LLFormer4D to overcome these technical challenges by leveraging the strengths of both Convolutional Neural Network and Transformer architectures. Specifically, a Temporal Feature Fusion module is introduced to enhance accuracy and robustness by integrating features from multi-frame point clouds. In addition, a sparse Transformer decoder based on Lane Key-point Query is designed, which introduces key-point supervision for each lane line to streamline post-processing. The authors conduct experiments on the K-Lane and nuScenes map datasets. The results demonstrate the effectiveness of the method, which achieves second place on the K-Lane dataset with an F1 score of 82.39 at a processing speed of 16.03 frames per second. Furthermore, the algorithm attains the best mAP of 70.66 for lane detection on the nuScenes map dataset.
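A much-reduced picture of the temporal fusion step is to rasterise each motion-compensated LiDAR sweep into a bird's-eye-view occupancy grid and stack the sweeps along the channel axis before any lane decoding; the grid extents and resolution below are assumptions, and the sketch stands in for, rather than reproduces, the Temporal Feature Fusion module.

```python
import numpy as np

def bev_occupancy(points, x_range=(0, 80), y_range=(-20, 20), res=0.2):
    """Rasterise one LiDAR sweep (N, 3) into a BEV occupancy grid."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    grid, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=[nx, ny], range=[x_range, y_range])
    return (grid > 0).astype(np.float32)

def fuse_frames(frames):
    """Stack per-frame BEV grids along the channel axis as a crude temporal fusion."""
    return np.stack([bev_occupancy(p) for p in frames], axis=0)  # (T, nx, ny)

# e.g. three consecutive sweeps already motion-compensated into the current frame
bev = fuse_frames([np.random.rand(1000, 3) * 40 for _ in range(3)])
```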

Citations: 0
HMSFU: A hierarchical multi-scale fusion unit for video prediction and beyond
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-29 | DOI: 10.1049/cvi2.12312
Hongchang Zhu, Faming Fang

Video prediction is the process of learning necessary information from historical frames to predict future video frames. Learning features from historical frames is a crucial step in this process. However, most current methods adopt a largely single-scale learning approach; even when they learn features at different scales, they cannot fully integrate and utilise them, resulting in unsatisfactory prediction results. To address this issue, a hierarchical multi-scale fusion unit (HMSFU) is proposed. Using a hierarchical multi-scale architecture, each layer predicts future frames at a different granularity using a different convolutional scale. The abstract features from different layers can be fused, enabling the model not only to capture rich contextual information but also to expand its receptive field, enhance its expressive power, and improve its applicability to complex prediction scenarios. To fully utilise the expanded receptive field, HMSFU incorporates three fusion modules. The first is the single-layer historical attention fusion module, which uses an attention mechanism to fuse features from historical frames into the current frame at each layer. The second is the single-layer spatiotemporal fusion module, which fuses complementary temporal and spatial features at each layer. The third is the multi-layer spatiotemporal fusion module, which fuses spatiotemporal features from different layers. Additionally, the authors not only penalise the frame-level error with a mean squared error loss but also introduce Kullback–Leibler (KL) divergence to account for inter-frame variations. Experimental results demonstrate that the proposed HMSFU model achieves the best performance on popular video prediction datasets, showcasing its competitiveness in the field.
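The loss described at the end of the abstract combines a frame-level mean squared error with a KL-divergence term on inter-frame variation. The sketch below shows one plausible way to form such a combination; the histogram-of-differences construction and the weight `lam` are assumptions, since the abstract does not specify how HMSFU defines the two distributions.

```python
import numpy as np

def frame_change_distribution(prev_frame, next_frame, bins=32, eps=1e-8):
    """Summarise inter-frame variation as a normalised histogram of absolute
    pixel changes (frames assumed scaled to [0, 1])."""
    diff = np.abs(next_frame - prev_frame).ravel()
    hist, _ = np.histogram(diff, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) + eps
    return p / p.sum()

def prediction_loss(pred, target, prev_frame, lam=0.1):
    """Frame-level MSE plus a KL term comparing predicted vs. true inter-frame change."""
    mse = np.mean((pred - target) ** 2)
    p = frame_change_distribution(prev_frame, target)   # true change distribution
    q = frame_change_distribution(prev_frame, pred)     # predicted change distribution
    kl = np.sum(p * np.log(p / q))
    return mse + lam * kl
```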

Citations: 0
Semantic segmentation of urban airborne LiDAR data of varying landcover diversity using XGBoost
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-27 | DOI: 10.1049/cvi2.12334
Jayati Vijaywargiya, Anandakumar M. Ramiya

Semantic segmentation of aerial LiDAR datasets is a crucial step for the accurate identification of urban objects in various applications pertaining to sustainable urban development. However, this task becomes more complex in urban areas characterised by the coexistence of modern development and natural vegetation. The unstructured nature of point cloud data, along with data sparsity, irregular point distribution, and the varying sizes of urban objects, presents challenges for point cloud classification. To address these challenges, the development of a robust algorithmic approach encompassing efficient feature sets and a classification model is essential. This study incorporates point-wise features to capture the local spatial context of points in the datasets. Furthermore, an ensemble machine learning model based on extreme gradient boosting, which sequentially trains weak learners, is utilised to enhance the model's resilience. To thoroughly investigate the efficacy of the proposed approach, this study uses three distinct datasets from diverse geographical locations, each presenting unique challenges related to class distribution, 3D terrain intricacies, and geographical variation. The Land-cover Diversity Index is introduced to quantify the complexity of landcover in 3D by measuring the degree of class heterogeneity and the frequency of class variation in the dataset. The proposed approach achieved an accuracy of 90% on the regionally complex, higher landcover diversity dataset, the Trivandrum Aerial LiDAR Dataset. Furthermore, the results demonstrate improved overall predictive accuracies of 91% and 87% on data segments from two benchmark datasets, DALES and Vaihingen 3D.
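Training an extreme gradient boosting classifier on per-point features follows the standard XGBoost scikit-learn interface. The sketch below assumes the point-wise features (for example height above ground, planarity, intensity) have already been computed and saved; the file names and hyper-parameters are placeholders rather than the settings used in the study.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

# X: (N_points, n_features) point-wise features, y: (N_points,) class ids
# e.g. columns = [height_above_ground, planarity, verticality, intensity, ...]
X_train, y_train = np.load("train_feats.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_feats.npy"), np.load("test_labels.npy")

clf = XGBClassifier(
    n_estimators=500,        # boosting rounds (sequentially trained weak learners)
    max_depth=8,
    learning_rate=0.1,
    subsample=0.8,
    tree_method="hist",      # fast histogram-based split finding
    objective="multi:softmax",
)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("overall accuracy:", accuracy_score(y_test, pred))
```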

Citations: 0
AMEF-Net: Towards an attention and multi-level enhancement fusion for medical image classification in Parkinson's aided diagnosis
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-25 | DOI: 10.1049/cvi2.12324
Qingyan Ding, Yu Pan, Jianxin Liu, Lianxin Li, Nan Liu, Na Li, Wan Zheng, Xuecheng Dong

Parkinson's disease (PD) is a neurodegenerative disorder primarily affecting middle-aged and elderly populations. Its insidious onset, high disability rate, long diagnostic cycle, and high diagnostic costs impose a heavy burden on patients and their families. Leveraging artificial intelligence, with its rapid diagnostic speed, high accuracy, and fatigue resistance, to achieve intelligent assisted diagnosis of PD holds significant promise for alleviating patients' financial stress, reducing diagnostic cycles, and helping patients seize the golden period for early treatment. This paper proposes an Attention and Multi-level Enhancement Fusion Network (AMEF-Net) based on the characteristics of three-dimensional medical imaging and the specific manifestations of PD in medical images. The focus is on small lesion areas and structural lesion areas that are often overlooked in traditional deep learning models, achieving multi-level attention and processing of imaging information. The model achieved a diagnostic accuracy of 98.867%, a precision of 99.830%, a sensitivity of 99.182%, and a specificity of 99.384% on Magnetic Resonance Images from the Parkinson's Progression Markers Initiative dataset. On Diffusion Tensor Images, it achieved a diagnostic accuracy of 99.602%, a precision of 99.930%, a sensitivity of 99.463%, and a specificity of 99.877%. The relevant code has been placed in https://github.com/EdwardTj/AMEF-NET.
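The four figures reported above follow directly from the binary confusion matrix of a PD-versus-control classifier; the short sketch below shows the standard definitions and is independent of the AMEF-Net architecture itself.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity and specificity for a binary classifier
    (1 = Parkinson's disease, 0 = control)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # recall / true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

print(diagnostic_metrics(np.array([1, 0, 1, 1, 0]), np.array([1, 0, 1, 0, 0])))
```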

Citations: 0
Unlocking the power of multi-modal fusion in 3D object tracking
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-25 | DOI: 10.1049/cvi2.12335
Yue Hu

3D Single Object Tracking (SOT) plays a vital role in autonomous driving and robotics, yet traditional approaches have predominantly focused on pure LiDAR-based point cloud data, often neglecting the benefits of integrating image modalities. To address this gap, we propose a novel Multi-modal Image-LiDAR Tracker (MILT) designed to overcome the limitations of single-modality methods by effectively combining RGB and point cloud data. Our key contribution is a dual-branch architecture that separately extracts geometric features from LiDAR and texture features from images. These features are then fused in a BEV perspective to achieve a comprehensive representation of the tracked object. A significant innovation in our approach is the Image-to-LiDAR Adapter module, which transfers the rich feature representation capabilities of the image modality to the 3D tracking task, and the BEV-Fusion module, which facilitates the interactive fusion of geometry and texture features. By validating MILT on public datasets, we demonstrate substantial performance improvements over traditional methods, effectively showcasing the advantages of our multi-modal fusion strategy. This work advances the state of the art in SOT by integrating complementary information from RGB and LiDAR modalities, resulting in enhanced tracking accuracy and robustness.
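In its simplest form, BEV-level fusion of geometry and texture amounts to concatenating the two feature maps once they live on the same grid and mixing them with a convolution. The PyTorch sketch below shows only that baseline form, assuming the image features have already been lifted into BEV; it does not reproduce the Image-to-LiDAR Adapter or the attention used in MILT, and the channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    """Fuse LiDAR and image features defined on the same BEV grid."""
    def __init__(self, c_lidar=128, c_img=128, c_out=256):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(c_lidar + c_img, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, bev_lidar, bev_img):
        # bev_lidar: (B, c_lidar, H, W), bev_img: (B, c_img, H, W), both already in BEV
        return self.mix(torch.cat([bev_lidar, bev_img], dim=1))

fused = SimpleBEVFusion()(torch.randn(1, 128, 200, 200), torch.randn(1, 128, 200, 200))
```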

Citations: 0
Category-instance distillation based on visual-language models for rehearsal-free class incremental learning
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-23 | DOI: 10.1049/cvi2.12327
Weilong Jin, Zilei Wang, Yixin Zhang

Recently, visual-language models (VLMs) have displayed potent capabilities in the field of computer vision. Their emerging role as the backbone of visual tasks necessitates studying class incremental learning (CIL) within the VLM architecture. However, the pre-training data for many VLMs is proprietary, and during the incremental phase, old task data may also raise privacy issues. Moreover, replay-based methods can introduce new problems such as class imbalance, the selection of data for replay, and a trade-off between replay cost and performance. Therefore, the authors choose the more challenging rehearsal-free setting. In this paper, the authors study class-incremental tasks based on large pre-trained vision-language models such as CLIP. Initially, at the category level, the authors combine traditional optimisation and distillation techniques, utilising both the pre-trained model and models trained in previous incremental stages to jointly guide the training of the new model. This paradigm effectively balances the stability and plasticity of the new model, mitigating catastrophic forgetting. Moreover, utilising the VLM infrastructure, the authors redefine the relationship between instances. This allows fine-grained instance relational information to be gleaned from the prior knowledge provided during pre-training. The authors supplement this approach with an entropy-balancing method that allows the model to adaptively distribute optimisation weights across training samples. The authors' experimental results validate that their method, within the framework of VLMs, outperforms traditional CIL methods.
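One generic reading of the category-level recipe is a cross-entropy term on the current task plus KL-divergence distillation terms that keep the new model close to both the previous-stage model and the frozen pre-trained model. The PyTorch sketch below expresses that combination; the temperature and weighting coefficients are assumptions, and this is not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def incremental_distillation_loss(logits_new, logits_prev, logits_pretrained,
                                  labels, T=2.0, alpha=0.5, beta=0.5):
    """Cross-entropy on current-task labels plus KL distillation towards the
    previous-stage model and the frozen pre-trained (e.g. CLIP) model."""
    ce = F.cross_entropy(logits_new, labels)

    def kd(student_logits, teacher_logits):
        # KL(teacher || student) on temperature-softened distributions
        return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                        F.softmax(teacher_logits / T, dim=-1),
                        reduction="batchmean") * (T * T)

    return ce + alpha * kd(logits_new, logits_prev) + beta * kd(logits_new, logits_pretrained)
```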

Citations: 0
Outliers rejection for robust camera pose estimation using graduated non-convexity
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-23 | DOI: 10.1049/cvi2.12330
Hao Yi, Bo Liu, Bin Zhao, Enhai Liu

Camera pose estimation plays a crucial role in computer vision and is widely used in augmented reality, robotics and autonomous driving. However, previous studies have often neglected the presence of outliers in the measurements, so that even a small percentage of outliers significantly degrades precision. To deal with outliers, this paper proposes a graduated non-convexity (GNC) method to suppress outliers in robust camera pose estimation, which serves as the core of GNCPnP. The authors first reformulate the camera pose estimation problem using a non-convex cost, which is less affected by outliers. Then, to apply a non-minimal solver to the reformulated problem, the authors transform it using the Black-Rangarajan duality theory. Finally, to address the dependence of non-convex optimisation on initial values, the GNC method is customised according to the truncated least squares cost. The results of simulations and real experiments show that GNCPnP can effectively handle the interference of outliers and achieves higher accuracy than existing state-of-the-art algorithms. In particular, the camera pose estimation accuracy of GNCPnP with a low percentage of outliers is almost comparable to that of the state-of-the-art algorithm with no outliers.
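The graduated non-convexity idea can be demonstrated on any weighted least-squares problem: residuals are re-weighted with a truncated-least-squares rule while a control parameter mu is annealed from a nearly convex surrogate towards the true TLS cost. The sketch below follows the GNC-TLS weight schedule commonly given in the robust-estimation literature and fits a 2D line instead of a camera pose, so it is a stand-in for the inner solver of GNCPnP rather than the authors' implementation; the inlier threshold `c` and annealing factor are assumptions.

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Weighted least-squares fit of y = a*x + b (stand-in for the inner pose solver)."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

def gnc_tls_fit(x, y, c=0.1, mu_factor=1.4, iters=50):
    w = np.ones_like(x)
    a, b = weighted_line_fit(x, y, w)
    r2 = (y - (a * x + b)) ** 2
    mu = max(c**2 / (2 * r2.max() - c**2 + 1e-12), 1e-3)  # start near a convex surrogate
    for _ in range(iters):
        a, b = weighted_line_fit(x, y, w)
        r2 = (y - (a * x + b)) ** 2
        upper, lower = (mu + 1) / mu * c**2, mu / (mu + 1) * c**2
        w = np.where(r2 >= upper, 0.0,
            np.where(r2 <= lower, 1.0,
                     c * np.sqrt(mu * (mu + 1) / (r2 + 1e-12)) - mu))
        mu *= mu_factor                                    # anneal towards the true TLS cost
    return (a, b), w                                       # w ~ 0 flags rejected outliers

x = np.linspace(0, 1, 100)
y = 2 * x + 1 + 0.01 * np.random.randn(100)
y[:10] += 5.0                                              # 10% gross outliers
params, weights = gnc_tls_fit(x, y)
```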

Citations: 0
Weakly supervised bounding-box generation for camera-trap image based animal detection
IF 1.3 | CAS Zone 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-20 | DOI: 10.1049/cvi2.12332
Puxuan Xie, Renwu Gao, Weizeng Lu, Linlin Shen

In ecology, deep learning is improving the performance of camera-trap image based wild animal analysis. However, the high labelling cost becomes a big challenge, as it requires a huge amount of human annotation. For example, the Snapshot Serengeti (SS) dataset contains over 900,000 images, of which only 322,653 contain valid animals; 68,000 volunteers were recruited to provide image-level labels such as species, the number of animals, and five behaviour attributes such as standing, resting and moving. In contrast, the Gold Standard SS Bounding-Box Coordinates (GSBBC for short) contains only 4011 images for training object detection algorithms, because annotating bounding-boxes for animals in images is much more costly. Such a number of training images is obviously insufficient. To address this, the authors propose a method to generate bounding-boxes for a larger dataset using limited manually labelled images. To achieve this, the authors first train a wild animal detector using a small, manually labelled dataset (e.g. GSBBC) to locate animals in images; then apply this detector to a bigger dataset (e.g. SS) for bounding-box generation; finally, false detections are removed according to the existing image-level label information. Experiments show that a detector trained with images whose bounding-boxes were generated by this proposal outperformed existing camera-trap image based animal detection in terms of mean average precision (mAP). Compared with the traditional data augmentation method, the proposed method improved mAP by 21.3% and 44.9% for rare species, also alleviating the long-tail issue in data distribution. In addition, detectors trained with the proposed method achieve promising results when applied to classification and counting tasks, which are commonly required in wildlife research.
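The filtering step — keeping a machine-generated box only when it agrees with the existing image-level annotation — is simple to express. The sketch below assumes a detector trained on the small manually boxed subset is available as a callable returning (label, score, box) triples; the function signature and the confidence threshold are hypothetical placeholders, not the authors' released code.

```python
def generate_boxes(images, image_level_labels, detector, score_thresh=0.5):
    """Pseudo-label a large camera-trap dataset with a detector trained on a
    small manually boxed subset, then drop detections that contradict the
    existing image-level species labels."""
    pseudo_annotations = {}
    for image_id, image in images.items():
        allowed_species = image_level_labels[image_id]      # e.g. {"zebra"}
        kept = []
        for label, score, box in detector(image):           # hypothetical detector API
            if score >= score_thresh and label in allowed_species:
                kept.append({"label": label, "score": score, "box": box})
        if kept:                                             # skip images with no valid animal
            pseudo_annotations[image_id] = kept
    return pseudo_annotations
```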

Citations: 0