
Latest Publications in IET Computer Vision

EDG-CDM: A New Encoder-Guided Conditional Diffusion Model-Based Image Synthesis Method for Limited Data
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-08 | DOI: 10.1049/cvi2.70018
Haopeng Lei, Hao Yin, Kaijun Liang, Mingwen Wang, Jinshan Zeng, Guoliang Luo

The Diffusion Probabilistic Model (DM) has emerged as a powerful generative model in the field of image synthesis, capable of producing high-quality and realistic images. However, training a DM requires a large and diverse dataset, which can be challenging to obtain. This limitation weakens the model's generalisation and robustness when training data is limited. To address this issue, the authors propose EDG-CDM, an encoder-guided conditional diffusion model for image synthesis with limited data. Firstly, the authors pre-train the encoder by introducing noise to capture the distribution of image features and generate the condition vector through contrastive learning and KL divergence. Next, the encoder undergoes further training with classification to integrate image class information, providing more favourable and versatile conditions for the diffusion model. Subsequently, the encoder is connected to the diffusion model, which is trained using all available data with encoder-provided conditions. Finally, the authors evaluate EDG-CDM on various public datasets with limited data, conducting extensive experiments and comparing the results with state-of-the-art methods using metrics such as Fréchet Inception Distance (FID) and Inception Score (IS). The experiments demonstrate that EDG-CDM outperforms existing models by consistently achieving the lowest FID and the highest IS, highlighting its effectiveness in generating high-quality and diverse images with limited training data. These results underscore the significance of EDG-CDM in advancing image synthesis techniques under data-constrained scenarios.
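
The abstract describes pre-training the encoder with a contrastive objective plus a KL divergence term to produce condition vectors. A minimal sketch of such a pre-training loss is given below; the InfoNCE formulation, the Gaussian parameterisation (mu, logvar) and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def encoder_pretrain_loss(mu, logvar, z_a, z_b, temperature=0.1, kl_weight=1e-3):
    """Contrastive (InfoNCE) + KL loss for condition-vector pre-training.

    mu, logvar : (B, D) parameters of the latent condition distribution.
    z_a, z_b   : (B, D) condition vectors from two noisy views of the same image.
    """
    # InfoNCE: matching views of the same image are positives, all others negatives.
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    contrastive = F.cross_entropy(logits, targets)

    # KL divergence to a standard normal keeps the condition space well behaved.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return contrastive + kl_weight * kl
```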

Citations: 0
Performance of Computer Vision Algorithms for Fine-Grained Classification Using Crowdsourced Insect Images
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-04 | DOI: 10.1049/cvi2.70006
Rita Pucci, Vincent J. Kalkman, Dan Stowell

With fine-grained classification, we identify unique characteristics to distinguish among classes of the same super-class. We focus on species recognition in Insecta, as insects are critical for biodiversity monitoring and sit at the base of many ecosystems. Through citizen science campaigns, billions of images are collected in the wild. Once these are labelled, experts can use them to create distribution maps. However, the labelling process is time consuming, which is where computer vision comes in. The field of computer vision offers a wide range of algorithms, each with its strengths and weaknesses; how do we identify the algorithm best suited to our application? To answer this question, we provide a full and detailed evaluation of nine algorithms among deep convolutional networks (CNNs), vision transformers (ViTs) and locality-based vision transformers (LBVTs) on four different aspects: classification performance, embedding quality, computational cost and gradient activity. We offer insights not previously available in this domain, showing to what extent these algorithms solve fine-grained tasks in Insecta. We found that ViTs perform the best on inference speed and computational cost, whereas LBVTs outperform the others on performance and embedding quality; the CNNs provide a trade-off among the metrics.
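
Two of the four evaluation aspects, computational cost and inference speed, can be measured with a generic profiling routine like the sketch below; the input size, number of runs and the stand-in model are assumptions rather than the paper's benchmark protocol.

```python
import time
import torch
import torch.nn as nn

@torch.no_grad()
def profile_model(model: nn.Module, input_size=(1, 3, 224, 224), runs=50):
    """Report parameter count and average inference time for one model."""
    model.eval()
    x = torch.randn(*input_size)
    model(x)                                  # warm-up pass
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    avg_ms = (time.perf_counter() - start) / runs * 1000
    params = sum(p.numel() for p in model.parameters())
    return params, avg_ms

# Example with a stand-in CNN; the paper's CNN/ViT/LBVT backbones would be plugged in here.
toy_cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
print(profile_model(toy_cnn))
```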

Citations: 0
Foundation Model Based Camouflaged Object Detection
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-01 | DOI: 10.1049/cvi2.70009
Zefeng Chen, Zhijiang Li, Yunqi Xue, Li Zhang

Camouflaged object detection (COD) aims to identify and segment objects that closely resemble their surrounding environments and are seamlessly integrated into them, making it a challenging task in computer vision. COD is constrained by the limited availability of training data and annotated samples, and most carefully designed COD models exhibit diminished performance under low-data conditions. In recent years, there has been increasing interest in leveraging foundation models, which have demonstrated robust general capabilities and superior generalisation performance, to address COD challenges. This work proposes a knowledge-guided domain adaptation (KGDA) approach to tackle the data scarcity problem in COD. The method utilises the knowledge descriptions generated by multimodal large language models (MLLMs) for camouflaged images, aiming to enhance the model's comprehension of semantic objects and camouflaged scenes through highly abstract and generalised knowledge representations. To resolve ambiguities and errors in the generated text descriptions, a multi-level knowledge aggregation (MLKG) module is devised. This module consolidates consistent semantic knowledge and forms multi-level semantic knowledge features. To incorporate semantic knowledge into the visual foundation model, the authors introduce a knowledge-guided semantic enhancement adaptor (KSEA) that integrates the semantic knowledge of camouflaged objects while preserving the original knowledge of the foundation model. Extensive experiments demonstrate that the method surpasses 19 state-of-the-art approaches and exhibits strong generalisation capabilities even with limited annotated data.
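
The knowledge-guided semantic enhancement adaptor is only described at a high level here; the sketch below shows one plausible way to inject an aggregated knowledge embedding into frozen visual tokens through a gated residual bottleneck. The dimensions, the additive fusion and the zero-initialised gate are assumptions, not the KSEA design from the paper.

```python
import torch
import torch.nn as nn

class KnowledgeAdapter(nn.Module):
    """Inject an aggregated knowledge embedding into visual tokens via a gated residual."""
    def __init__(self, vis_dim=768, txt_dim=512, hidden=256):
        super().__init__()
        self.proj = nn.Linear(txt_dim, vis_dim)
        self.bottleneck = nn.Sequential(nn.Linear(vis_dim, hidden), nn.GELU(),
                                        nn.Linear(hidden, vis_dim))
        self.gate = nn.Parameter(torch.zeros(1))   # start as identity to preserve the foundation model

    def forward(self, vis_tokens, knowledge_emb):
        # vis_tokens: (B, N, vis_dim) frozen backbone features; knowledge_emb: (B, txt_dim)
        k = self.proj(knowledge_emb).unsqueeze(1)          # (B, 1, vis_dim)
        fused = self.bottleneck(vis_tokens + k)            # condition every token on the knowledge
        return vis_tokens + self.gate * fused              # gated residual keeps original knowledge
```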

Citations: 0
Temporal Optimisation of Satellite Image-Based Crop Mapping: A Comparison of Deep Time Series and Semi-Supervised Time Warping Strategies
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-26 | DOI: 10.1049/cvi2.70014
Rosie Finnegan, Joseph Metcalfe, Sara Sharifzadeh, Fabio Caraffini, Xianghua Xie, Alberto Hornero, Nicholas W. Synes

This study presents a novel approach to crop mapping using remotely sensed satellite images. It addresses significant classification modelling challenges, including (1) the requirement for extensive labelled data and (2) the complex optimisation problem of selecting appropriate temporal windows in the absence of prior knowledge of cultivation calendars. We compare the lightweight Dynamic Time Warping (DTW) classification method with the heavily supervised Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) model using high-resolution multispectral optical satellite imagery (3 m/pixel). Our approach integrates effective practical preprocessing steps, including data augmentation and a data-driven optimisation strategy for the temporal window, even in the presence of numerous crop classes. Our findings demonstrate that DTW, despite its lower data demands, can match the performance of CNN-LSTM through our effective preprocessing steps while significantly improving runtime. These results demonstrate that both CNN-LSTM and DTW can achieve deployment-level accuracy and underscore the potential of DTW as a viable alternative to more resource-intensive models. The results also prove the effectiveness of temporal windowing for improving the runtime and accuracy of a crop classification study, even with no prior knowledge of planting timeframes.
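
Dynamic Time Warping itself is a standard algorithm, so a compact NumPy version is easy to sketch; the toy reference and query series below are placeholders, and the paper's semi-supervised pipeline and temporal-window optimisation are not reproduced here.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# A 1-nearest-neighbour DTW classifier then assigns each pixel's time series the label
# of the closest labelled reference series.
ref = np.sin(np.linspace(0, 3, 20))          # stand-in reference crop profile
query = np.sin(np.linspace(0.2, 3.2, 24))    # shifted/warped observation
print(dtw_distance(ref, query))
```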

Citations: 0
Crafting Transferable Adversarial Examples Against 3D Object Detection
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-26 | DOI: 10.1049/cvi2.70011
Haiyan Long, Hai Chen, Mengyao Xu, Chonghao Zhang, Fulan Qian

3D object detection, which perceives the surrounding environment through LiDAR and camera sensors to recognise the category and location of objects in a scene, is currently a popular research topic. Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples. Although some approaches have begun to investigate the robustness of 3D object detection models, they currently generate adversarial examples in a white-box setting, and research into generating transferable adversarial examples in a black-box setting is lacking. In this paper, a non-end-to-end attack algorithm is proposed for LiDAR pipelines that crafts transferable adversarial examples against 3D object detection. Specifically, the method generates adversarial examples by restraining features with a high contribution to downstream tasks and amplifying features with a low contribution to downstream tasks in the feature space. Extensive experiments validate that the method produces more transferable adversarial point clouds; for example, the method generates adversarial point clouds on the nuScenes dataset that are about 10% and 7% better than the state-of-the-art method on mAP and NDS, respectively.
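
The core idea of restraining high-contribution features while amplifying low-contribution ones can be illustrated with a simple gradient-based perturbation loop, sketched below; the per-channel contribution scores, the sign-gradient update and the step sizes are assumptions rather than the paper's algorithm.

```python
import torch

def perturb_points(points, feat_extractor, contribution, steps=10, alpha=0.01, top_frac=0.25):
    """Nudge a point cloud so high-contribution feature channels shrink and low ones grow.

    points        : (B, N, 3) clean point cloud.
    feat_extractor: callable returning (B, C) global features.
    contribution  : (C,) precomputed per-channel contribution scores.
    """
    k = int(top_frac * contribution.numel())
    high = contribution.topk(k).indices                  # channels to restrain
    low = (-contribution).topk(k).indices                # channels to amplify
    adv = points.clone().detach().requires_grad_(True)
    for _ in range(steps):
        feats = feat_extractor(adv)
        loss = feats[:, high].abs().mean() - feats[:, low].abs().mean()
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv - alpha * grad.sign()).detach().requires_grad_(True)
    return adv.detach()
```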

Citations: 0
Recent Advances of Continual Learning in Computer Vision: An Overview
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-19 | DOI: 10.1049/cvi2.70013
Haoxuan Qu, Hossein Rahmani, Li Xu, Bryan Williams, Jun Liu

In contrast to batch learning, where all training data is available at once, continual learning represents a family of methods that accumulate knowledge and learn continuously from data that arrives in sequential order. Like the human learning process, with its ability to learn, fuse and accumulate new knowledge acquired at different time steps, continual learning is considered to have high practical significance. Hence, continual learning has been studied in various artificial intelligence tasks. In this paper, we present a comprehensive review of the recent progress of continual learning in computer vision. In particular, the works are grouped by their representative techniques, including regularisation, knowledge distillation, memory, generative replay, parameter isolation and combinations of the above techniques. For each category of these techniques, both its characteristics and its applications in computer vision are presented. At the end of this overview, we discuss several subareas where continuous knowledge accumulation is potentially helpful but continual learning has not yet been well studied.
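
As a concrete instance of the regularisation family surveyed here, an EWC-style quadratic penalty can be written in a few lines; the sketch below simplifies the Fisher-information estimation and weighting and is not tied to any specific method in the review.

```python
import torch
import torch.nn as nn

def ewc_penalty(model: nn.Module, old_params: dict, fisher: dict, lam: float = 100.0):
    """Quadratic penalty anchoring parameters important to previous tasks.

    old_params : {name: tensor} snapshot of parameters after the previous task.
    fisher     : {name: tensor} diagonal Fisher information estimated on that task.
    """
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam / 2.0 * loss

# During training on the current task: total_loss = task_loss + ewc_penalty(model, old_params, fisher)
```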

Citations: 0
A Review of Multi-Object Tracking in Recent Times
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-09 | DOI: 10.1049/cvi2.70010
Suya Li, Hengyi Ren, Xin Xie, Ying Cao

Multi-object tracking (MOT) is a fundamental problem in computer vision that involves tracing the trajectories of foreground targets throughout a video sequence while establishing correspondences for identical objects across frames. With the advancement of deep learning techniques, methods based on deep learning have significantly improved accuracy and efficiency in MOT. This paper reviews several recent deep learning-based MOT methods and categorises them into three main groups: detection-based, single-object tracking (SOT)-based, and segmentation-based methods, according to their core technologies. Additionally, this paper discusses the metrics and datasets used for evaluating MOT performance, the challenges faced in the field, and future directions for research.
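
For the detection-based (tracking-by-detection) group, the central association step can be sketched as an IoU cost matrix solved with the Hungarian algorithm; the box format, threshold and cost definition below are generic illustrations rather than any particular tracker from this review.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Match existing track boxes to new detection boxes by maximising total IoU."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
```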

Citations: 0
TAPCNet: Tactile-Assisted Point Cloud Completion Network via Iterative Fusion Strategy
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-07 | DOI: 10.1049/cvi2.70012
Yangrong Liu, Jian Li, Huaiyu Wang, Ming Lu, Haorao Shen, Qin Wang

With the development of the 3D point cloud field in recent years, point cloud completion of 3D objects has increasingly attracted researchers' attention. Point cloud data can accurately express the shape information of 3D objects at different resolutions, but the original point clouds collected directly by various 3D scanning devices are often incomplete and have uneven density. Tactile sensing is a distinctive way to perceive the 3D shape of an object. Tactile point clouds can provide local shape information for unknown areas during completion, a valuable complement to the point cloud data acquired with visual devices. To effectively improve point cloud completion using tactile information, the authors propose an innovative tactile-assisted point cloud completion network, TAPCNet. This network is the first neural network customised for the input of tactile point clouds and incomplete point clouds, and it can fuse the two types of point cloud information in the feature domain. In addition, a new dataset named 3DVT was rebuilt to fit the proposed network model. Based on the tactile fusion strategy and related modules, multiple comparative experiments were conducted by controlling the quantity of tactile point clouds on the 3DVT dataset. The experimental results show that TAPCNet outperforms state-of-the-art methods on this benchmark.
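
The abstract does not spell out the iterative fusion strategy, so the sketch below only illustrates the general idea of fusing partial-cloud and tactile-cloud features in the feature domain with PointNet-style global pooling; all dimensions, the shared encoder and the coarse decoder are assumptions, not the TAPCNet architecture.

```python
import torch
import torch.nn as nn

class FusionCompletion(nn.Module):
    """Fuse global features of a partial cloud and a tactile cloud, decode a coarse completion."""
    def __init__(self, feat_dim=256, n_coarse=512):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        self.decode = nn.Sequential(nn.Linear(2 * feat_dim, 512), nn.ReLU(),
                                    nn.Linear(512, n_coarse * 3))
        self.n_coarse = n_coarse

    def forward(self, partial, tactile):
        # partial: (B, Np, 3), tactile: (B, Nt, 3); per-point MLP then max-pool to global features.
        f_p = self.encode(partial).max(dim=1).values      # (B, feat_dim)
        f_t = self.encode(tactile).max(dim=1).values      # (B, feat_dim)
        fused = torch.cat([f_p, f_t], dim=1)              # feature-domain fusion
        return self.decode(fused).view(-1, self.n_coarse, 3)
```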

Citations: 0
Generating Transferable Adversarial Point Clouds via Autoencoders for 3D Object Classification
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-05 | DOI: 10.1049/cvi2.70008
Mengyao Xu, Hai Chen, Chonghao Zhang, Yuanjun Zou, Chenchu Xu, Yanping Zhang, Fulan Qian

Recent studies have shown that deep neural networks are vulnerable to adversarial attacks. In the field of 3D point cloud classification, transfer-based black-box attack strategies have been explored to address the challenge of limited knowledge about the model in practical scenarios. However, existing approaches typically rely excessively on the network structure, resulting in poor transferability of the generated adversarial examples. To address this problem, the authors propose AEattack, an adversarial attack method capable of generating highly transferable adversarial examples. Specifically, AEattack employs an autoencoder (AE) to extract features from the point cloud data and reconstruct the adversarial point cloud based on these features. Notably, the AE does not require pre-training, and its parameters are jointly optimised using a loss function during the process of generating adversarial point clouds. This design makes the generated adversarial point clouds less dependent on the network structure and more attuned to the data distribution, and it endows AEattack with broader application potential. Extensive experiments on the ModelNet40 dataset show that AEattack is capable of generating highly transferable adversarial point clouds, with up to a 61.8% improvement in transferability compared to state-of-the-art adversarial attacks.
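
The joint optimisation of the autoencoder parameters during adversarial generation can be sketched as a single training step combining a reconstruction term with an untargeted misclassification term; the Chamfer-style distance, the weighting and the optimiser below are stand-ins, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def aeattack_step(autoencoder, victim, points, labels, opt, beta=1.0):
    """One optimisation step: reconstruct the cloud while pushing the victim off the true label.

    opt is an optimiser over autoencoder.parameters(); points: (B, N, 3); labels: (B,) true classes.
    """
    recon = autoencoder(points)                            # (B, N, 3) adversarial reconstruction
    # Symmetric nearest-neighbour (Chamfer-style) distance keeps the shape plausible.
    d = torch.cdist(recon, points)                         # (B, N, N) pairwise distances
    chamfer = d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
    logits = victim(recon)
    adv_loss = -F.cross_entropy(logits, labels)            # untargeted: raise loss on the true label
    loss = chamfer + beta * adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```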

Citations: 0
A New Large-Scale Dataset for Marine Vessel Re-Identification Based on Swin Transformer Network in Ocean Surveillance Scenario
IF 1.3 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-02 | DOI: 10.1049/cvi2.70007
Zhi Lu, Liguo Sun, Pin Lv, Jiuwu Hao, Bo Tang, Xuanzhen Chen

In recent years, marine vessels, an important object category in marine monitoring, have gradually become a research focal point in computer vision tasks such as detection, tracking, and classification. Among these, marine vessel re-identification (Re-ID) emerges as a significant frontier research topic, which not only faces the dual challenge of large intra-class and small inter-class differences but also suffers complex environmental interference in port monitoring scenarios. To propel advancements in marine vessel Re-ID technology, SwinTransReID, a framework grounded in the Swin Transformer for marine vessel Re-ID, is introduced. Specifically, the project initially encodes the triplet images separately as sequences of blocks and constructs a baseline model leveraging the Swin Transformer, achieving better performance on the Re-ID benchmark dataset than convolutional neural network (CNN)-based approaches. It also introduces side information embedding (SIE) to further enhance the robust feature-learning capabilities of the Swin Transformer, integrating non-visual cues (vessel orientation and type) and other auxiliary information (hull colour) through the insertion of learnable embedding modules. Additionally, the project presents VesselReID-1656, the first annotated large-scale benchmark dataset for vessel Re-ID in real-world ocean surveillance, comprising 135,866 images of 1656 vessels along with 5 orientations, 12 types, and 17 colours. The proposed method achieves 87.1% mAP and 96.1% Rank-1 accuracy on the newly labelled challenging dataset, surpassing the state-of-the-art (SOTA) method by 1.9% mAP. Moreover, extensive empirical results demonstrate the superiority of the proposed SwinTransReID on the person Market-1501 dataset, vehicle VeRi-776 dataset, and Boat Re-ID vessel dataset.
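
Side information embedding (SIE) attaches learnable embeddings for non-visual attributes to the visual tokens; the sketch below uses the dataset's 5 orientations, 12 types and 17 colours, but the embedding dimension, the additive combination and the scaling factor are assumptions rather than the SwinTransReID implementation.

```python
import torch
import torch.nn as nn

class SideInfoEmbedding(nn.Module):
    """Add learnable orientation/type/colour embeddings to transformer patch tokens."""
    def __init__(self, dim=768, n_orient=5, n_type=12, n_colour=17, scale=1.0):
        super().__init__()
        self.orient = nn.Embedding(n_orient, dim)
        self.vtype = nn.Embedding(n_type, dim)
        self.colour = nn.Embedding(n_colour, dim)
        self.scale = scale

    def forward(self, tokens, orient_id, type_id, colour_id):
        # tokens: (B, N, dim); ids: (B,) integer side-information labels per image.
        side = self.orient(orient_id) + self.vtype(type_id) + self.colour(colour_id)
        return tokens + self.scale * side.unsqueeze(1)     # broadcast over all patch tokens
```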

Citations: 0