
Latest publications in IET Computer Vision

To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-24 | DOI: 10.1049/cvi2.12318
Tomer Gadot, Ștefan Istrate, Hyungwon Kim, Dan Morris, Sara Beery, Tanya Birch, Jorge Ahumada

Camera traps facilitate non-invasive wildlife monitoring, but their widespread adoption has created a data processing bottleneck: a camera trap survey can create millions of images, and the labour required to review those images strains the resources of conservation organisations. AI is a promising approach for accelerating image review, but AI tools for camera trap data are imperfect; in particular, classifying small animals remains difficult, and accuracy falls off outside the ecosystems in which a model was trained. It has been proposed that incorporating an object detector into an image analysis pipeline may help address these challenges, but the benefit of object detection has not been systematically evaluated in the literature. In this work, the authors assess the hypothesis that classifying animals cropped from camera trap images using a species-agnostic detector yields better accuracy than classifying whole images. We find that incorporating an object detection stage into an image classification pipeline yields a macro-average F1 improvement of around 25% on a large, long-tailed dataset; this improvement is reproducible on a large public dataset and a smaller public benchmark dataset. The authors describe a classification architecture that performs well for both whole and detector-cropped images, and demonstrate that this architecture yields state-of-the-art benchmark accuracy.
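
As a minimal illustration of the detect-then-classify pipeline the abstract describes, the sketch below runs a species-agnostic detector, crops each detection, and classifies the crops, falling back to whole-image classification when nothing is detected. The `detector` and `classifier` callables, their return formats, and the confidence threshold are assumptions for illustration, not the authors' implementation.

```python
from PIL import Image

def classify_with_detector(image_path, detector, classifier, conf_thresh=0.2):
    """Detect animals with a species-agnostic detector, then classify each crop.

    `detector(image)` is assumed to return a list of (box, confidence) pairs,
    with boxes as (x_min, y_min, x_max, y_max) in pixel coordinates;
    `classifier(image_or_crop)` is assumed to return a (label, score) pair.
    Both interfaces are illustrative assumptions, not the paper's API.
    """
    image = Image.open(image_path).convert("RGB")
    predictions = []
    for box, det_conf in detector(image):
        if det_conf < conf_thresh:
            continue                        # drop low-confidence detections
        crop = image.crop(box)              # classify the cropped animal only
        label, cls_score = classifier(crop)
        predictions.append((box, label, det_conf * cls_score))
    # Fall back to whole-image classification when nothing is detected.
    if not predictions:
        label, cls_score = classifier(image)
        predictions.append((None, label, cls_score))
    return predictions
```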

{"title":"To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images","authors":"Tomer Gadot,&nbsp;Ștefan Istrate,&nbsp;Hyungwon Kim,&nbsp;Dan Morris,&nbsp;Sara Beery,&nbsp;Tanya Birch,&nbsp;Jorge Ahumada","doi":"10.1049/cvi2.12318","DOIUrl":"10.1049/cvi2.12318","url":null,"abstract":"<p>Camera traps facilitate non-invasive wildlife monitoring, but their widespread adoption has created a data processing bottleneck: a camera trap survey can create millions of images, and the labour required to review those images strains the resources of conservation organisations. AI is a promising approach for accelerating image review, but AI tools for camera trap data are imperfect; in particular, classifying small animals remains difficult, and accuracy falls off outside the ecosystems in which a model was trained. It has been proposed that incorporating an object detector into an image analysis pipeline may help address these challenges, but the benefit of object detection has not been systematically evaluated in the literature. In this work, the authors assess the hypothesis that classifying animals cropped from camera trap images using a species-agnostic detector yields better accuracy than classifying whole images. We find that incorporating an object detection stage into an image classification pipeline yields a macro-average F1 improvement of around 25% on a large, long-tailed dataset; this improvement is reproducible on a large public dataset and a smaller public benchmark dataset. The authors describe a classification architecture that performs well for both whole and detector-cropped images, and demonstrate that this architecture yields state-of-the-art benchmark accuracy.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1193-1208"},"PeriodicalIF":1.3,"publicationDate":"2024-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12318","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143253325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comprehensive research on light field imaging: Theory and application
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-22 | DOI: 10.1049/cvi2.12321
Fei Liu, Yunlong Wang, Qing Yang, Shubo Zhou, Kunbo Zhang

Computational photography combines novel optical designs and processing methods to capture high-dimensional visual information. As an emerging and promising technique, light field (LF) imaging measures the lighting, reflectance, focus, geometry and viewpoint in free space, and has been widely explored over the past decades for depth estimation, view synthesis, refocusing, rendering, 3D displays, microscopy and other computer vision applications. In this paper, the authors present a comprehensive survey of LF imaging theory, technology and applications. First, the LF imaging process based on a MicroLens Array (MLA) structure, termed MLA-LF, is derived. Subsequently, the innovations in LF imaging technology are presented in terms of imaging prototypes, consumer LF cameras and LF displays in Virtual Reality (VR) and Augmented Reality (AR). Finally, the applications and challenges of LF imaging combined with deep learning models in recent years are analysed, including depth estimation, saliency detection, semantic segmentation, de-occlusion and defocus deblurring. This paper is intended as a reference for future research on LF imaging technology in the Artificial Intelligence era.
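
As a concrete illustration of the two-plane light-field model behind MLA-based capture, the sketch below performs standard shift-and-add refocusing over decoded sub-aperture views. The (U, V, H, W) layout and the refocusing rule follow the generic textbook formulation, assumed here for illustration; the paper's MLA-LF derivation is not reproduced.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a decoded light field.

    lightfield: array of shape (U, V, H, W) of grayscale sub-aperture views,
                indexed by angular coordinates (u, v).
    alpha:      controls the synthetic focal plane (alpha = 1 keeps the
                original focus).
    Generic two-plane formulation only; not the paper's MLA-LF derivation.
    """
    U, V, H, W = lightfield.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0        # centre the angular coordinates
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Each view is translated proportionally to its angular offset.
            dy = (u - u0) * (1.0 - 1.0 / alpha)
            dx = (v - v0) * (1.0 - 1.0 / alpha)
            acc += nd_shift(lightfield[u, v], (dy, dx), order=1, mode="nearest")
    return acc / (U * V)
```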

{"title":"A comprehensive research on light field imaging: Theory and application","authors":"Fei Liu,&nbsp;Yunlong Wang,&nbsp;Qing Yang,&nbsp;Shubo Zhou,&nbsp;Kunbo Zhang","doi":"10.1049/cvi2.12321","DOIUrl":"10.1049/cvi2.12321","url":null,"abstract":"<p>Computational photography is a combination of novel optical designs and processing methods to capture high-dimensional visual information. As an emerged promising technique, light field (LF) imaging measures the lighting, reflectance, focus, geometry and viewpoint in the free space, which has been widely explored for depth estimation, view synthesis, refocus, rendering, 3D displays, microscopy and other applications in computer vision in the past decades. In this paper, the authors present a comprehensive research survey on the LF imaging theory, technology and application. Firstly, the LF imaging process based on a MicroLens Array structure is derived, that is MLA-LF. Subsequently, the innovations of LF imaging technology are presented in terms of the imaging prototype, consumer LF camera and LF displays in Virtual Reality (VR) and Augmented Reality (AR). Finally the applications and challenges of LF imaging integrating with deep learning models are analysed, which consist of depth estimation, saliency detection, semantic segmentation, de-occlusion and defocus deblurring in recent years. It is believed that this paper will be a good reference for the future research on LF imaging technology in Artificial Intelligence era.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1269-1284"},"PeriodicalIF":1.3,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12321","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143253173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DEUFormer: High-precision semantic segmentation for urban remote sensing images
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-12 | DOI: 10.1049/cvi2.12313
Xinqi Jia, Xiaoyong Song, Lei Rao, Guangyu Fan, Songlin Cheng, Niansheng Chen

Urban remote sensing image semantic segmentation has a wide range of applications, such as urban planning, resource exploration, intelligent transportation, and other scenarios. Although UNetFormer performs well by introducing the self-attention mechanism of Transformer, it still faces challenges arising from relatively low segmentation accuracy and significant edge segmentation errors. To this end, this paper proposes DEUFormer by employing a special weighted sum method to fuse the features of the encoder and the decoder, thus capturing both local details and global context information. Moreover, an Enhanced Feature Refinement Head is designed to finely re-weight features on the channel dimension and narrow the semantic gap between shallow and deep features, thereby enhancing multi-scale feature extraction. Additionally, an Edge-Guided Context Module is introduced to enhance edge areas through effective edge detection, which can improve edge information extraction. Experimental results show that DEUFormer achieves an average Mean Intersection over Union (mIoU) of 53.8% on the LoveDA dataset and 69.1% on the UAVid dataset. Notably, the mIoU of buildings in the LoveDA dataset is 5.0% higher than that of UNetFormer. The proposed model outperforms methods such as UNetFormer on multiple datasets, which demonstrates its effectiveness.
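
The Enhanced Feature Refinement Head is described as re-weighting features along the channel dimension; a common way to realise such re-weighting is squeeze-and-excitation-style gating, sketched below together with a simple weighted sum of encoder and decoder features. Both the gating design and the scalar fusion weight are assumptions for illustration, not the DEUFormer modules themselves.

```python
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    """Squeeze-and-excitation-style channel re-weighting.

    Illustrates the general idea of re-weighting features on the channel
    dimension; it is not the actual Enhanced Feature Refinement Head.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                         # squeeze: global context per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                    # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                              # re-weight each channel


# Example: fuse encoder and decoder features with a weighted sum, then refine.
enc = torch.randn(1, 64, 128, 128)
dec = torch.randn(1, 64, 128, 128)
w = torch.tensor(0.5)                # stands in for a learnable fusion weight (assumption)
fused = w * enc + (1 - w) * dec
refined = ChannelReweight(64)(fused)
```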

{"title":"DEUFormer: High-precision semantic segmentation for urban remote sensing images","authors":"Xinqi Jia,&nbsp;Xiaoyong Song,&nbsp;Lei Rao,&nbsp;Guangyu Fan,&nbsp;Songlin Cheng,&nbsp;Niansheng Chen","doi":"10.1049/cvi2.12313","DOIUrl":"10.1049/cvi2.12313","url":null,"abstract":"<p>Urban remote sensing image semantic segmentation has a wide range of applications, such as urban planning, resource exploration, intelligent transportation, and other scenarios. Although UNetFormer performs well by introducing the self-attention mechanism of Transformer, it still faces challenges arising from relatively low segmentation accuracy and significant edge segmentation errors. To this end, this paper proposes DEUFormer by employing a special weighted sum method to fuse the features of the encoder and the decoder, thus capturing both local details and global context information. Moreover, an Enhanced Feature Refinement Head is designed to finely re-weight features on the channel dimension and narrow the semantic gap between shallow and deep features, thereby enhancing multi-scale feature extraction. Additionally, an Edge-Guided Context Module is introduced to enhance edge areas through effective edge detection, which can improve edge information extraction. Experimental results show that DEUFormer achieves an average Mean Intersection over Union (mIoU) of 53.8% on the LoveDA dataset and 69.1% on the UAVid dataset. Notably, the mIoU of buildings in the LoveDA dataset is 5.0% higher than that of UNetFormer. The proposed model outperforms methods such as UNetFormer on multiple datasets, which demonstrates its effectiveness.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1209-1222"},"PeriodicalIF":1.3,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12313","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143252571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient transformer tracking with adaptive attention
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-07 | DOI: 10.1049/cvi2.12315
Dingkun Xiao, Zhenzhong Wei, Guangjun Zhang

Recently, several trackers utilising the Transformer architecture have shown significant performance improvements. However, the high computational cost of multi-head attention, a core component of the Transformer, limits real-time running speed, which is crucial for tracking tasks. Additionally, the global mechanism of multi-head attention makes it susceptible to distractors with semantic information similar to the target. To address these issues, the authors propose a novel adaptive attention mechanism that enhances features through spatially sparse attention at less than 1/4 of the computational complexity of multi-head attention. The adaptive attention sets a perception range around each element in the feature map based on the target scale in the previous tracking result and adaptively searches for the information of interest. This allows the module to focus on the target region rather than background distractors. Based on adaptive attention, the authors build an efficient transformer tracking framework. It performs deep interaction between search and template features to activate target information and aggregates multi-level interaction features to enhance representation ability. Evaluation results on seven benchmarks show that the authors' tracker achieves outstanding performance at 43 fps, with significant advantages in hard circumstances.
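
A minimal sketch of the kind of spatially sparse, window-limited attention the abstract describes is given below: each query position attends only to a local neighbourhood whose radius would, in the tracker, be derived from the previous target scale. The fixed window radius and the unfold-based implementation are assumptions for illustration, not the authors' adaptive attention.

```python
import torch
import torch.nn.functional as F

def local_window_attention(q, k, v, radius: int):
    """Each query position attends only to a (2*radius+1)^2 neighbourhood.

    q, k, v: tensors of shape (B, C, H, W). The window radius stands in for
    the 'perception range' derived from the previous target scale; how that
    radius is chosen is an assumption here, not the paper's rule.
    """
    B, C, H, W = q.shape
    win = 2 * radius + 1
    # Gather the key/value neighbourhood of every position: (B, C, win*win, H*W)
    k_win = F.unfold(k, kernel_size=win, padding=radius).view(B, C, win * win, H * W)
    v_win = F.unfold(v, kernel_size=win, padding=radius).view(B, C, win * win, H * W)
    q_flat = q.view(B, C, 1, H * W)
    # Dot-product similarity between each query and its local keys.
    attn = (q_flat * k_win).sum(dim=1, keepdim=True) / (C ** 0.5)   # (B, 1, win*win, H*W)
    attn = attn.softmax(dim=2)
    out = (attn * v_win).sum(dim=2)                                 # (B, C, H*W)
    return out.view(B, C, H, W)
```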

{"title":"Efficient transformer tracking with adaptive attention","authors":"Dingkun Xiao,&nbsp;Zhenzhong Wei,&nbsp;Guangjun Zhang","doi":"10.1049/cvi2.12315","DOIUrl":"10.1049/cvi2.12315","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <p>Recently, several trackers utilising Transformer architecture have shown significant performance improvement. However, the high computational cost of multi-head attention, a core component in the Transformer, has limited real-time running speed, which is crucial for tracking tasks. Additionally, the global mechanism of multi-head attention makes it susceptible to distractors with similar semantic information to the target. To address these issues, the authors propose a novel adaptive attention that enhances features through the spatial sparse attention mechanism with less than 1/4 of the computational complexity of multi-head attention. Our adaptive attention sets a perception range around each element in the feature map based on the target scale in the previous tracking result and adaptively searches for the information of interest. This allows the module to focus on the target region rather than background distractors. Based on adaptive attention, the authors build an efficient transformer tracking framework. It can perform deep interaction between search and template features to activate target information and aggregate multi-level interaction features to enhance the representation ability. The evaluation results on seven benchmarks show that the authors’ tracker achieves outstanding performance with a speed of 43 fps and significant advantages in hard circumstances.</p>\u0000 </section>\u0000 </div>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1338-1350"},"PeriodicalIF":1.3,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12315","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143248922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-scale feature extraction for energy-efficient object detection in remote sensing images
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-30 | DOI: 10.1049/cvi2.12317
Di Wu, Hongning Liu, Jiawei Xu, Fei Xie

Object detection in remote sensing images aims to interpret images to obtain the category and location of potential targets, which is of great importance in traffic detection, marine supervision, and space reconnaissance. However, the complex backgrounds and large scale variations in remote sensing images present significant challenges. Traditional methods relied mainly on image filtering or feature descriptors to extract features, resulting in underperformance. Deep learning methods, especially one-stage detectors such as the Real-Time Object Detector (RTMDet), offer advanced solutions with efficient network architectures. Nevertheless, the difficulty of extracting features from complex backgrounds and localising targets across large scale variations limits detection accuracy. In this paper, an improved detector based on RTMDet, called the Multi-Scale Feature Extraction-assist RTMDet (MRTMDet), is proposed, which addresses these limitations through enhanced feature extraction and fusion networks. At the core of MRTMDet are a new backbone network, MobileViT++, and a feature fusion network, SFC-FPN, which enhance the model's ability to capture global and multi-scale features by carefully designing a hybrid feature processing unit combining a CNN with a transformer, based on the vision transformer (ViT) and poly-scale convolution (PSConv), respectively. Experiments on DIOR-R demonstrate that MRTMDet achieves a competitive performance of 62.2% mAP, balancing precision with a lightweight design.
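
Poly-scale convolution (PSConv), on which MRTMDet's fusion network builds, mixes several dilation rates within a single convolutional layer; one simple approximation is to give each channel group its own dilation, as sketched below. This grouping scheme is an illustrative simplification, not the original PSConv implementation, which interleaves dilations at a finer granularity.

```python
import torch
import torch.nn as nn

class PolyScaleConvSketch(nn.Module):
    """Rough approximation of poly-scale convolution: different channel groups
    see different dilation rates, so one layer mixes several receptive-field
    sizes. For illustration only."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert in_ch % len(dilations) == 0 and out_ch % len(dilations) == 0
        self.split_size = in_ch // len(dilations)
        self.convs = nn.ModuleList(
            nn.Conv2d(self.split_size, out_ch // len(dilations), kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.split(x, self.split_size, dim=1)      # one chunk per dilation
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)
```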

{"title":"Multi-scale feature extraction for energy-efficient object detection in remote sensing images","authors":"Di Wu,&nbsp;Hongning Liu,&nbsp;Jiawei Xu,&nbsp;Fei Xie","doi":"10.1049/cvi2.12317","DOIUrl":"10.1049/cvi2.12317","url":null,"abstract":"<p>Object detection in remote sensing images aims to interpret images to obtain information on the category and location of potential targets, which is of great importance in traffic detection, marine supervision, and space reconnaissance. However, the complex backgrounds and large scale variations in remote sensing images present significant challenges. Traditional methods relied mainly on image filtering or feature descriptor methods to extract features, resulting in underperformance. Deep learning methods, especially one-stage detectors, for example, the Real-Time Object Detector (RTMDet) offers advanced solutions with efficient network architectures. Nevertheless, difficulty in feature extraction from complex backgrounds and target localisation in scale variations images limits detection accuracy. In this paper, an improved detector based on RTMDet, called the Multi-Scale Feature Extraction-assist RTMDet (MRTMDet), is proposed which address limitations through enhancement feature extraction and fusion networks. At the core of MRTMDet is a new backbone network MobileViT++ and a feature fusion network SFC-FPN, which enhances the model's ability to capture global and multi-scale features by carefully designing a hybrid feature processing unit of CNN and a transformer based on vision transformer (ViT) and poly-scale convolution (PSConv), respectively. The experiment in DIOR-R demonstrated that MRTMDet achieves competitive performance of 62.2% mAP, balancing precision with a lightweight design.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1223-1234"},"PeriodicalIF":1.3,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12317","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143253788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey on person and vehicle re-identification
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1049/cvi2.12316
Zhaofa Wang, Liyang Wang, Zhiping Shi, Miaomiao Zhang, Qichuan Geng, Na Jiang

Person/vehicle re-identification uses technologies such as cross-camera retrieval to associate the same person (or vehicle) across surveillance videos and images captured by different cameras at different locations and times, so as to achieve cross-camera image matching, person retrieval and trajectory tracking. It plays an extremely important role in fields such as intelligent security and criminal investigation. In recent years, the rapid development of deep learning has significantly propelled the advancement of re-identification (Re-ID) technology, and an increasing number of methods have emerged aiming to improve Re-ID performance. This paper summarises four popular research areas in the current field of re-identification, focusing on current research hotspots: multi-task learning, generalisation learning, cross-modality learning, and optimisation learning. Specifically, the paper analyses the challenges faced within these domains and elaborates on the deep learning frameworks and networks that address them. A comparative analysis of re-identification tasks from various classification perspectives is provided, introducing mainstream research directions and current achievements. Finally, insights into future development trends are presented.

{"title":"A survey on person and vehicle re-identification","authors":"Zhaofa Wang,&nbsp;Liyang Wang,&nbsp;Zhiping Shi,&nbsp;Miaomiao Zhang,&nbsp;Qichuan Geng,&nbsp;Na Jiang","doi":"10.1049/cvi2.12316","DOIUrl":"10.1049/cvi2.12316","url":null,"abstract":"<p>Person/vehicle re-identification aims to use technologies such as cross-camera retrieval to associate the same person (same vehicle) in the surveillance videos at different locations, different times, and images captured by different cameras so as to achieve cross-surveillance image matching, person retrieval and trajectory tracking. It plays an extremely important role in the fields of intelligent security, criminal investigation etc. In recent years, the rapid development of deep learning technology has significantly propelled the advancement of re-identification (Re-ID) technology. An increasing number of technical methods have emerged, aiming to enhance Re-ID performance. This paper summarises four popular research areas in the current field of re-identification, focusing on the current research hotspots. These areas include the multi-task learning domain, the generalisation learning domain, the cross-modality domain, and the optimisation learning domain. Specifically, the paper analyses various challenges faced within these domains and elaborates on different deep learning frameworks and networks that address these challenges. A comparative analysis of re-identification tasks from various classification perspectives is provided, introducing mainstream research directions and current achievements. Finally, insights into future development trends are presented.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1235-1268"},"PeriodicalIF":1.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12316","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143253757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Occluded object 6D pose estimation using foreground probability compensation
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-17 | DOI: 10.1049/cvi2.12314
Meihui Ren, Junying Jia, Xin Lu

6D object pose estimation usually refers to acquiring the 6D pose information of 3D objects in the sensor coordinate system using computer vision techniques. However, the task faces numerous challenges due to the complexity of natural scenes. One of the most significant challenges is occlusion, which is an unavoidable situation in 3D scenes and poses a significant obstacle in real-world applications. To tackle this issue, we propose a novel 6D pose estimation algorithm based on RGB-D images, aiming for enhanced robustness in occluded environments. Our approach follows the basic architecture of keypoint-based pose estimation algorithms. To better leverage complementary information of RGB-D data, we introduce a novel foreground probability-guided sampling strategy at the network's input stage. This strategy mitigates the sampling ratio imbalance between foreground and background points due to smaller foreground objects in occluded environments. Moreover, considering the impact of occlusion on semantic segmentation networks, we introduce a new object segmentation module. This module utilises traditional image processing techniques to compensate for severe semantic segmentation errors of deep learning networks. We evaluate our algorithm using the Occlusion LineMOD public dataset. Experimental results demonstrate that our method is more robust in occlusion environments compared to existing state-of-the-art algorithms. It maintains stable performance even in scenarios with no or low occlusion.
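
The foreground-probability-guided sampling idea, biasing point sampling toward likely-foreground pixels so that small occluded objects are not drowned out by background points, can be sketched as follows. Using the foreground probability directly as a sampling weight, and the `fg_weight` knob, are assumptions for illustration rather than the paper's exact strategy.

```python
import numpy as np

def sample_points(points, fg_prob, n_samples=2048, fg_weight=4.0, rng=None):
    """Sample a fixed number of points, biased toward likely-foreground pixels.

    points:  (N, 3) array of 3D points back-projected from the depth map.
    fg_prob: (N,) foreground probability for each point, in [0, 1].
    fg_weight: how strongly foreground probability boosts a point's chance of
               being drawn (an illustrative knob, not from the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = 1.0 + fg_weight * fg_prob          # background keeps a small base weight
    weights /= weights.sum()
    idx = rng.choice(len(points), size=n_samples,
                     replace=len(points) < n_samples, p=weights)
    return points[idx], idx
```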

{"title":"Occluded object 6D pose estimation using foreground probability compensation","authors":"Meihui Ren,&nbsp;Junying Jia,&nbsp;Xin Lu","doi":"10.1049/cvi2.12314","DOIUrl":"10.1049/cvi2.12314","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <p>6D object pose estimation usually refers to acquiring the 6D pose information of 3D objects in the sensor coordinate system using computer vision techniques. However, the task faces numerous challenges due to the complexity of natural scenes. One of the most significant challenges is occlusion, which is an unavoidable situation in 3D scenes and poses a significant obstacle in real-world applications. To tackle this issue, we propose a novel 6D pose estimation algorithm based on RGB-D images, aiming for enhanced robustness in occluded environments. Our approach follows the basic architecture of keypoint-based pose estimation algorithms. To better leverage complementary information of RGB-D data, we introduce a novel foreground probability-guided sampling strategy at the network's input stage. This strategy mitigates the sampling ratio imbalance between foreground and background points due to smaller foreground objects in occluded environments. Moreover, considering the impact of occlusion on semantic segmentation networks, we introduce a new object segmentation module. This module utilises traditional image processing techniques to compensate for severe semantic segmentation errors of deep learning networks. We evaluate our algorithm using the Occlusion LineMOD public dataset. Experimental results demonstrate that our method is more robust in occlusion environments compared to existing state-of-the-art algorithms. It maintains stable performance even in scenarios with no or low occlusion.</p>\u0000 </section>\u0000 </div>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1325-1337"},"PeriodicalIF":1.3,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12314","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143252849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time semantic segmentation network for crops and weeds based on multi-branch structure
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-01 | DOI: 10.1049/cvi2.12311
Yufan Liu, Muhua Liu, Xuhui Zhao, Junlong Zhu, Lin Wang, Hao Ma, Mingchuan Zhang

Weed recognition is an unavoidable problem in smart agriculture; complex backgrounds, insufficient feature information, varying target sizes and overlapping crops and weeds are the main obstacles to efficient recognition. To address these problems, the authors propose a real-time semantic segmentation network based on a multi-branch structure for recognising crops and weeds. First, a new backbone network is constructed to capture feature information between crops and weeds of different sizes. Second, the authors propose a weight refinement fusion (WRF) module to enhance the feature extraction ability for crops and weeds and reduce the interference caused by complex backgrounds. Finally, a Semantic Guided Fusion is devised to enhance the interaction of information between crops and weeds and reduce the interference caused by overlapping targets. The experimental results demonstrate that the proposed network balances speed and accuracy, achieving a Mean IoU (MIoU) of 0.713, 0.802, 0.746 and 0.906 on the sugar beet (BoniRob) dataset, the synthetic BoniRob dataset, the CWFID dataset and a self-labelled wheat dataset, respectively.
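
The abstract names a weight refinement fusion (WRF) module without detailing it; below is a generic learned-weight fusion of two branch features of the kind such a module might perform, included purely as an illustration. The pixel-wise gating design is an assumption, not the paper's WRF.

```python
import torch
import torch.nn as nn

class WeightedFusionSketch(nn.Module):
    """Generic learned-weight fusion of two feature branches.

    A pixel-wise gate decides how much each branch contributes, which is one
    plausible reading of 'weight refinement fusion'; the actual WRF design is
    not specified in the abstract, so this is an assumption.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([a, b], dim=1))   # per-pixel, per-channel weight
        return w * a + (1.0 - w) * b
```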

{"title":"Real-time semantic segmentation network for crops and weeds based on multi-branch structure","authors":"Yufan Liu,&nbsp;Muhua Liu,&nbsp;Xuhui Zhao,&nbsp;Junlong Zhu,&nbsp;Lin Wang,&nbsp;Hao Ma,&nbsp;Mingchuan Zhang","doi":"10.1049/cvi2.12311","DOIUrl":"10.1049/cvi2.12311","url":null,"abstract":"<p>Weed recognition is an inevitable problem in smart agriculture, and to realise efficient weed recognition, complex background, insufficient feature information, varying target sizes and overlapping crops and weeds are the main problems to be solved. To address these problems, the authors propose a real-time semantic segmentation network based on a multi-branch structure for recognising crops and weeds. First, a new backbone network for capturing feature information between crops and weeds of different sizes is constructed. Second, the authors propose a weight refinement fusion (WRF) module to enhance the feature extraction ability of crops and weeds and reduce the interference caused by the complex background. Finally, a Semantic Guided Fusion is devised to enhance the interaction of information between crops and weeds and reduce the interference caused by overlapping goals. The experimental results demonstrate that the proposed network can balance speed and accuracy. Specifically, the 0.713 Mean IoU (MIoU), 0.802 MIoU, 0.746 MIoU and 0.906 MIoU can be achieved on the sugar beet (BoniRob) dataset, synthetic BoniRob dataset, CWFID dataset and self-labelled wheat dataset, respectively.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1313-1324"},"PeriodicalIF":1.3,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12311","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143248083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging modality-specific and shared features for RGB-T salient object detection
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-25 | DOI: 10.1049/cvi2.12307
Shuo Wang, Gang Yang, Qiqi Xu, Xun Dai

Most existing RGB-T salient object detection methods are based on a dual-stream encoding, single-stream decoding network architecture. These models rely on the quality of the fused features, which often emphasise modality-shared features and overlook modality-specific features, thus failing to fully utilise the rich information contained in multi-modality data. To this end, a modality-separate tri-stream net (MSTNet), which consists of a tri-stream encoding (TSE) structure and a tri-stream decoding (TSD) structure, is proposed. The TSE explicitly separates and extracts the modality-shared and modality-specific features to improve the utilisation of multi-modality data. In addition, based on hybrid-attention and cross-attention mechanisms, an enhanced complementary fusion module (ECF) is designed, which fully considers the complementarity between the features to be fused and realises high-quality feature fusion. Furthermore, in the TSD, the quality of uni-modality features is ensured under the constraint of supervision. Finally, to make full use of the rich multi-level and multi-scale decoding features contained in the TSD, the authors design an adaptive multi-scale decoding module and a multi-stream feature aggregation module to improve decoding capability. Extensive experiments on three public datasets show that MSTNet outperforms 14 state-of-the-art methods, demonstrating that it extracts and utilises multi-modality information more adequately and obtains more complete and richer features, thus improving performance. The code will be released at https://github.com/JOOOOKII/MSTNet.
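
The enhanced complementary fusion (ECF) module is said to build on hybrid- and cross-attention; a bare-bones cross-attention exchange between RGB and thermal feature tokens is sketched below to show the general mechanism. The token shapes, head count and averaging step are assumptions for illustration, not the actual ECF module.

```python
import torch
import torch.nn as nn

class CrossAttentionFusionSketch(nn.Module):
    """RGB tokens attend to thermal tokens and vice versa, then the two enhanced
    streams are averaged. A bare-bones stand-in for cross-modal fusion, not the
    paper's ECF module."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.rgb_from_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        # rgb, thermal: (B, N, dim) token sequences from the two encoder streams.
        rgb_enh, _ = self.rgb_from_t(query=rgb, key=thermal, value=thermal)
        t_enh, _ = self.t_from_rgb(query=thermal, key=rgb, value=rgb)
        return 0.5 * (rgb + rgb_enh) + 0.5 * (thermal + t_enh)
```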

{"title":"Leveraging modality-specific and shared features for RGB-T salient object detection","authors":"Shuo Wang,&nbsp;Gang Yang,&nbsp;Qiqi Xu,&nbsp;Xun Dai","doi":"10.1049/cvi2.12307","DOIUrl":"10.1049/cvi2.12307","url":null,"abstract":"<p>Most of the existing RGB-T salient object detection methods are usually based on dual-stream encoding single-stream decoding network architecture. These models always rely on the quality of fusion features, which often focus on modality-shared features and overlook modality-specific features, thus failing to fully utilise the rich information contained in multi-modality data. To this end, a modality separate tri-stream net (MSTNet), which consists of a tri-stream encoding (TSE) structure and a tri-stream decoding (TSD) structure is proposed. The TSE explicitly separates and extracts the modality-shared and modality-specific features to improve the utilisation of multi-modality data. In addition, based on the hybrid-attention and cross-attention mechanism, we design an enhanced complementary fusion module (ECF), which fully considers the complementarity between the features to be fused and realises high-quality feature fusion. Furthermore, in TSD, the quality of uni-modality features is ensured under the constraint of supervision. Finally, to make full use of the rich multi-level and multi-scale decoding features contained in TSD, the authors design the adaptive multi-scale decoding module and the multi-stream feature aggregation module to improve the decoding capability. Extensive experiments on three public datasets show that the MSTNet outperforms 14 state-of-the-art methods, demonstrating that this method can extract and utilise the multi-modality information more adequately and extract more complete and rich features, thus improving the model's performance. The code will be released at https://github.com/JOOOOKII/MSTNet.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1285-1299"},"PeriodicalIF":1.3,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12307","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143253473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SPANet: Spatial perceptual activation network for camouflaged object detection
IF 1.3 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.1049/cvi2.12310
Jianhao Zhang, Gang Yang, Xun Dai, Pengyu Yang

Camouflaged object detection (COD) aims to segment objects embedded in the environment from the background. Most existing methods are easily affected by background interference in cluttered environments and cannot accurately locate camouflage areas, resulting in over-segmentation or incomplete segmentation structures. To effectively improve the performance of COD, we propose a spatial perceptual activation network (SPANet). SPANet extracts the spatial positional relationship between each object in the scene by activating spatial perception and uses it as global information to guide segmentation. It mainly consists of three modules: perceptual activation module (PAM), feature inference module (FIM), and interaction recovery module (IRM). Specifically, the authors design a PAM to model the positional relationship between the camouflaged object and the surrounding environment to obtain semantic correlation information. Then, a FIM that can effectively combine correlation information to suppress background interference and re-encode to generate multi-scale features is proposed. In addition, to further fuse multi-scale features, an IRM to mine the complementary information and differences between features at different scales is designed. Extensive experimental results on four widely used benchmark datasets (i.e. CAMO, CHAMELEON, COD10K, and NC4K) show that the authors’ method outperforms 13 state-of-the-art methods.

{"title":"SPANet: Spatial perceptual activation network for camouflaged object detection","authors":"Jianhao Zhang,&nbsp;Gang Yang,&nbsp;Xun Dai,&nbsp;Pengyu Yang","doi":"10.1049/cvi2.12310","DOIUrl":"10.1049/cvi2.12310","url":null,"abstract":"<p>Camouflaged object detection (COD) aims to segment objects embedded in the environment from the background. Most existing methods are easily affected by background interference in cluttered environments and cannot accurately locate camouflage areas, resulting in over-segmentation or incomplete segmentation structures. To effectively improve the performance of COD, we propose a spatial perceptual activation network (SPANet). SPANet extracts the spatial positional relationship between each object in the scene by activating spatial perception and uses it as global information to guide segmentation. It mainly consists of three modules: perceptual activation module (PAM), feature inference module (FIM), and interaction recovery module (IRM). Specifically, the authors design a PAM to model the positional relationship between the camouflaged object and the surrounding environment to obtain semantic correlation information. Then, a FIM that can effectively combine correlation information to suppress background interference and re-encode to generate multi-scale features is proposed. In addition, to further fuse multi-scale features, an IRM to mine the complementary information and differences between features at different scales is designed. Extensive experimental results on four widely used benchmark datasets (i.e. CAMO, CHAMELEON, COD10K, and NC4K) show that the authors’ method outperforms 13 state-of-the-art methods.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 8","pages":"1300-1312"},"PeriodicalIF":1.3,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12310","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143252909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0