
Latest publications in IET Computer Vision

Context-aware relation enhancement and similarity reasoning for image-text retrieval
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-30 | DOI: 10.1049/cvi2.12270
Zheng Cui, Yongli Hu, Yanfeng Sun, Baocai Yin

Image-text retrieval is a fundamental yet challenging task, which aims to bridge the semantic gap between heterogeneous data to achieve precise measurements of semantic similarity. The technique of fine-grained alignment between cross-modal features plays a key role in various successful methods that have been proposed. Nevertheless, existing methods cannot effectively utilise intra-modal information to enhance feature representation and lack powerful similarity reasoning to get a precise similarity score. To tackle these issues, a context-aware Relation Enhancement and Similarity Reasoning model, called RESR, is proposed, which conducts both intra-modal relation enhancement and inter-modal similarity reasoning while considering global-context information. For intra-modal relation enhancement, a novel context-aware graph convolutional network is introduced to enhance local feature representations by utilising relation and global-context information. For inter-modal similarity reasoning, local and global similarity features are exploited by the bidirectional alignment of image and text, and the similarity reasoning is implemented among multi-granularity similarity features. Finally, refined local and global similarity features are adaptively fused to get a precise similarity score. The experimental results show that the proposed model outperforms several state-of-the-art approaches, achieving average improvements of 2.5% and 6.3% in R@sum on the Flickr30K and MS-COCO datasets, respectively.
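The adaptive fusion step described above can be pictured as a learned gate that weighs refined local against global similarity features before producing the final score. Below is a minimal PyTorch sketch of that idea; the module name, feature dimension and gating layout are illustrative assumptions, not the RESR implementation.

```python
import torch
import torch.nn as nn

class AdaptiveSimilarityFusion(nn.Module):
    """Toy gated fusion of local and global similarity features (hypothetical layout)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid())
        self.score_local = nn.Linear(dim, 1)   # maps the local similarity feature to a scalar score
        self.score_global = nn.Linear(dim, 1)  # maps the global similarity feature to a scalar score

    def forward(self, sim_local: torch.Tensor, sim_global: torch.Tensor) -> torch.Tensor:
        # sim_local, sim_global: (batch, dim) refined similarity features
        g = self.gate(torch.cat([sim_local, sim_global], dim=-1))  # (batch, 1) fusion weight
        return g * self.score_local(sim_local) + (1 - g) * self.score_global(sim_global)

scores = AdaptiveSimilarityFusion()(torch.randn(4, 256), torch.randn(4, 256))  # (4, 1) similarity scores
```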

{"title":"Context-aware relation enhancement and similarity reasoning for image-text retrieval","authors":"Zheng Cui,&nbsp;Yongli Hu,&nbsp;Yanfeng Sun,&nbsp;Baocai Yin","doi":"10.1049/cvi2.12270","DOIUrl":"10.1049/cvi2.12270","url":null,"abstract":"<p>Image-text retrieval is a fundamental yet challenging task, which aims to bridge a semantic gap between heterogeneous data to achieve precise measurements of semantic similarity. The technique of fine-grained alignment between cross-modal features plays a key role in various successful methods that have been proposed. Nevertheless, existing methods cannot effectively utilise intra-modal information to enhance feature representation and lack powerful similarity reasoning to get a precise similarity score. Intending to tackle these issues, a context-aware Relation Enhancement and Similarity Reasoning model, called RESR, is proposed, which conducts both intra-modal relation enhancement and inter-modal similarity reasoning while considering the global-context information. For intra-modal relation enhancement, a novel context-aware graph convolutional network is introduced to enhance local feature representations by utilising relation and global-context information. For inter-modal similarity reasoning, local and global similarity features are exploited by the bidirectional alignment of image and text, and the similarity reasoning is implemented among multi-granularity similarity features. Finally, refined local and global similarity features are adaptively fused to get a precise similarity score. The experimental results show that our effective model outperforms some state-of-the-art approaches, achieving average improvements of 2.5% and 6.3% in R@sum on the Flickr30K and MS-COCO dataset.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"652-665"},"PeriodicalIF":1.5,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12270","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140483593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
OmDet: Large-scale vision-language multi-dataset pre-training with multimodal detection network
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-24 | DOI: 10.1049/cvi2.12268
Tiancheng Zhao, Peng Liu, Kyusong Lee

The advancement of object detection (OD) in open-vocabulary and open-world scenarios is a critical challenge in computer vision. OmDet, a novel language-aware object detection architecture with an innovative training mechanism that harnesses continual learning and multi-dataset vision-language pre-training, is introduced. Leveraging natural language as a universal knowledge representation, OmDet accumulates “visual vocabularies” from diverse datasets, unifying the task as a language-conditioned detection framework. The multimodal detection network (MDN) overcomes the challenges of multi-dataset joint training and generalises to numerous training datasets without manual label taxonomy merging. The authors demonstrate superior performance of OmDet over strong baselines in object detection in the wild, open-vocabulary detection, and phrase grounding, achieving state-of-the-art results. Ablation studies reveal the impact of scaling the pre-training visual vocabulary, indicating a promising direction for further expansion to larger datasets. The effectiveness of the deep fusion approach is underscored by its ability to learn jointly from multiple datasets, enhancing performance through knowledge sharing.
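The “language-conditioned detection” idea — treating the class set as free-form text and scoring detected regions against its embeddings — can be illustrated with a short sketch. This is a generic cosine-similarity scoring step under assumed tensor shapes, not OmDet's multimodal detection network.

```python
import torch
import torch.nn.functional as F

def language_conditioned_scores(region_feats: torch.Tensor, label_embeds: torch.Tensor) -> torch.Tensor:
    """Score each detected region against free-form label embeddings (generic sketch, not OmDet's MDN)."""
    region_feats = F.normalize(region_feats, dim=-1)  # (num_regions, d) visual features
    label_embeds = F.normalize(label_embeds, dim=-1)  # (num_labels, d) text embeddings of the current vocabulary
    return region_feats @ label_embeds.t()            # (num_regions, num_labels) cosine logits

logits = language_conditioned_scores(torch.randn(100, 512), torch.randn(20, 512))
```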

{"title":"OmDet: Large-scale vision-language multi-dataset pre-training with multimodal detection network","authors":"Tiancheng Zhao,&nbsp;Peng Liu,&nbsp;Kyusong Lee","doi":"10.1049/cvi2.12268","DOIUrl":"10.1049/cvi2.12268","url":null,"abstract":"<p>The advancement of object detection (OD) in open-vocabulary and open-world scenarios is a critical challenge in computer vision. OmDet, a novel language-aware object detection architecture and an innovative training mechanism that harnesses continual learning and multi-dataset vision-language pre-training is introduced. Leveraging natural language as a universal knowledge representation, OmDet accumulates “visual vocabularies” from diverse datasets, unifying the task as a language-conditioned detection framework. The multimodal detection network (MDN) overcomes the challenges of multi-dataset joint training and generalizes to numerous training datasets without manual label taxonomy merging. The authors demonstrate superior performance of OmDet over strong baselines in object detection in the wild, open-vocabulary detection, and phrase grounding, achieving state-of-the-art results. Ablation studies reveal the impact of scaling the pre-training visual vocabulary, indicating a promising direction for further expansion to larger datasets. The effectiveness of our deep fusion approach is underscored by its ability to learn jointly from multiple datasets, enhancing performance through knowledge sharing.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"626-639"},"PeriodicalIF":1.5,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12268","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139601188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SIANet: 3D object detection with structural information augment network
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-23 | DOI: 10.1049/cvi2.12272
Jing Zhou, Tengxing Lin, Zixin Gong, Xinhan Huang

3D object detection technology from point clouds has been widely applied in the field of automatic driving in recent years. In practical applications, the shape point clouds of some objects are incomplete due to occlusion or far distance, which means they suffer from insufficient structural information. This greatly affects the detection performance. To address this challenge, the authors design a Structural Information Augment (SIA) Network for 3D object detection, named SIANet. Specifically, the authors design a SIA module to reconstruct the complete shapes of objects within proposals for enhancing their geometric features, which are further fused into the spatial feature of the object for box refinement to predict accurate detection boxes. Besides, the authors construct a novel UNet-like Context-enhanced Transformer backbone network, which stacks Context-enhanced Transformer modules and an upsampling branch to capture contextual information efficiently and generate high-quality proposals for the SIA module. Extensive experiments show that the authors’ well-designed SIANet can effectively improve detection performance, especially surpassing the baseline network by a 1.04% mean Average Precision (mAP) gain on the KITTI dataset and a 0.75% LEVEL_2 mAP gain on the Waymo dataset.
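As a rough picture of how a structural-information augment step might work, the sketch below predicts a completed set of shape points from a pooled proposal feature and fuses their encoding back into that feature. All layer sizes and the module layout are hypothetical; the paper's SIA module is more elaborate.

```python
import torch
import torch.nn as nn

class StructureAugment(nn.Module):
    """Toy stand-in for a structural-information augment step: predict completed shape points
    from a proposal's pooled feature and fuse their encoding back into that feature."""
    def __init__(self, feat_dim: int = 256, num_points: int = 64):
        super().__init__()
        self.shape_head = nn.Linear(feat_dim, num_points * 3)  # predicted (x, y, z) shape points
        self.point_enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, proposal_feat: torch.Tensor) -> torch.Tensor:
        b = proposal_feat.size(0)
        pts = self.shape_head(proposal_feat).view(b, -1, 3)        # (b, num_points, 3) completed shape
        geo = self.point_enc(pts).max(dim=1).values                # permutation-invariant pooling
        return self.fuse(torch.cat([proposal_feat, geo], dim=-1))  # refined proposal feature

refined = StructureAugment()(torch.randn(16, 256))
```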

{"title":"SIANet: 3D object detection with structural information augment network","authors":"Jing Zhou,&nbsp;Tengxing Lin,&nbsp;Zixin Gong,&nbsp;Xinhan Huang","doi":"10.1049/cvi2.12272","DOIUrl":"10.1049/cvi2.12272","url":null,"abstract":"<p>3D object detection technology from point clouds has been widely applied in the field of automatic driving in recent years. In practical applications, the shape point clouds of some objects are incomplete due to occlusion or far distance, which means they suffer from insufficient structural information. This greatly affects the detection performance. To address this challenge, the authors design a Structural Information Augment (SIA) Network for 3D object detection, named SIANet. Specifically, the authors design a SIA module to reconstruct the complete shapes of objects within proposals for enhancing their geometric features, which are further fused into the spatial feature of the object for box refinement to predict accurate detection boxes. Besides, the authors construct a novel Unet-liked Context-enhanced Transformer backbone network, which stacks Context-enhanced Transformer modules and an upsampling branch to capture contextual information efficiently and generate high-quality proposals for the SIA module. Extensive experiments show that the authors’ well-designed SIANet can effectively improve detection performance, especially surpassing the baseline network by 1.04% mean Average Precision (mAP) gain in the KITTI dataset and 0.75% LEVEL_2 mAP gain in the Waymo dataset.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"682-695"},"PeriodicalIF":1.5,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12272","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139604878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Adversarial catoptric light: An effective, stealthy and robust physical-world attack to DNNs
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-18 | DOI: 10.1049/cvi2.12264
Chengyin Hu, Weiwen Shi, Ling Tian, Wen Li

Recent studies have demonstrated that finely tuned deep neural networks (DNNs) are susceptible to adversarial attacks. Conventional physical attacks employ stickers as perturbations, achieving robust adversarial effects but compromising stealthiness. Recent innovations utilise light beams, such as lasers and projectors, for perturbation generation, allowing for stealthy physical attacks at the expense of robustness. In pursuit of implementing both stealthy and robust physical attacks, the authors present an adversarial catoptric light (AdvCL). This method leverages the natural phenomenon of catoptric light to generate perturbations that are both natural and stealthy. AdvCL first formalises the physical parameters of catoptric light and then optimises these parameters using a genetic algorithm to derive the most adversarial perturbation. Finally, the perturbations are deployed in the physical scene to execute stealthy and robust attacks. The proposed method is evaluated across three dimensions: effectiveness, stealthiness, and robustness. Quantitative results obtained in simulated environments demonstrate the efficacy of the proposed method, achieving an attack success rate of 83.5%, surpassing the baseline. The authors utilise common catoptric light as a perturbation to enhance the method's stealthiness, rendering physical samples more natural in appearance. Robustness is affirmed by successfully attacking advanced DNNs with a success rate exceeding 80% in all cases. Additionally, the authors discuss defence strategies against AdvCL and introduce some light-based physical attacks.
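The optimisation loop — a genetic algorithm searching physical light parameters that minimise the classifier's confidence in the true class — can be sketched as below. The spot parameterisation (centre, radius, intensity), the `predict_proba` callable and all hyper-parameters are assumptions for illustration, not AdvCL's actual formalisation.

```python
import numpy as np

# Hypothetical parameterisation: each candidate is (cx, cy, radius, intensity) of one reflected-light spot.
def apply_light(image: np.ndarray, p: np.ndarray) -> np.ndarray:
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((xx - p[0] * w) ** 2 + (yy - p[1] * h) ** 2) < (p[2] * min(h, w)) ** 2
    out = image.copy()
    out[mask] = np.clip(out[mask] + p[3], 0.0, 1.0)  # brighten the simulated spot region
    return out

def fitness(image, true_label, predict_proba, p):
    # predict_proba is an assumed callable returning class probabilities for one image
    return -predict_proba(apply_light(image, p))[true_label]  # lower true-class confidence = fitter

def genetic_attack(image, true_label, predict_proba, pop=20, gens=50, rng=np.random.default_rng(0)):
    lo, hi = [0, 0, 0.05, 0.1], [1, 1, 0.3, 0.8]
    P = rng.uniform(lo, hi, size=(pop, 4))
    for _ in range(gens):
        f = np.array([fitness(image, true_label, predict_proba, p) for p in P])
        parents = P[np.argsort(f)[-pop // 2:]]                   # keep the fitter half
        children = parents + rng.normal(0, 0.05, parents.shape)  # Gaussian mutation
        P = np.clip(np.vstack([parents, children]), lo, hi)
    return P[np.argmax([fitness(image, true_label, predict_proba, p) for p in P])]
```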

{"title":"Adversarial catoptric light: An effective, stealthy and robust physical-world attack to DNNs","authors":"Chengyin Hu,&nbsp;Weiwen Shi,&nbsp;Ling Tian,&nbsp;Wen Li","doi":"10.1049/cvi2.12264","DOIUrl":"10.1049/cvi2.12264","url":null,"abstract":"<p>Recent studies have demonstrated that finely tuned deep neural networks (DNNs) are susceptible to adversarial attacks. Conventional physical attacks employ stickers as perturbations, achieving robust adversarial effects but compromising stealthiness. Recent innovations utilise light beams, such as lasers and projectors, for perturbation generation, allowing for stealthy physical attacks at the expense of robustness. In pursuit of implementing both stealthy and robust physical attacks, the authors present an adversarial catoptric light (AdvCL). This method leverages the natural phenomenon of catoptric light to generate perturbations that are both natural and stealthy. AdvCL first formalises the physical parameters of catoptric light and then optimises these parameters using a genetic algorithm to derive the most adversarial perturbation. Finally, the perturbations are deployed in the physical scene to execute stealthy and robust attacks. The proposed method is evaluated across three dimensions: effectiveness, stealthiness, and robustness. Quantitative results obtained in simulated environments demonstrate the efficacy of the proposed method, achieving an attack success rate of 83.5%, surpassing the baseline. The authors utilise common catoptric light as a perturbation to enhance the method's stealthiness, rendering physical samples more natural in appearance. Robustness is affirmed by successfully attacking advanced DNNs with a success rate exceeding 80% in all cases. Additionally, the authors discuss defence strategies against AdvCL and introduce some light-based physical attacks.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"557-573"},"PeriodicalIF":1.5,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12264","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139614963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A novel multi-model 3D object detection framework with adaptive voxel-image feature fusion
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-17 | DOI: 10.1049/cvi2.12269
Zhao Liu, Zhongliang Fu, Gang Li, Shengyuan Zhang

The multifaceted nature of sensor data has long been a hurdle for those seeking to harness its full potential in the field of 3D object detection. Although the utilisation of point clouds as input has yielded exceptional results, the challenge of effectively combining the complementary properties of multi-sensor data looms large. This work presents a new approach to multi-model 3D object detection, called adaptive voxel-image feature fusion (AVIFF). AVIFF is an end-to-end single-shot framework that can dynamically and adaptively fuse point cloud and image features, resulting in a more comprehensive and integrated analysis of the camera and LiDAR sensor data. With the aid of the adaptive feature fusion module, spatialised image features can be adroitly fused with voxel-based point cloud features, while the Dense Fusion module ensures the preservation of the distinctive characteristics of 3D point cloud data through the use of a heterogeneous architecture. Notably, the authors’ framework features a novel generalised intersection over union loss function that enhances the perceptibility of object localisation and rotation in 3D space. Comprehensive experimentation has validated the efficacy of the authors’ proposed modules, firmly establishing AVIFF as a novel framework in the field of 3D object detection.
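The generalised intersection-over-union idea referenced above penalises box pairs by the empty area of their smallest enclosing hull as well as rewarding overlap. A 2D axis-aligned simplification is sketched below; the paper applies the notion to 3D boxes with rotation, which this toy version does not cover.

```python
import torch

def generalized_iou(b1: torch.Tensor, b2: torch.Tensor) -> torch.Tensor:
    """GIoU for axis-aligned (x1, y1, x2, y2) boxes; a 2D simplification of the idea."""
    inter_wh = (torch.min(b1[..., 2:], b2[..., 2:]) - torch.max(b1[..., :2], b2[..., :2])).clamp(min=0)
    inter = inter_wh[..., 0] * inter_wh[..., 1]
    area1 = (b1[..., 2] - b1[..., 0]) * (b1[..., 3] - b1[..., 1])
    area2 = (b2[..., 2] - b2[..., 0]) * (b2[..., 3] - b2[..., 1])
    union = area1 + area2 - inter
    hull_wh = torch.max(b1[..., 2:], b2[..., 2:]) - torch.min(b1[..., :2], b2[..., :2])
    hull = hull_wh[..., 0] * hull_wh[..., 1]
    return inter / union - (hull - union) / hull  # in (-1, 1]; the training loss is 1 - GIoU

loss = 1 - generalized_iou(torch.tensor([[0., 0., 2., 2.]]), torch.tensor([[1., 1., 3., 3.]]))
```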

{"title":"A novel multi-model 3D object detection framework with adaptive voxel-image feature fusion","authors":"Zhao Liu,&nbsp;Zhongliang Fu,&nbsp;Gang Li,&nbsp;Shengyuan Zhang","doi":"10.1049/cvi2.12269","DOIUrl":"10.1049/cvi2.12269","url":null,"abstract":"<p>The multifaceted nature of sensor data has long been a hurdle for those seeking to harness its full potential in the field of 3D object detection. Although the utilisation of point clouds as input has yielded exceptional results, the challenge of effectively combining the complementary properties of multi-sensor data looms large. This work presents a new approach to multi-model 3D object detection, called adaptive voxel-image feature fusion (AVIFF). Adaptive voxel-image feature fusion is an end-to-end single-shot framework that can dynamically and adaptively fuse point cloud and image features, resulting in a more comprehensive and integrated analysis of the camera sensor and the LiDar sensor data. With the aid of the adaptive feature fusion module, spatialised image features can be adroitly fused with voxel-based point cloud features, while the Dense Fusion module ensures the preservation of the distinctive characteristics of 3D point cloud data through the use of a heterogeneous architecture. Notably, the authors’ framework features a novel generalised intersection over union loss function that enhances the perceptibility of object localsation and rotation in 3D space. Comprehensive experimentation has validated the efficacy of the authors’ proposed modules, firmly establishing AVIFF as a novel framework in the field of 3D object detection.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"640-651"},"PeriodicalIF":1.5,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12269","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139616930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multi-Scale Feature Attention-DEtection TRansformer: Multi-Scale Feature Attention for security check object detection
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-16 | DOI: 10.1049/cvi2.12267
Haifeng Sima, Bailiang Chen, Chaosheng Tang, Yudong Zhang, Junding Sun

X-ray security checks aim to detect contraband in luggage; however, detection accuracy is hindered by overlapping objects and significant size differences in X-ray images. To address these challenges, the authors introduce a novel network model named Multi-Scale Feature Attention (MSFA)-DEtection TRansformer (DETR). Firstly, the pyramid feature extraction structure is embedded into the self-attention module, referred to as the MSFA. Leveraging the MSFA module, MSFA-DETR extracts multi-scale feature information and amalgamates it into high-level semantic features. Subsequently, these features are synergised through attention mechanisms to capture correlations between global information and multi-scale features. MSFA significantly bolsters the model's robustness across different sizes, thereby enhancing detection accuracy. Simultaneously, a new initialisation method for object queries is proposed. The authors’ foreground sequence extraction (FSE) module extracts key feature sequences from feature maps, serving as prior knowledge for object queries. FSE expedites the convergence of the DETR model and elevates detection accuracy. Extensive experimentation validates that the proposed model surpasses state-of-the-art methods on the CLCXray and PIDray datasets.
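One way to picture foreground-sequence-style query initialisation is to rank feature-map positions by a learned objectness score and take the top-k position features as the initial object queries. The sketch below does that under assumed shapes; it illustrates the general idea rather than the paper's FSE module.

```python
import torch
import torch.nn as nn

class ForegroundQueryInit(nn.Module):
    """Pick the k most 'foreground-like' positions of a feature map as initial object queries (sketch)."""
    def __init__(self, dim: int = 256, num_queries: int = 100):
        super().__init__()
        self.objectness = nn.Conv2d(dim, 1, kernel_size=1)  # per-position foreground score
        self.num_queries = num_queries

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        scores = self.objectness(feat).flatten(2)             # (b, 1, h*w)
        topk = scores.topk(self.num_queries, dim=-1).indices  # (b, 1, k)
        seq = feat.flatten(2).transpose(1, 2)                 # (b, h*w, c)
        idx = topk.squeeze(1).unsqueeze(-1).expand(-1, -1, c)  # (b, k, c)
        return seq.gather(1, idx)                              # (b, k, c) query initialisation

queries = ForegroundQueryInit()(torch.randn(2, 256, 32, 32))  # assumes h*w >= num_queries
```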

{"title":"Multi-Scale Feature Attention-DEtection TRansformer: Multi-Scale Feature Attention for security check object detection","authors":"Haifeng Sima,&nbsp;Bailiang Chen,&nbsp;Chaosheng Tang,&nbsp;Yudong Zhang,&nbsp;Junding Sun","doi":"10.1049/cvi2.12267","DOIUrl":"10.1049/cvi2.12267","url":null,"abstract":"<p>X-ray security checks aim to detect contraband in luggage; however, the detection accuracy is hindered by the overlapping and significant size differences of objects in X-ray images. To address these challenges, the authors introduce a novel network model named Multi-Scale Feature Attention (MSFA)-DEtection TRansformer (DETR). Firstly, the pyramid feature extraction structure is embedded into the self-attention module, referred to as the MSFA. Leveraging the MSFA module, MSFA-DETR extracts multi-scale feature information and amalgamates them into high-level semantic features. Subsequently, these features are synergised through attention mechanisms to capture correlations between global information and multi-scale features. MSFA significantly bolsters the model's robustness across different sizes, thereby enhancing detection accuracy. Simultaneously, A new initialisation method for object queries is proposed. The authors’ foreground sequence extraction (FSE) module extracts key feature sequences from feature maps, serving as prior knowledge for object queries. FSE expedites the convergence of the DETR model and elevates detection accuracy. Extensive experimentation validates that this proposed model surpasses state-of-the-art methods on the CLCXray and PIDray datasets.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"613-625"},"PeriodicalIF":1.5,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12267","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139620312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Clean, performance-robust, and performance-sensitive historical information based adversarial self-distillation
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-08 | DOI: 10.1049/cvi2.12265
Shuyi Li, Hongchao Hu, Shumin Huo, Hao Liang

Adversarial training suffers from poor effectiveness due to the challenging optimisation of loss with hard labels. To address this issue, adversarial distillation has emerged as a potential solution, encouraging target models to mimic the output of the teachers. However, reliance on pre-training teachers leads to additional training costs and raises concerns about the reliability of their knowledge. Furthermore, existing methods fail to consider the significant differences in unconfident samples between early and late stages, potentially resulting in robust overfitting. An adversarial defence method named Clean, Performance-robust, and Performance-sensitive Historical Information based Adversarial Self-Distillation (CPr & PsHI-ASD) is presented. Firstly, an adversarial self-distillation replacement method based on clean, performance-robust, and performance-sensitive historical information is developed to eliminate pre-training costs and enhance guidance reliability for the target model. Secondly, adversarial self-distillation algorithms that leverage knowledge distilled from the previous iteration are introduced to facilitate the self-distillation of adversarial knowledge and mitigate the problem of robust overfitting. Experiments are conducted to evaluate the performance of the proposed method on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. The results demonstrate that the CPr&PsHI-ASD method is more effective than existing adversarial distillation methods in enhancing adversarial robustness and mitigating robust overfitting issues against various adversarial attacks.
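The core of self-distillation from historical information is to treat soft outputs saved from an earlier iteration as the teacher. A minimal loss of that kind — blending hard-label cross-entropy with a temperature-scaled KL term against previous-iteration logits — is sketched below; the weighting and the choice of which historical outputs to trust are assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def self_distill_loss(logits_adv, logits_prev, labels, alpha=0.7, tau=2.0):
    """Blend hard-label CE with a KL term to soft targets saved from a previous iteration (sketch)."""
    ce = F.cross_entropy(logits_adv, labels)
    kl = F.kl_div(F.log_softmax(logits_adv / tau, dim=-1),
                  F.softmax(logits_prev.detach() / tau, dim=-1),
                  reduction="batchmean") * tau * tau
    return (1 - alpha) * ce + alpha * kl

loss = self_distill_loss(torch.randn(8, 10), torch.randn(8, 10), torch.randint(0, 10, (8,)))
```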

{"title":"Clean, performance-robust, and performance-sensitive historical information based adversarial self-distillation","authors":"Shuyi Li,&nbsp;Hongchao Hu,&nbsp;Shumin Huo,&nbsp;Hao Liang","doi":"10.1049/cvi2.12265","DOIUrl":"10.1049/cvi2.12265","url":null,"abstract":"<p>Adversarial training suffers from poor effectiveness due to the challenging optimisation of loss with hard labels. To address this issue, adversarial distillation has emerged as a potential solution, encouraging target models to mimic the output of the teachers. However, reliance on pre-training teachers leads to additional training costs and raises concerns about the reliability of their knowledge. Furthermore, existing methods fail to consider the significant differences in unconfident samples between early and late stages, potentially resulting in robust overfitting. An adversarial defence method named Clean, Performance-robust, and Performance-sensitive Historical Information based Adversarial Self-Distillation (CPr &amp; PsHI-ASD) is presented. Firstly, an adversarial self-distillation replacement method based on clean, performance-robust, and performance-sensitive historical information is developed to eliminate pre-training costs and enhance guidance reliability for the target model. Secondly, adversarial self-distillation algorithms that leverage knowledge distilled from the previous iteration are introduced to facilitate the self-distillation of adversarial knowledge and mitigate the problem of robust overfitting. Experiments are conducted to evaluate the performance of the proposed method on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. The results demonstrate that the CPr&amp;PsHI-ASD method is more effective than existing adversarial distillation methods in enhancing adversarial robustness and mitigating robust overfitting issues against various adversarial attacks.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"591-612"},"PeriodicalIF":1.5,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12265","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139446540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A deep learning framework for multi-object tracking in team sports videos
IF 1.5 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-02 | DOI: 10.1049/cvi2.12266
Wei Cao, Xiaoyong Wang, Xianxiang Liu, Yishuai Xu

In response to the challenges of Multi-Object Tracking (MOT) in sports scenes, such as severe occlusions, similar appearances, drastic pose changes, and complex motion patterns, a deep-learning framework CTGMOT (CNN-Transformer-GNN-based MOT) specifically for multiple athlete tracking in sports videos that performs joint modelling of detection, appearance and motion features is proposed. Firstly, a detection network that combines Convolutional Neural Networks (CNN) and Transformers is constructed to extract both local and global features from images. The fusion of appearance and motion features is achieved through a design of parallel dual-branch decoders. Secondly, graph models are built using Graph Neural Networks (GNN) to accurately capture the spatio-temporal correlations between object and trajectory features from inter-frame and intra-frame associations. Experimental results on the public sports tracking dataset SportsMOT show that the proposed framework outperforms other state-of-the-art methods for MOT in complex sport scenes. In addition, the proposed framework shows excellent generality on benchmark datasets MOT17 and MOT20.
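For context, the association step that the paper handles with graph reasoning is often approximated by a plain cost matrix built from appearance and overlap affinities and solved with the Hungarian algorithm. The sketch below is that simplified stand-in, not the paper's GNN; the weighting and box format are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, track_boxes, det_boxes, w_app=0.7):
    """Simplified detection-to-track association from appearance + overlap affinities (not CTGMOT's GNN)."""
    app = (track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)) @ \
          (det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)).T  # cosine similarity
    iou = np.zeros((len(track_boxes), len(det_boxes)))
    for i, t in enumerate(track_boxes):
        for j, d in enumerate(det_boxes):
            x1, y1 = np.maximum(t[:2], d[:2])
            x2, y2 = np.minimum(t[2:], d[2:])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            union = (t[2] - t[0]) * (t[3] - t[1]) + (d[2] - d[0]) * (d[3] - d[1]) - inter
            iou[i, j] = inter / union if union > 0 else 0.0
    cost = -(w_app * app + (1 - w_app) * iou)   # negate affinity so Hungarian minimisation maximises it
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```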

{"title":"A deep learning framework for multi-object tracking in team sports videos","authors":"Wei Cao,&nbsp;Xiaoyong Wang,&nbsp;Xianxiang Liu,&nbsp;Yishuai Xu","doi":"10.1049/cvi2.12266","DOIUrl":"10.1049/cvi2.12266","url":null,"abstract":"<p>In response to the challenges of Multi-Object Tracking (MOT) in sports scenes, such as severe occlusions, similar appearances, drastic pose changes, and complex motion patterns, a deep-learning framework CTGMOT (CNN-Transformer-GNN-based MOT) specifically for multiple athlete tracking in sports videos that performs joint modelling of detection, appearance and motion features is proposed. Firstly, a detection network that combines Convolutional Neural Networks (CNN) and Transformers is constructed to extract both local and global features from images. The fusion of appearance and motion features is achieved through a design of parallel dual-branch decoders. Secondly, graph models are built using Graph Neural Networks (GNN) to accurately capture the spatio-temporal correlations between object and trajectory features from inter-frame and intra-frame associations. Experimental results on the public sports tracking dataset SportsMOT show that the proposed framework outperforms other state-of-the-art methods for MOT in complex sport scenes. In addition, the proposed framework shows excellent generality on benchmark datasets MOT17 and MOT20.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 5","pages":"574-590"},"PeriodicalIF":1.5,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12266","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139453061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Spatial feature embedding for robust visual object tracking
IF 1.7 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-20 | DOI: 10.1049/cvi2.12263
Kang Liu, Long Liu, Shangqi Yang, Zhihao Fu

Recently, the offline-trained Siamese pipeline has drawn wide attention due to its outstanding tracking performance. However, existing Siamese trackers rely on offline training to extract ‘universal’ features, which is insufficient to effectively distinguish the target from fluctuating interference when embedding the information of the two branches, leading to inaccurate classification and localisation. In addition, Siamese trackers employ a pre-defined scale for cropping the search candidate region based on the previous frame's result, which can easily introduce redundant background noise (clutter, similar objects etc.), affecting the tracker's robustness. To solve these problems, the authors propose two novel sub-networks for spatial feature embedding to achieve robust object tracking. Specifically, the proposed spatial remapping (SRM) network enhances the feature discrepancy between target and distractor categories by online remapping, and improves the discriminative ability of the tracker on the embedding space. MAML is used to optimise the SRM network to ensure its adaptability to complex tracking scenarios. Moreover, a temporal information proposal-guided (TPG) network that utilises a GRU model to dynamically predict the search scale based on temporal motion states to reduce potential background interference is introduced. The two proposed sub-networks are integrated into two popular trackers, SiamFC++ and TransT, denoted as SiamSRMC and SiamSRMT respectively, which achieve superior performance on six challenging benchmarks: OTB100, VOT2019, UAV123, GOT10K, TrackingNet and LaSOT. Moreover, the proposed trackers obtain competitive tracking performance compared with state-of-the-art trackers on the background clutter and similar object attributes, validating the effectiveness of the method.
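The temporal-proposal idea of predicting the next search scale from recent motion can be pictured with a small GRU over past box states. The sketch below is a hypothetical version of such a predictor; the state encoding, hidden size and output range are assumptions rather than the TPG network's design.

```python
import torch
import torch.nn as nn

class SearchScalePredictor(nn.Module):
    """Predict the next search-region scale from a short history of box states with a GRU (sketch)."""
    def __init__(self, state_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, box_history: torch.Tensor) -> torch.Tensor:
        # box_history: (batch, T, 4) past boxes as (cx, cy, w, h), e.g. normalised to the frame size
        _, h_n = self.gru(box_history)
        return 1.0 + torch.sigmoid(self.head(h_n[-1]))  # scale factor in (1, 2) around the last box

scale = SearchScalePredictor()(torch.randn(1, 8, 4))
```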

{"title":"Spatial feature embedding for robust visual object tracking","authors":"Kang Liu,&nbsp;Long Liu,&nbsp;Shangqi Yang,&nbsp;Zhihao Fu","doi":"10.1049/cvi2.12263","DOIUrl":"10.1049/cvi2.12263","url":null,"abstract":"<p>Recently, the offline-trained Siamese pipeline has drawn wide attention due to its outstanding tracking performance. However, the existing Siamese trackers utilise offline training to extract ‘universal’ features, which is insufficient to effectively distinguish between the target and fluctuating interference in embedding the information of the two branches, leading to inaccurate classification and localisation. In addition, the Siamese trackers employ a pre-defined scale for cropping the search candidate region based on the previous frame's result, which might easily introduce redundant background noise (clutter, similar objects etc.), affecting the tracker's robustness. To solve these problems, the authors propose two novel sub-network spatial employed to spatial feature embedding for robust object tracking. Specifically, the proposed spatial remapping (SRM) network enhances the feature discrepancy between target and distractor categories by online remapping, and improves the discriminant ability of the tracker on the embedding space. The MAML is used to optimise the SRM network to ensure its adaptability to complex tracking scenarios. Moreover, a temporal information proposal-guided (TPG) network that utilises a GRU model to dynamically predict the search scale based on temporal motion states to reduce potential background interference is introduced. The proposed two network is integrated into two popular trackers, namely SiamFC++ and TransT, which achieve superior performance on six challenging benchmarks, including OTB100, VOT2019, UAV123, GOT10K, TrackingNet and LaSOT, TrackingNet and LaSOT denoting them as SiamSRMC and SiamSRMT, respectively. Moreover, the proposed trackers obtain competitive tracking performance compared with the state-of-the-art trackers in the attribute of background clutter and similar object, validating the effectiveness of our method.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 4","pages":"540-556"},"PeriodicalIF":1.7,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12263","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138954945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Unsupervised image blind super resolution via real degradation feature learning
IF 1.7 | Tier 4, Computer Science | Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-15 | DOI: 10.1049/cvi2.12262
Cheng Yang, Guanming Lu

In recent years, many methods for image super-resolution (SR) have relied on pairs of low-resolution (LR) and high-resolution (HR) images for training, where the degradation process is predefined by bicubic downsampling. While such approaches perform well in standard benchmark tests, they often fail to accurately replicate the complexity of real-world image degradation. To address this challenge, researchers have proposed the use of unpaired image training to implicitly model the degradation process. However, there is a significant domain gap between the real-world LR and the synthetic LR images from HR, which severely degrades the SR performance. A novel unsupervised image-blind super-resolution method that exploits degradation feature-based learning for real-image super-resolution reconstruction (RDFL) is proposed. Their approach learns the degradation process from HR to LR using a generative adversarial network (GAN) and constrains the data distribution of the synthetic LR with real degraded images. The authors then encode the degraded features into a Transformer-based SR network for image super-resolution reconstruction through degradation representation learning. Extensive experiments on both synthetic and real datasets demonstrate the effectiveness and superiority of the RDFL method, which achieves visually pleasing reconstruction results.
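The degradation-learning stage — a generator that maps HR images to synthetic LR images whose distribution a discriminator cannot tell apart from real-world LR images — can be sketched as one adversarial training step. The tiny networks and plain GAN loss below are placeholders for illustration; RDFL's actual generator, discriminator and additional constraints are more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny networks; the paper's architectures are more elaborate.
deg_gen = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 3, 3, stride=2, padding=1))  # HR -> half-size synthetic "LR"
disc = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(deg_gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(hr: torch.Tensor, real_lr: torch.Tensor):
    fake_lr = deg_gen(hr)
    # Discriminator: real-world LR -> 1, synthesised LR -> 0
    d_loss = F.binary_cross_entropy_with_logits(disc(real_lr), torch.ones(real_lr.size(0), 1)) + \
             F.binary_cross_entropy_with_logits(disc(fake_lr.detach()), torch.zeros(hr.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: make synthesised LR indistinguishable from real-world LR
    g_loss = F.binary_cross_entropy_with_logits(disc(fake_lr), torch.ones(hr.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

train_step(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 32, 32))
```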

{"title":"Unsupervised image blind super resolution via real degradation feature learning","authors":"Cheng Yang,&nbsp;Guanming Lu","doi":"10.1049/cvi2.12262","DOIUrl":"10.1049/cvi2.12262","url":null,"abstract":"<p>In recent years, many methods for image super-resolution (SR) have relied on pairs of low-resolution (LR) and high-resolution (HR) images for training, where the degradation process is predefined by bicubic downsampling. While such approaches perform well in standard benchmark tests, they often fail to accurately replicate the complexity of real-world image degradation. To address this challenge, researchers have proposed the use of unpaired image training to implicitly model the degradation process. However, there is a significant domain gap between the real-world LR and the synthetic LR images from HR, which severely degrades the SR performance. A novel unsupervised image-blind super-resolution method that exploits degradation feature-based learning for real-image super-resolution reconstruction (RDFL) is proposed. Their approach learns the degradation process from HR to LR using a generative adversarial network (GAN) and constrains the data distribution of the synthetic LR with real degraded images. The authors then encode the degraded features into a Transformer-based SR network for image super-resolution reconstruction through degradation representation learning. Extensive experiments on both synthetic and real datasets demonstrate the effectiveness and superiority of the RDFL method, which achieves visually pleasing reconstruction results.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"18 4","pages":"485-498"},"PeriodicalIF":1.7,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12262","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139001043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0